Using paradata to investigate food reporting patterns in AMPM
USDA-ARS's Scientific Manuscript database
The USDA Automated Multiple Pass Method (AMPM) Blaise instrument collects 24-hour dietary recalls for the What We Eat In America, National Health and Nutrition Examination Survey. Each year it is used in approximately 10,000 interviews which ask individuals to recall the foods and beverages that we...
Zimmerman, Thea Palmer; Hull, Stephen G; McNutt, Suzanne; Mittl, Beth; Islam, Noemi; Guenther, Patricia M; Thompson, Frances E; Potischman, Nancy A; Subar, Amy F
2009-12-01
The National Cancer Institute (NCI) is developing an automated, self-administered 24-hour dietary recall (ASA24) application to collect and code dietary intake data. The goal of the ASA24 development is to create a web-based dietary interview based on the US Department of Agriculture (USDA) Automated Multiple Pass Method (AMPM) instrument currently used in the National Health and Nutrition Examination Survey (NHANES). The ASA24 food list, detail probes, and portion probes were drawn from the AMPM instrument; portion-size pictures from Baylor College of Medicine's Food Intake Recording Software System (FIRSSt) were added; and the food code/portion code assignments were linked to the USDA Food and Nutrient Database for Dietary Studies (FNDDS). The requirements that the interview be self-administered and fully auto-coded presented several challenges as the AMPM probes and responses were linked with the FNDDS food codes and portion pictures. This linking was accomplished through a "food pathway," or the sequence of steps that leads from a respondent's initial food selection, through the AMPM probes and portion pictures, to the point at which a food code and gram weight portion size are assigned. The ASA24 interview database that accomplishes this contains more than 1,100 food probes and more than 2 million food pathways and will include about 10,000 pictures of individual foods depicting up to 8 portion sizes per food. The ASA24 will make the administration of multiple days of recalls in large-scale studies economical and feasible.
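To make the "food pathway" concept concrete, the sketch below shows one way such a record might be represented in code. The field names and values are hypothetical illustrations only and do not reflect the actual ASA24/FNDDS schema.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical illustration of a "food pathway" record: the chain of probe
# responses that maps an initial food selection to an FNDDS food code and a
# gram-weight portion. All field names and values are invented for illustration.
@dataclass
class FoodPathway:
    initial_food: str                 # respondent's initial selection from the food list
    detail_probe_answers: List[str]   # answers to AMPM-derived detail probes
    portion_picture_id: str           # portion-size picture chosen (FIRSSt-style)
    fndds_food_code: str              # assigned FNDDS food code
    gram_weight: float                # assigned portion size in grams

pathway = FoodPathway(
    initial_food="coffee",
    detail_probe_answers=["brewed", "with 2% milk"],
    portion_picture_id="CUP_08",
    fndds_food_code="12345678",
    gram_weight=240.0,
)
print(pathway.fndds_food_code, pathway.gram_weight)
```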
The radiobrightness thermal inertia measure of soil moisture
NASA Technical Reports Server (NTRS)
England, Anthony W.; Galantowicz, John F.; Schretter, Mindy S.
1992-01-01
Radiobrightness thermal inertia (RTI) is proposed as a method for using day-night differences in satellite-sensed radiobrightness to monitor the moisture of Great Plains soils. Diurnal thermal and radiobrightness models are used to examine the sensitivity of the RTI method. Model predictions favor use of the 37.0 and 85.5 GHz, H-polarized channels of the Special Sensor Microwave/Imager (SSM/I). The model further predicts that overflight times near 2:00 AM/PM would be nearly optimal for RTI, that midnight/noon and 4:00 AM/PM are nearly as good, but that the 6:00 AM/PM overflight times of the current SSM/I are particularly poor. Data from the 37.0 GHz channel of the Scanning Multichannel Microwave Radiometer (SMMR) are used to demonstrate that the method is plausible.
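As a rough illustration of the idea behind the RTI approach (not the paper's actual formulation), the sketch below computes a simple day-night radiobrightness difference, which shrinks as soil moisture, and hence thermal inertia, increases. The index, channel choice, and all numbers are illustrative assumptions.

```python
import numpy as np

# Wetter soils have higher thermal inertia, so their day-night swing in
# H-polarized brightness temperature is smaller. The simple difference index
# and the values below are illustrative assumptions, not the paper's RTI model.
def rti_index(tb_day_K, tb_night_K):
    """Day-night radiobrightness difference; smaller values suggest wetter soil."""
    return tb_day_K - tb_night_K

tb_37h_day = np.array([265.0, 255.0, 245.0])    # hypothetical 2 PM values (dry -> wet)
tb_37h_night = np.array([240.0, 238.0, 236.0])  # hypothetical 2 AM values
print(rti_index(tb_37h_day, tb_37h_night))      # diurnal swing decreases with moisture
```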
Kirkpatrick, Sharon I; Subar, Amy F; Douglass, Deirdre; Zimmerman, Thea P; Thompson, Frances E; Kahle, Lisa L; George, Stephanie M; Dodd, Kevin W; Potischman, Nancy
2014-07-01
The Automated Self-Administered 24-hour Recall (ASA24), a freely available Web-based tool, was developed to enhance the feasibility of collecting high-quality dietary intake data from large samples. The purpose of this study was to assess the criterion validity of ASA24 through a feeding study in which the true intake for 3 meals was known. True intake and plate waste from 3 meals were ascertained for 81 adults by inconspicuously weighing foods and beverages offered at a buffet before and after each participant served him- or herself. Participants were randomly assigned to complete an ASA24 or an interviewer-administered Automated Multiple-Pass Method (AMPM) recall the following day. With the use of linear and Poisson regression analysis, we examined the associations between recall mode and 1) the proportions of items consumed for which a match was reported and that were excluded, 2) the number of intrusions (items reported but not consumed), and 3) differences between energy, nutrient, food group, and portion size estimates based on true and reported intakes. Respondents completing ASA24 reported 80% of items truly consumed compared with 83% in AMPM (P = 0.07). For both ASA24 and AMPM, additions to or ingredients in multicomponent foods and drinks were more frequently omitted than were main foods or drinks. The number of intrusions was higher in ASA24 (P < 0.01). Little evidence of differences by recall mode was found in the gap between true and reported energy, nutrient, and food group intakes or portion sizes. Although the interviewer-administered AMPM performed somewhat better relative to true intakes for matches, exclusions, and intrusions, ASA24 performed well. Given the substantial cost savings that ASA24 offers, it has the potential to make important contributions to research aimed at describing the diets of populations, assessing the effect of interventions on diet, and elucidating diet and health relations. This trial was registered at clinicaltrials.gov as NCT00978406. © 2014 American Society for Nutrition.
Estimation of the passing of four consecutive hours.
NASA Technical Reports Server (NTRS)
Webb, W. B.; Ross, W.
1972-01-01
In the AM and PM (9 to 1) males and females gave estimates of the hourly passing of time for 4 hr. There were no differences between sexes or AM/PM estimates. The group was less than 1 min off after an hour and 12 min off after 4 hr. There was a wide range of individual differences. One-fourth of the subjects were within an error of 10 min after 4 hr whereas another one-fourth were off more than 50 min. The accuracy of estimates was about equal to accuracy of awakening from sleep to randomly chosen awakening times.
Tirosh, A; Lodish, M B; Papadakis, G Z; Lyssikatos, C; Belyavskaya, E; Stratakis, C A
2016-09-01
Cortisol diurnal variation may be abnormal among patients with endogenous Cushing syndrome (CS). The study objective was to compare the plasma cortisol AM/PM ratios between different etiologies of CS. This is a retrospective cohort study, conducted at a clinical research center. Adult patients with CS who underwent adrenalectomy or trans-sphenoidal surgery (n=105) were divided into those with a pathologically confirmed diagnosis of Cushing disease (n=21) and those with primary adrenal CS, including unilateral adrenal adenoma (n=28), adrenocortical hyperplasia (n=45), and primary pigmented nodular adrenocortical disease (PPNAD, n=11). Diurnal plasma cortisol measurements were obtained at 11:30 PM and midnight and at 7:30 and 8:00 AM. The ratios between the mean morning levels and mean late-night levels were calculated. Mean plasma cortisol AM/PM ratio was lower among CD patients compared to those with primary adrenal CS (1.4±0.6 vs. 2.3±1.5, p<0.001). An AM/PM cortisol ratio ≥2.0 among patients with unsuppressed ACTH (>15 pg/ml) excludes CD with an 85.0% specificity and a negative predictive value (NPV) of 90.9%. Among patients with primary adrenal CS, an AM/PM cortisol ratio ≥1.2 had specificity and NPV of 100% for ruling out a diagnosis of PPNAD. Plasma cortisol AM/PM ratios are lower among patients with CD compared with primary adrenal CS, and may aid in the differential diagnosis of endogenous hypercortisolemia. © Georg Thieme Verlag KG Stuttgart · New York.
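A minimal sketch of the ratio computation and the reported decision thresholds described above, using hypothetical cortisol values:

```python
import numpy as np

# AM/PM ratio: mean of the morning draws (7:30 and 8:00 AM) divided by the
# mean of the late-night draws (11:30 PM and midnight). Values are hypothetical.
def am_pm_ratio(am_values, pm_values):
    return np.mean(am_values) / np.mean(pm_values)

ratio = am_pm_ratio(am_values=[18.0, 20.0], pm_values=[8.0, 10.0])  # e.g., µg/dL
print(f"AM/PM ratio = {ratio:.2f}")

# Reported cut-offs: with unsuppressed ACTH (>15 pg/ml), a ratio >= 2.0 argues
# against Cushing disease; among primary adrenal CS, a ratio >= 1.2 argues
# against PPNAD.
if ratio >= 2.0:
    print("Cushing disease unlikely (given unsuppressed ACTH)")
```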
NASA Astrophysics Data System (ADS)
Alves, M.; Hanson, D. R.; Grieves, C.; Ortega, J. V.
2015-12-01
Amines and ammonia are an important group of molecules that can greatly affect atmospheric particle formation, which in turn can influence cloud formation and the scattering of thermal and solar radiation, and as a result human health and ecosystems. In this study, an Ambient Pressure Mass Spectrometer (AmPMS) that is selective and sensitive to molecules with a high proton affinity, such as amines, was coupled with a newly built corona discharge ion source. AmPMS was used to monitor many different nitrogenous compounds found in an urban atmosphere (July 2015, Minneapolis), down to the single-digit pmol/mol level. Simultaneously, a proton transfer mass spectrometer also sampled the atmosphere through an inlet within 20 m of the AmPMS inlet. In another set of studies, a similar AmPMS was attached to a large Teflon film chamber at the Atmospheric Chemistry Division at NCAR (August 2015, Boulder). Exploratory studies are planned on the sticking of amines to the chamber walls as well as on oxidizing the amines and monitoring the products. Depending on the success of these studies, results will be presented on the reversibility of amine partitioning and mass balance for these species in the chamber.
Tirosh, Amit; Lodish, Maya B; Lyssikatos, Charalampos; Belyavskaya, Elena; Papadakis, Georgios Z; Stratakis, Constantine A
2017-01-01
The utility of circadian cortisol variation in estimating the degree of hypercortisolemia in different forms of endogenous Cushing syndrome (CS) has not yet been evaluated in children. In a retrospective cohort study, children who underwent surgery due to CS (n = 115) were divided into those with a pituitary adenoma (Cushing disease) (n = 88), primary adrenal CS (n = 21), or ectopic adrenocorticotropin- or corticotropin-releasing hormone (ACTH-/CRH)-secreting tumors (n = 6). Circadian plasma cortisol measurements were obtained at 11:30 p.m. and at midnight, and at 7:30 and 8:00 a.m. The ratios between the morning and late-night concentrations were calculated. Plasma cortisol early-morning and midnight (AM/PM) ratios negatively correlated with 24-h urinary free cortisol (UFC) collections among the full study population and in each of the individual etiologies. Plasma ACTH concentrations positively correlated with plasma cortisol AM/PM ratios among patients with ACTH-independent CS. Finally, patients with primary pigmented nodular adrenocortical disease showed no correlation between UFC collections and the plasma cortisol AM/PM ratio, in contrast with other etiologies of primary adrenal CS, which showed a strong negative correlation between them. Our study shows the association between the plasma cortisol AM/PM ratio and the degree of hypercortisolemia in children with CS. © 2017 S. Karger AG, Basel.
Mattiauda, D A; Gibb, M J; Carriquiry, M; Tamminga, S; Chilibroste, P
2018-05-07
The timing at which supplements are provided in grazing systems can affect dry matter (DM) intake and productive performance. The objective of this study was to evaluate the effect of timing of corn silage supplementation on ingestive behaviour, DM intake, milk yield and composition in grazing dairy cows. In total, 33 Holstein dairy cows in a randomized block design grazed on a second-year mixed grass-legume pasture from 0900 to 1500 h and received 2.7 kg of a commercial supplement at each milking. Paddock sizes were adjusted to provide a daily herbage allowance of 15 kg DM/cow determined at ground level. The three treatments each provided 3.8 kg DM/day of corn silage, offered either in a single meal at 0800 h (Treatment AM), equally distributed between two meals at 0800 and 1700 h (Treatment AM-PM), or in a single meal at 1700 h (Treatment PM). The experiment was carried out during the late autumn and early winter period, with 1 week of adaptation and 6 weeks of measurements. There were no differences between treatments in milk yield, but 4% fat-corrected milk yield tended to be greater in AM-PM than in AM cows, which did not differ from PM (23.7, 25.3 and 24.6±0.84 kg/day for AM, AM-PM and PM, respectively). Fat percentage and yield were greater for AM-PM than for AM cows and intermediate for PM cows (3.89 v. 3.66±0.072% and 1.00 v. 0.92±0.035 kg/day, respectively). Offering corn silage in two meals had an effect on herbage DM intake, which was greater for AM-PM than AM cows and intermediate in PM cows (8.5, 11.0 and 10.3±0.68 kg/day for AM, AM-PM and PM, respectively). During the 6-h period at pasture, the overall proportion of observations on which cows were grazing tended to differ between treatments, and a clear grazing pattern across the grazing session (1-h observation periods) was identified. During the time at pasture, the proportion of observations during which cows ruminated was positively correlated with the DM intake of corn silage immediately before turn-out to pasture. The treatment effects on herbage DM intake did not sufficiently explain the differences in productive performance. This suggests that the timing of corn silage supplementation affected rumen kinetics and likewise the appearance of hunger and satiety signals, as indicated by observed changes in temporal patterns of grazing and ruminating activities.
Constant light disrupts the circadian rhythm of steroidogenic proteins in the rat adrenal gland.
Park, Shin Y; Walker, Jamie J; Johnson, Nicholas W; Zhao, Zidong; Lightman, Stafford L; Spiga, Francesca
2013-05-22
The circadian rhythm of corticosterone (CORT) secretion from the adrenal cortex is regulated by the suprachiasmatic nucleus (SCN), which is entrained to the light-dark cycle. Since the circadian CORT rhythm is associated with circadian expression of the steroidogenic acute regulatory (StAR) protein, we investigated the 24h pattern of hormonal secretion (ACTH and CORT), steroidogenic gene expression (StAR, SF-1, DAX1 and Nurr77) and the expression of genes involved in ACTH signalling (MC2R and MRAP) in rats entrained to a normal light-dark cycle. We found that circadian changes in ACTH and CORT were associated with the circadian expression of all gene targets; with SF-1, Nurr77 and MRAP peaking in the evening, and DAX1 and MC2R peaking in the morning. Since disruption of normal SCN activity by exposure to constant light abolishes the circadian rhythm of CORT in the rat, we also investigated whether the AM-PM variation of our target genes was also disrupted in rats exposed to constant light conditions for 5 weeks. We found that the disruption of the AM-PM variation of ACTH and CORT secretion in rats exposed to constant light was accompanied by a loss of AM-PM variation in StAR, SF-1 and DAX1, and a reversed AM-PM variation in Nurr77, MC2R and MRAP. Our data suggest that circadian expression of StAR is regulated by the circadian expression of nuclear receptors and proteins involved in both ACTH signalling and StAR transcription. We propose that ACTH regulates the secretion of CORT via the circadian control of steroidogenic gene pathways that become dysregulated under the influence of constant light. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Effect of reproductive methods and GnRH administration on long-term protocol in Santa Ines ewes.
Biehl, Marcos V; Ferraz Junior, Marcos V C; Ferreira, Evandro M; Polizel, Daniel M; Miszura, Alexandre A; Barroso, José P R; Oliveira, Gabriela B; Bertoloni, Analisa V; Pires, Alexandre V
2017-08-01
This study aimed to determine whether the reproductive performance of ewes submitted to laparoscopic timed artificial insemination (TAI) would be similar to that achieved with the ante meridiem (AM)/post meridiem (PM) rule and assisted natural mating (NM), and whether GnRH may enhance the pregnancy rate in TAI. In experiment I, 191 non-lactating ewes were synchronized, then TAI was performed either 48 h after progesterone (P4) removal (TAI-48 h) or 12 h after estrus detection (AM/PM); in addition, some ewes were submitted to NM as the control treatment. In experiment II, 247 non-lactating ewes were allocated to five treatments: a control (no GnRH in the protocol) and four treatments arranged in a 2 × 2 factorial design. The factors were time and dose of GnRH: ewes received either 10 μg (TAI-10 μg-36 h) or 25 μg of GnRH (TAI-25 μg-36 h) 36 h after P4 removal, or either 10 μg (TAI-10 μg-48 h) or 25 μg of GnRH (TAI-25 μg-48 h) at the time of insemination, 48 h after P4 removal. In experiment I, the pregnancy rate in TAI-48 h was lower (P = 0.03) than in AM/PM and NM. Moreover, the probability of pregnancy in TAI-48 h was higher (P = 0.06) in ewes detected in estrus early. In experiment II, the use of GnRH in TAI protocols increased (P < 0.01) the pregnancy rate at synchronization, and the TAI-25 μg-48 h and TAI-10 μg-36 h treatments increased (P = 0.02) the pregnancy rate compared to TAI-10 μg-48 h. We conclude that TAI decreased the pregnancy rate compared to NM and AM/PM, which may be improved by the use of GnRH in TAI to synchronize ovulation.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-26
...-Regulatory Organizations; Chicago Board Options Exchange, Incorporated; Order Granting Approval of Proposed Rule Change To List and Trade CBOE S&P 500 AM/PM Basis Options July 20, 2012. I. Introduction On May 23, 2012, the Chicago Board Options Exchange, Incorporated (``Exchange'' or ``CBOE'') filed with the...
Dai, Yitang; Cen, Qizhuang; Wang, Lei; Zhou, Yue; Yin, Feifei; Dai, Jian; Li, Jianqiang; Xu, Kun
2015-12-14
Extraction of a microwave component from a low-time-jitter femtosecond pulse train is attractive for the generation of spectrally pure microwave signals. In order to avoid the transfer of optical amplitude noise to microwave phase noise (AM-PM), we propose to down-convert the target component to an intermediate frequency (IF) before the opto-electronic conversion. Due to the much lower carrier frequency, the AM-PM is greatly suppressed. The target is then recovered by up-conversion with the same microwave local oscillation (LO). As long as the time delay of the second LO matches that of the IF carrier, the phase noise of the LO has no impact on the extraction process. The residual noise of the proposed extraction is analyzed in theory and is experimentally demonstrated to be around -155 dBc/Hz on average at offset frequencies above 1 kHz when a 10-GHz tone is extracted from a home-made femtosecond fiber laser. Widely tunable extraction from 1 GHz to 10 GHz is also reported.
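The delay-matching condition stated above can be sketched as follows (notation introduced here, not taken from the abstract). If the extracted comb component carries phase $\phi_m(t)$ and the LO carries phase noise $\phi_\mathrm{LO}(t)$, the IF signal after down-conversion and photodetection carries

$$\phi_\mathrm{IF}(t) = \phi_m(t) - \phi_\mathrm{LO}(t),$$

and up-converting the delayed IF with the same LO gives

$$\phi_\mathrm{out}(t) = \phi_\mathrm{IF}(t-\tau_\mathrm{IF}) + \phi_\mathrm{LO}(t-\tau_\mathrm{LO}) = \phi_m(t-\tau_\mathrm{IF}) + \big[\phi_\mathrm{LO}(t-\tau_\mathrm{LO}) - \phi_\mathrm{LO}(t-\tau_\mathrm{IF})\big],$$

so the LO phase-noise term cancels when the two delays are matched, $\tau_\mathrm{LO} = \tau_\mathrm{IF}$.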
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-06
...-Regulatory Organizations; Chicago Board Options Exchange, Incorporated; Notice of Filing of Proposed Rule Change To List and Trade CBOE S&P 500 AM/PM Basis Options May 31, 2012. Pursuant to Section 19(b)(1) of... given that on May 23, 2012, the Chicago Board Options Exchange, Incorporated (``Exchange'' or ``CBOE...
Net field-aligned currents observed by Triad
NASA Technical Reports Server (NTRS)
Sugiura, M.; Potemra, T. A.
1975-01-01
From the Triad magnetometer observation of a step-like level shift in the east-west component of the magnetic field at 800 km altitude, the existence of a net current flowing into or away from the ionosphere in a current layer was inferred. The current direction is toward the ionosphere on the morning side and away from it on the afternoon side. The field aligned currents observed by Triad are considered as being an important element in the electro-dynamical coupling between the distant magnetosphere and the ionosphere. The current density integrated over the thickness of the layer increases with increasing magnetic activity, but the relation between the current density and Kp in individual cases is not a simple linear relation. An extrapolation of the statistical relation to Kp = 0 indicates existence of a sheet current of order 0.1 amp/m even at extremely quiet times. During periods of higher magnetic activity an integrated current of approximately 1 amp/m and average current density of order 0.000001 amp/sq m are observed. The location and the latitudinal width of the field aligned current layer carrying the net current very roughly agree with those of the region of high electron intensities in the trapping boundary.
Shiroma, Calvin Y
2016-01-01
As of August 2014, the Joint POW/MIA Accounting Command has identified the remains of 1980 previously unknown U.S. service members; 280 were from the Korean War. To determine the accuracy and completeness of the available antemortem (AM) dental records, a review of the AM/postmortem (AM/PM) dental record comparisons from 233 Forensic Odontology Reports written in support of remains identified from the Korean War was performed. Seventy-two AM/PM comparisons resulted in exact dental chartings, while 161 contained explainable discrepancies. Explainable discrepancies include undocumented treatment (103), third molars incorrectly charted as missing (82), differing opinions on specific molars present/missing (20), and erroneous treatment documentation and/or misidentification of teeth present/missing other than molars (22). Reassessment has revealed varying levels of completeness in our available AM dental records and the need to thoroughly review our computerized comparisons, to adjust our comparisons to include molar pattern variations and third molars, and to update our database comparison program. Published 2015. This article is a U.S. Government work and is in the public domain in the U.S.A.
BPMs with Precise Alignment for TTF2
NASA Astrophysics Data System (ADS)
Noelle, D.; Priebe, G.; Wendt, M.; Werner, M.
2004-11-01
Design and technology of the new, standardized BPM system for the warm sections of the TESLA Test Facility phase II (TTF2) are presented. Stripline and button BPM pickups are read out with an upgraded version of the AM/PM BPM electronics of TTF1. The stripline BPMs are fixed inside the quadrupole magnets. A stretched-wire measurement was used to calibrate the electrical axis of each BPM with respect to the magnetic axis of the quadrupole.
Resolution Studies at Beam Position Monitors at the FLASH Facility at DESY
NASA Astrophysics Data System (ADS)
Baboi, N.; Lund-Nielsen, J.; Noelle, D.; Riesch, W.; Traber, T.; Kruse, J.; Wendt, M.
2006-11-01
More than 60 beam position monitors (BPMs) are installed along about 350 m of beamline of the Free Electron LASer in Hamburg (FLASH) at DESY. The room-temperature part of the accelerator is equipped mainly with stripline position monitors. In the accelerating cryo-modules there are cavity and re-entrant cavity BPMs, which will not be discussed here. In the undulator part of the machine, button BPMs are used. This area requires a single-bunch resolution of 10 μm. The electronics is based on the AM/PM normalization principle and is externally triggered. Single-bunch position is measured. This paper presents the methods used to determine the resolution of the BPMs. The results, based on correlations between different BPMs along the machine, are compared to noise measurements in the RF lab. The performance and difficulties with the BPM design and the current electronics, as well as its development, are discussed.
Fulgoni, Victor L; Dreher, Mark; Davenport, Adrienne J
2013-01-02
Avocados contain monounsaturated fatty acids (MUFAs), dietary fiber, essential nutrients and phytochemicals. However, no epidemiologic data exist on their effects on diet quality, weight management and other metabolic disease risk factors. The objective of this research was to investigate the relationships between avocado consumption and overall diet quality, energy and nutrient intakes, physiological indicators of health, and risk of metabolic syndrome. Avocado consumption and nutrition data were based on 24-hour dietary recalls collected by trained NHANES interviewers using the USDA Automated Multiple Pass Method (AMPM). Physiological data were collected from physical examinations conducted in NHANES Mobile Examination Centers. Diet quality was calculated using the USDA's Healthy Eating Index-2005. Subjects included 17,567 US adults ≥ 19 years of age (49% female), including 347 avocado consumers (50% female), examined in NHANES 2001-2008. Least square means, standard errors, and ANOVA were determined using appropriate sample weights, with adjustments for age, gender, ethnicity, and other covariates depending on the dependent variable of interest. Avocado consumers had significantly higher intakes of vegetables (p<0.05); fruit, diet quality, total fat, monounsaturated and polyunsaturated fats, dietary fiber, vitamins E and K, magnesium, and potassium (p<0.0001); vitamin K (p=0.0013); and lower intakes of added sugars (p<0.0001). No significant differences were seen in calorie or sodium intakes. Body weight, BMI, and waist circumference were significantly lower (p<0.01), and HDL-C was higher (p<0.01) in avocado consumers. The odds ratio for metabolic syndrome was 50% (95% CI: 0.32-0.72) lower in avocado consumers vs. non-consumers. Avocado consumption is associated with improved overall diet quality, nutrient intake, and reduced risk of metabolic syndrome. Dietitians should be aware of the beneficial associations between avocado intake, diet and health when making dietary recommendations.
Kang, Wanli; Wang, Pengxiang; Fan, Haiming; Yang, Hongbin; Dai, Caili; Yin, Xia; Zhao, Yilu; Guo, Shujun
2017-02-08
Responsive wormlike micelles are very useful in a number of applications, yet it is still challenging to create dramatic viscosity changes in wormlike micellar systems. Here we developed a pH-responsive wormlike micellar system based on a noncovalently constructed surfactant, which is formed by the complexation of N-erucamidopropyl-N,N-dimethylamine (UC22AMPM) and citric acid at a molar ratio of 3:1 (EACA). The phase behavior, aggregate microstructure and viscoelasticity of EACA solutions were investigated by macroscopic appearance observation, rheological and cryo-TEM measurements. It was found that the phase behavior of EACA solutions undergoes a transition from transparent viscoelastic fluids to opalescent solutions and then phase separation with white floaters upon increasing the pH. Upon increasing the pH from 2.03 to 6.17, the viscosity of wormlike micelles in the transparent solutions continuously increased and reached ∼683 000 mPa s at pH 6.17. As the pH was adjusted to 7.31, the opalescent solution showed water-like flow behaviour and the zero-shear viscosity η0 rapidly declined to ∼1 mPa s. Thus, dramatic viscosity changes of about six orders of magnitude can be triggered by varying the pH without any deterioration of the EACA system. This drastic variation in rheological behavior is attributed to the pH-dependent interaction between UC22AMPM and citric acid. Furthermore, the dependence of the rheological behavior of EACA solutions on concentration and temperature was also studied to assist in obtaining the desired pH-responsive viscosity changes.
Grubbe, R E; Lumry, W R; Anolik, R
2009-01-01
Antihistamines are first-line therapy for the treatment of seasonal allergic rhinitis (AR); however, an oral decongestant is often added to improve control of nasal congestion. The objective was to examine whether a tablet combining the nonsedating antihistamine desloratadine and the decongestant pseudoephedrine was more effective than either drug administered alone in reducing the symptoms of seasonal AR, including nasal congestion. In this multicenter, double-blind study, participants (N = 598) with symptomatic seasonal AR received either a combination tablet of desloratadine 2.5 mg/pseudoephedrine 120 mg (DL/PSE) bid, desloratadine 5.0 mg qd plus a placebo tablet, or pseudoephedrine 120 mg bid. Participants assessed their symptom severity twice daily over the 2-week treatment period. The primary variable used to assess the effects of the antihistamine component, the mean change from baseline in average AM/PM reflective total symptom score (TSS) excluding nasal congestion, was significantly greater (-6.54) for DL/PSE than for desloratadine (-5.09) or pseudoephedrine (-5.07) monotherapy (P < .001 for both). The primary variable used to assess the effects of the decongestant component, the mean change from baseline in average AM/PM reflective nasal congestion score, was also significantly greater (-0.93) for DL/PSE than for desloratadine (-0.66) or pseudoephedrine (-0.75) (P < .001 vs desloratadine; P = .006 vs pseudoephedrine). This study demonstrated that DL/PSE therapy was more effective in reducing symptoms of seasonal AR, including nasal congestion, than the individual components administered alone, supporting the use of this combination in participants with symptomatic seasonal AR and prominent nasal congestion.
Time-Dependent Traveling Wave Tube Model for Intersymbol Interference Investigations
NASA Technical Reports Server (NTRS)
Kory, Carol L.; Andro, Monty; Downey, Alan (Technical Monitor)
2001-01-01
For the first time, a computational model has been used to provide a direct description of the effects of the traveling wave tube (TWT) on modulated digital signals. The TWT model comprehensively takes into account the effects of frequency dependent AM/AM and AM/PM conversion, gain and phase ripple; drive-induced oscillations; harmonic generation; intermodulation products; and backward waves. Thus, signal integrity can be investigated in the presence of these sources of potential distortion as a function of the physical geometry of the high power amplifier and the operational digital signal. This method promises superior predictive fidelity compared to methods using TWT models based on swept-amplitude and/or swept-frequency data. The fully three-dimensional (3D), time-dependent, TWT interaction model using the electromagnetic code MAFIA is presented. This model is used to investigate assumptions made in TWT black-box models used in communication system level simulations. In addition, digital signal performance, including intersymbol interference (ISI), is compared using direct data input into the MAFIA model and using the system level analysis tool, SPW.
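For readers unfamiliar with the black-box models referred to above, the sketch below applies one common memoryless form, the Saleh AM/AM and AM/PM model, to a complex baseband signal. The abstract does not name a specific black-box form, and the coefficients used here are the standard illustrative ones from the literature, so this is an assumed example rather than the model used in the paper.

```python
import numpy as np

# Memoryless TWTA "black-box" model (Saleh form): output amplitude (AM/AM) and
# added phase shift (AM/PM) as functions of the instantaneous input envelope r.
# Coefficients are the commonly quoted illustrative values, not fitted to any
# particular tube.
def saleh_am_am(r, alpha_a=2.1587, beta_a=1.1517):
    return alpha_a * r / (1.0 + beta_a * r**2)            # output amplitude

def saleh_am_pm(r, alpha_p=4.0033, beta_p=9.1040):
    return alpha_p * r**2 / (1.0 + beta_p * r**2)          # phase shift (radians)

def twta_blackbox(x):
    """Apply AM/AM and AM/PM distortion to a complex baseband signal x."""
    r = np.abs(x)
    return saleh_am_am(r) * np.exp(1j * (np.angle(x) + saleh_am_pm(r)))

# Example: distort a few 16-QAM symbols driven fairly hard toward saturation.
symbols = (np.array([1 + 1j, 3 - 1j, -3 + 3j, -1 - 3j]) / np.sqrt(10)) * 0.8
print(twta_blackbox(symbols))
```

A memoryless model of this kind captures none of the frequency-dependent reflections, ripple, or backward waves that the physics-based MAFIA model resolves, which is precisely the gap the abstract describes.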
Intersymbol Interference Investigations Using a 3D Time-Dependent Traveling Wave Tube Model
NASA Technical Reports Server (NTRS)
Kory, Carol L.; Andro, Monty; Downey, Alan (Technical Monitor)
2001-01-01
For the first time, a physics based computational model has been used to provide a direct description of the effects of the TWT (Traveling Wave Tube) on modulated digital signals. The TWT model comprehensively takes into account the effects of frequency dependent AM/AM and AM/PM conversion; gain and phase ripple; drive-induced oscillations; harmonic generation; intermodulation products; and backward waves. Thus, signal integrity can be investigated in the presence of these sources of potential distortion as a function of the physical geometry of the high power amplifier and the operational digital signal. This method promises superior predictive fidelity compared to methods using TWT models based on swept amplitude and/or swept frequency data. The fully three-dimensional (3D), time-dependent, TWT interaction model using the electromagnetic code MAFIA is presented. This model is used to investigate assumptions made in TWT black box models used in communication system level simulations. In addition, digital signal performance, including intersymbol interference (ISI), is compared using direct data input into the MAFIA model and using the system level analysis tool, SPW (Signal Processing Worksystem).
Aldasouqi, Saleh A; Reed, Amy J
2014-11-01
The objective was to raise awareness about the importance of ensuring that insulin pump internal clocks are set correctly at all times. This is a very important safety issue because commercially available insulin pumps are neither GPS-enabled (though this is controversial) nor equipped with automatically adjusting internal clocks. Special attention is paid to how basal and bolus dose errors can be introduced by daylight saving time changes, travel across time zones, and am-pm clock errors. Correct setting of the insulin pump internal clock is crucial for appropriate insulin delivery. A comprehensive literature review is provided, as are illustrative cases. Incorrect settings can potentially result in incorrect insulin delivery, with potentially harmful consequences if too much or too little insulin is delivered. Daylight saving time changes may not significantly affect basal insulin delivery, given the triviality of the time difference. However, bolus insulin doses can be dramatically affected. Such problems may occur when pump wearers have large variations in their insulin-to-carbohydrate ratio, especially if they forget to change their pump clock in the spring. More worrisome than the daylight saving time change is the am-pm clock setting. If this setting is incorrect, both basal rates and bolus doses will be affected. Appropriate insulin delivery through insulin pumps requires correct correlation between dose settings and internal clock time settings. Because insulin pumps are not GPS-enabled or automatically time-adjusting, extra caution should be practiced by patients to ensure correct time settings at all times. Clinicians and diabetes educators should verify the date/time of insulin pumps during patients' visits and should remind their patients to always verify these settings. © 2014 Diabetes Technology Society.
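A hypothetical arithmetic illustration of the am-pm clock hazard described above, assuming time-of-day-dependent insulin-to-carbohydrate (I:C) ratios; the ratios and meal size are invented for illustration:

```python
# Hypothetical I:C ratios (grams of carbohydrate covered by one unit of insulin).
ic_ratio = {"breakfast": 8.0, "dinner": 15.0}

carbs_g = 60.0
correct_dose = carbs_g / ic_ratio["dinner"]     # pump clock correctly reads PM
wrong_dose = carbs_g / ic_ratio["breakfast"]    # pump clock 12 h off, applies the AM ratio

print(f"intended dose: {correct_dose:.1f} U, delivered dose: {wrong_dose:.1f} U")
# 4.0 U intended vs 7.5 U delivered: a clinically significant overdose caused by
# nothing more than an AM/PM setting error.
```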
Higher Order Modulation Intersymbol Interference Caused by Traveling-wave Tube Amplifiers
NASA Technical Reports Server (NTRS)
Kory, Carol L.; Andro, Monty; Williams, W. D. (Technical Monitor)
2002-01-01
For the first time, a time-dependent, physics-based computational model has been used to provide a direct description of the effects of the traveling wave tube amplifier (TWTA) on modulated digital signals. The TWT model comprehensively takes into account the effects of frequency dependent AM/AM and AM/PM conversion; gain and phase ripple; drive-induced oscillations; harmonic generation; intermodulation products; and backward waves. Thus, signal integrity can be investigated in the presence of these sources of potential distortion as a function of the physical geometry and operating characteristics of the high power amplifier and the operational digital signal. This method promises superior predictive fidelity compared to methods using TWT models based on swept-amplitude and/or swept-frequency data. First, the TWT model using the three dimensional (3D) electromagnetic code MAFIA is presented. Then, this comprehensive model is used to investigate approximations made in conventional TWT black-box models used in communication system level simulations. To quantitatively demonstrate the effects these approximations have on digital signal performance predictions, including intersymbol interference (ISI), the MAFIA results are compared to the system level analysis tool, Signal Processing Workstation (SPW), using high order modulation schemes including 16- and 64-QAM.
Intersymbol Interference Investigations Using a 3D Time-Dependent Traveling Wave Tube Model
NASA Technical Reports Server (NTRS)
Kory, Carol L.; Andro, Monty
2002-01-01
For the first time, a time-dependent, physics-based computational model has been used to provide a direct description of the effects of the traveling wave tube amplifier (TWTA) on modulated digital signals. The TWT model comprehensively takes into account the effects of frequency dependent AM/AM and AM/PM conversion; gain and phase ripple; drive-induced oscillations; harmonic generation; intermodulation products; and backward waves. Thus, signal integrity can be investigated in the presence of these sources of potential distortion as a function of the physical geometry and operating characteristics of the high power amplifier and the operational digital signal. This method promises superior predictive fidelity compared to methods using TWT models based on swept-amplitude and/or swept-frequency data. First, the TWT model using the three dimensional (3D) electromagnetic code MAFIA is presented. Then, this comprehensive model is used to investigate approximations made in conventional TWT black-box models used in communication system level simulations. To quantitatively demonstrate the effects these approximations have on digital signal performance predictions, including intersymbol interference (ISI), the MAFIA results are compared to the system level analysis tool, Signal Processing Workstation (SPW), using high order modulation schemes including 16- and 64-QAM.
AmPMS: Detection of Ammonia and Amines in Particle Formation and Growth Experiments
NASA Astrophysics Data System (ADS)
Hanson, D. R.; McMurry, P. H.; Jiang, J.; Huey, L. G.; Tanner, D.
2010-12-01
Ammonia and amine compounds in the atmosphere can be a significant component of atmospheric aerosol. Theoretical work shows that these compounds have a potentially large affinity for the particulate phase if strong acids are present. The co-accumulation of amines/ammonia with acids on atmospheric particles can be important for the growth of atmospheric particles. Also, the role of nitrogen bases in nucleation is believed to be important. While proton transfer mass spectrometry (MS) has been deployed to detect a wide variety of volatile organic compounds in the atmosphere using H3O+ as the ionizing agent, such instruments are generally operated at reduced pressures of 0.002 to 0.01 atm, which can limit the ability to detect pptv levels of amines. Use of this technique at atmospheric pressure can increase its sensitivity, as demonstrated by the efficient detection of ammonia via proton transfer at ambient pressures and relative humidities in the lab [1]. An instrument based on this system was deployed in the field (NCCN 2009, Atlanta) and was recently connected to a chamber at the University of Minnesota where nucleation experiments involving sulfuric acid and amines were carried out. This instrument, the Ambient pressure Proton transfer Mass Spectrometer (AmP-MS), combines the specificity of chemical ionization with the high sensitivity of atmospheric pressure ionization techniques. It works for species that have high proton affinities, and it is relatively insensitive to highly abundant VOCs such as methanol, acetaldehyde, acetone, etc. Water-proton clusters are electrostatically drawn across a flow of analyte gas, resulting in ion-molecule reaction times of ~0.5-1 ms, and sensitivities of a few Hz per pptv are possible. In the laboratory, ion-molecule reactions of water proton and water ammonium clusters with various amine species are facile [2], and Sunner et al. [3] showed that species with high gas-phase basicities, and thus high PAs, also react fast with highly hydrated H3O+ and NH4+ ions. Amines have large proton affinities. The basics of the AmP-MS construction and operation will be presented, as well as data from its deployment in the field and from the laboratory chamber experiments. Focus will be on the veracity of the technique and on correlations of measurements with environmental conditions, particle size distributions, and sulfuric acid cluster measurements. Candidates for important roles in nucleation will be discussed. [1] Hanson, D.R., E. Kosciuch, The NH3 mass accommodation coefficient for uptake onto sulfuric acid solutions, J. Phys. Chem. A, 2003, 107, 2199-2208. [2] Viggiano, A. A., Dale, F., and Paulson, J. F.: Proton transfer reactions of H+(H2O)n=2-11 with methanol, ammonia, pyridine, acetonitrile and acetone, J. Chem. Phys., 88, 2469-2477, 1988. [3] Sunner J., G. Nicol, and P. Kebarle, Factors Determining Relative Sensitivity of Analytes in Positive Mode Atmospheric Pressure Ionization Mass Spectrometry, Anal. Chem. 1988, 60, 1300-1307.
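A back-of-the-envelope illustration of what a sensitivity of "a few Hz per pptv" implies for data reduction; all numbers below are assumed, not taken from the abstract:

```python
# mixing ratio = (analyte count rate - background) / sensitivity
sensitivity_hz_per_pptv = 3.0     # assumed instrument sensitivity
signal_hz = 45.0                  # ion count rate at the protonated amine mass
background_hz = 6.0               # count rate with a zero-air / filtered inlet

mixing_ratio_pptv = (signal_hz - background_hz) / sensitivity_hz_per_pptv
print(f"~{mixing_ratio_pptv:.0f} pptv")   # ~13 pptv
```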
Traveling-Wave Tube Amplifier Model to Predict High-Order Modulation Intersymbol Interference
NASA Technical Reports Server (NTRS)
Kory, Carol L.; Andro, Monty; Williams, W. D. (Technical Monitor)
2001-01-01
Demands for increased data rates in satellite communications necessitate higher order modulation schemes, larger system bandwidth, and minimum distortion of the modulated signal as it is passed through the traveling wave tube amplifier (TWTA). One type of distortion that the TWTA contributes to is intersymbol interference (ISI), and this becomes particularly disruptive with wide-band, complex modulation schemes. It is suspected that in addition to the dispersion of the TWT, frequency dependent reflections due to mismatches within the TWT are a significant contributor to ISI. To experimentally investigate the effect of these mismatches within the physical TWT on ISI would be prohibitively expensive, as it would require manufacturing numerous amplifiers in addition to the acquisition of the required digital hardware. In an attempt to develop a more accurate model to correlate ISI with the TWTA and the operational signal, a fully three-dimensional (3D), time-dependent, TWT interaction model has been developed using the electromagnetic particle-in-cell (PIC) code MAFIA (solution of Maxwell's equations by the Finite-Integration-Algorithm). The model includes a user defined slow-wave circuit with a spatially tapered region of loss to implement a sever, and spatially varied geometry (such as helical pitch) to implement a phase velocity taper. The model also includes user defined input/output coupling and an electron beam contained by solenoidal, electrostatic, or periodic permanent magnet (PPM) focusing, allowing standard or novel TWTs to be investigated. This model comprehensively takes into account the effects of frequency dependent nonlinear distortions (AM/AM and AM/PM); gain ripple due to frequency dependent reflections at the input/output coupling, severs, and mismatches from dynamic pitch variations; drive induced oscillations; harmonic generation; intermodulation products; and backward waves.
Sforza, Chiarella; De Menezes, Marcio; Bresciani, Elena; Cerón-Zapata, Ana M; López-Palacio, Ana M; Rodriguez-Ardila, Myriam J; Berrio-Gutiérrez, Lina M
2012-07-01
To assess a three-dimensional stereophotogrammetric method for palatal cast digitization in children with unilateral cleft lip and palate. As part of a collaboration between the University of Milan (Italy) and the University CES of Medellin (Colombia), 96 palatal cast models obtained from neonatal patients with unilateral cleft lip and palate were digitized using a three-dimensional stereophotogrammetric imaging system. Three-dimensional measurements (cleft width, depth, length) were made separately for the longer and shorter cleft segments on the digital dental cast surface between previously marked landmarks. Seven linear measurements were computed. Systematic and random errors between operators' tracings, and accuracy on geometric objects of known size, were calculated. In addition, mean measurements from three-dimensional stereophotographs were compared statistically with those from direct anthropometry. The three-dimensional method showed a low accuracy error (<0.9%) when measuring geometric objects. No systematic errors between operators' measurements were found (p > .05). Statistically significant differences (p < .05) were noted between methods (caliper versus stereophotogrammetry) for almost all distances analyzed, with mean absolute difference values ranging between 0.22 and 3.41 mm. Therefore, rates for the technical error of measurement and relative error magnitude were scored as moderate for the Ag-Am distance and poor for the Ag-Pg and Am-Pm distances. Generally, caliper values were larger than three-dimensional stereophotogrammetric values. Three-dimensional stereophotogrammetric systems have some advantages over direct anthropometry, and the method could therefore be sufficiently precise and accurate for palatal cast digitization in unilateral cleft lip and palate. This would be useful for clinical analyses in maxillofacial, plastic, and aesthetic surgery.
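The abstract does not give its error formulas; the sketch below uses the standard paired-measurement definitions of the technical error of measurement (TEM) and relative error magnitude (REM), with hypothetical caliper and stereophotogrammetric distances:

```python
import numpy as np

# TEM for paired measurements (two methods or two operators) and REM as TEM
# expressed as a percentage of the grand mean. Data are hypothetical.
def tem(x1, x2):
    d = np.asarray(x1) - np.asarray(x2)
    return np.sqrt(np.sum(d**2) / (2 * len(d)))

def rem_percent(x1, x2):
    return 100.0 * tem(x1, x2) / np.mean(np.concatenate([x1, x2]))

caliper = np.array([31.2, 28.7, 30.1, 29.5])   # mm, hypothetical Ag-Am distances
stereo = np.array([30.8, 28.1, 29.6, 29.0])    # mm, same casts digitized in 3D
print(f"TEM = {tem(caliper, stereo):.2f} mm, REM = {rem_percent(caliper, stereo):.1f}%")
```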
1980-02-01
produced on casts from a rolling ship. Delineation of small-scale structures is limited almost entirely by instrument characteristics alone. […] No more than one profile per half day (AM/PM GMT) is […]
Shulruf, Boaz; Turner, Rolf; Poole, Phillippa; Wilkinson, Tim
2013-05-01
The decision to pass or fail a medical student is a 'high stakes' one. The aim of this study is to introduce and demonstrate the feasibility and practicality of a new objective standard-setting method for determining the pass/fail cut-off score from borderline grades. Three methods for setting pass/fail cut-off scores were compared: the Regression Method, the Borderline Group Method, and the new Objective Borderline Method (OBM). Using Year 5 students' OSCE results from one medical school, we established the pass/fail cut-off scores by the three methods mentioned above. The comparison indicated that the pass/fail cut-off scores generated by the OBM were similar to those generated by the more established methods (0.840 ≤ r ≤ 0.998; p < .0001). Based on theoretical and empirical analysis, we suggest that the OBM has advantages over existing methods in that it combines objectivity, realism, robust empirical basis and, no less importantly, is simple to use.
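For orientation, the sketch below illustrates two of the established standard-setting methods named above on hypothetical OSCE data; the OBM itself is not reproduced here, since the abstract does not specify its computation.

```python
import numpy as np

# Hypothetical OSCE data: a checklist score and a global grade per examinee
# (1 = clear fail ... 5 = clear pass, 3 = borderline).
scores = np.array([45, 52, 58, 60, 63, 66, 70, 74, 80, 85], dtype=float)
grades = np.array([1, 2, 3, 3, 3, 4, 4, 4, 5, 5], dtype=float)

# Borderline Group Method: cut score = mean checklist score of "borderline" examinees.
bgm_cut = scores[grades == 3].mean()

# Regression Method: regress checklist score on global grade and read off the
# predicted score at the borderline grade (3).
slope, intercept = np.polyfit(grades, scores, 1)
reg_cut = intercept + slope * 3

print(f"Borderline Group cut = {bgm_cut:.1f}, Regression cut = {reg_cut:.1f}")
```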
Backus, Sterling J [Erie, CO; Kapteyn, Henry C [Boulder, CO
2007-07-10
A method for optimizing multipass laser amplifier output utilizes a spectral filter in early passes but not in later passes. The pulses shift position slightly on each pass through the amplifier, and the filter is placed such that early passes intersect the filter while later passes bypass it. The filter position may be adjusted offline in order to change the number of passes in each category. The filter may be optimized for use in a cryogenic amplifier.
ERIC Educational Resources Information Center
Shulruf, Boaz; Turner, Rolf; Poole, Phillippa; Wilkinson, Tim
2013-01-01
The decision to pass or fail a medical student is a "high stakes" one. The aim of this study is to introduce and demonstrate the feasibility and practicality of a new objective standard-setting method for determining the pass/fail cut-off score from borderline grades. Three methods for setting up pass/fail cut-off scores were compared: the…
Security Analysis and Improvements to the PsychoPass Method
2013-01-01
Background In a recent paper, Pietro Cipresso et al proposed the PsychoPass method, a simple way to create strong passwords that are easy to remember. However, the method has some security issues that need to be addressed. Objective To perform a security analysis on the PsychoPass method and outline the limitations of and possible improvements to the method. Methods We used the brute force analysis and dictionary attack analysis of the PsychoPass method to outline its weaknesses. Results The first issue with the Psychopass method is that it requires the password reproduction on the same keyboard layout as was used to generate the password. The second issue is a security weakness: although the produced password is 24 characters long, the password is still weak. We elaborate on the weakness and propose a solution that produces strong passwords. The proposed version first requires the use of the SHIFT and ALT-GR keys in combination with other keys, and second, the keys need to be 1-2 distances apart. Conclusions The proposed improved PsychoPass method yields passwords that can be broken only in hundreds of years based on current computing powers. The proposed PsychoPass method requires 10 keys, as opposed to 20 keys in the original method, for comparable password strength. PMID:23942458
Security analysis and improvements to the PsychoPass method.
Brumen, Bostjan; Heričko, Marjan; Rozman, Ivan; Hölbl, Marko
2013-08-13
In a recent paper, Pietro Cipresso et al proposed the PsychoPass method, a simple way to create strong passwords that are easy to remember. However, the method has some security issues that need to be addressed. To perform a security analysis on the PsychoPass method and outline the limitations of and possible improvements to the method. We used the brute force analysis and dictionary attack analysis of the PsychoPass method to outline its weaknesses. The first issue with the Psychopass method is that it requires the password reproduction on the same keyboard layout as was used to generate the password. The second issue is a security weakness: although the produced password is 24 characters long, the password is still weak. We elaborate on the weakness and propose a solution that produces strong passwords. The proposed version first requires the use of the SHIFT and ALT-GR keys in combination with other keys, and second, the keys need to be 1-2 distances apart. The proposed improved PsychoPass method yields passwords that can be broken only in hundreds of years based on current computing powers. The proposed PsychoPass method requires 10 keys, as opposed to 20 keys in the original method, for comparable password strength.
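A generic brute-force estimate of the kind underlying these strength claims is sketched below; the "effective choices per keystroke" and the guess rate are illustrative assumptions, not figures taken from either paper.

```python
# A long password is weak if each keystroke is largely predictable from the
# previous one (e.g., constrained to adjacent keys), and strong if each
# keystroke is close to uniform over the keyboard.
def years_to_exhaust(choices_per_key, n_keys, guesses_per_second=1e10):
    """Worst-case time to enumerate the whole password space, in years."""
    return choices_per_key ** n_keys / guesses_per_second / (3600 * 24 * 365)

# ~4 plausible continuations per keystroke, 24 keystrokes (pattern-like password)
print(f"constrained 24-key pattern:   {years_to_exhaust(4, 24):.3g} years")
# ~90 possibilities per keystroke, 10 keystrokes (SHIFT/ALT-GR over the full keyboard)
print(f"near-uniform 10-key password: {years_to_exhaust(90, 10):.3g} years")
```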
NASA Astrophysics Data System (ADS)
Ha, Jeongmok; Jeong, Hong
2016-07-01
This study investigates the directed acyclic subgraph (DAS) algorithm, which solves discrete labeling problems much more rapidly than other Markov-random-field-based inference methods while maintaining competitive accuracy. However, the mechanism by which the DAS algorithm simultaneously achieves competitive accuracy and fast execution speed has not been elucidated by a theoretical derivation. We analyze the DAS algorithm by comparing it with message passing algorithms. Graphical models, inference methods, and energy-minimization frameworks are compared between the DAS and message passing algorithms. Moreover, the performances of the DAS algorithm and other message passing methods [sum-product belief propagation (BP), max-product BP, and tree-reweighted message passing] are experimentally compared.
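For context, the standard max-product update against which the DAS algorithm is compared (textbook notation, not taken from the abstract): for a pairwise Markov random field with unary potentials $\psi_i$ and pairwise potentials $\psi_{ij}$, the message from node $i$ to node $j$ is updated as

$$ m_{i\to j}^{(t+1)}(x_j) \;\propto\; \max_{x_i}\; \psi_i(x_i)\,\psi_{ij}(x_i,x_j) \prod_{k\in\mathcal{N}(i)\setminus\{j\}} m_{k\to i}^{(t)}(x_i), $$

and each node's label is read off from its belief, $\hat{x}_i = \arg\max_{x_i} \psi_i(x_i)\prod_{k\in\mathcal{N}(i)} m_{k\to i}(x_i)$.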
NASA Astrophysics Data System (ADS)
Dong, Min; Dong, Chenghui; Guo, Miao; Wang, Zhe; Mu, Xiaomin
2018-04-01
Multiresolution-based methods, such as wavelet and Contourlet transforms, are widely used for image fusion. This work presents a new image fusion framework that uses area-based standard deviation in the dual-tree Contourlet transform domain. First, the pre-registered source images are decomposed with the dual-tree Contourlet transform, yielding low-pass and high-pass coefficients. Then, the low-pass bands are fused with a weighted average based on area standard deviation rather than the simple "averaging" rule, while the high-pass bands are merged with the "max-absolute" fusion rule. Finally, the modified low-pass and high-pass coefficients are used to reconstruct the final fused image. The major advantage of the proposed fusion method over conventional fusion is the approximate shift invariance and multidirectional selectivity of the dual-tree Contourlet transform. The proposed method is compared with wavelet- and Contourlet-based methods and other state-of-the-art methods on commonly used multi-focus images. Experiments demonstrate that the proposed fusion framework is feasible and effective, and that it performs better in both subjective and objective evaluation.
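A minimal sketch of the two fusion rules described above, applied to subband coefficients assumed to have already been produced by some multiresolution decomposition (the dual-tree Contourlet transform in the paper); the window size and data here are illustrative.

```python
import numpy as np
from scipy.ndimage import generic_filter

def fuse_lowpass(lp_a, lp_b, window=5):
    """Weighted average of low-pass bands, weights from local (area) standard deviation."""
    std_a = generic_filter(lp_a, np.std, size=window)
    std_b = generic_filter(lp_b, np.std, size=window)
    w_a = std_a / (std_a + std_b + 1e-12)
    return w_a * lp_a + (1.0 - w_a) * lp_b

def fuse_highpass(hp_a, hp_b):
    """Max-absolute rule: keep the coefficient with the larger magnitude."""
    return np.where(np.abs(hp_a) >= np.abs(hp_b), hp_a, hp_b)

# Stand-in subbands; in practice these come from decomposing the two source images.
rng = np.random.default_rng(0)
lp_a, lp_b = rng.normal(size=(64, 64)), rng.normal(size=(64, 64))
hp_a, hp_b = rng.normal(size=(64, 64)), rng.normal(size=(64, 64))
fused_lp, fused_hp = fuse_lowpass(lp_a, lp_b), fuse_highpass(hp_a, hp_b)
print(fused_lp.shape, fused_hp.shape)
```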
Establishing pass/fail criteria for bronchoscopy performance.
Konge, Lars; Clementsen, Paul; Larsen, Klaus Richter; Arendrup, Henrik; Buchwald, Christian; Ringsted, Charlotte
2012-01-01
Several tools have been created to assess competence in bronchoscopy. However, educational guidelines still use an arbitrary number of performed procedures to decide when basic competency is acquired. The purpose of this study was to define pass/fail scores for two bronchoscopy assessment tools, and investigate how these scores relate to physicians' experience regarding the number of bronchoscopy procedures performed. We studied two assessment tools and used two standard setting methods to create cut scores: the contrasting-groups method and the extended Angoff method. In the first we compared bronchoscopy performance scores of 14 novices with the scores of 14 experienced consultants to find the score that best discriminated between the two groups. In the second we asked an expert group of 7 experienced bronchoscopists to judge how a borderline trainee would perform on each item of the test. Using the contrasting-groups method we found a standard that would fail all novices and pass all consultants. A clear pass related to prior experience of 75 procedures. The consequences of using the extended Angoff method were also acceptable: all trainees who had performed less than 50 bronchoscopies failed the test and all consultants passed. A clear pass related to 80 procedures. Our proposed pass/fail scores for these two methods seem appropriate in terms of consequences. Prior experience with the performance of 75 and 80 bronchoscopies, respectively, seemed to ensure basic competency. In the future objective assessment tools could become an important aid in the certification of physicians performing bronchoscopies. Copyright © 2011 S. Karger AG, Basel.
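A minimal sketch of the contrasting-groups idea used above, choosing the cut score that best separates hypothetical novice and consultant score distributions (the study's actual procedure may differ in detail):

```python
import numpy as np

# Hypothetical bronchoscopy performance scores for the two contrasting groups.
novices = np.array([12, 15, 17, 18, 20, 21, 22, 23, 24, 26, 27, 28, 29, 30])
consultants = np.array([31, 33, 34, 35, 36, 36, 37, 38, 39, 40, 41, 42, 43, 44])

# Pick the candidate cut score that minimizes total misclassifications:
# novices who would pass plus consultants who would fail.
candidates = np.arange(min(novices.min(), consultants.min()),
                       max(novices.max(), consultants.max()) + 1)
errors = [np.sum(novices >= c) + np.sum(consultants < c) for c in candidates]
cut = candidates[int(np.argmin(errors))]
print(f"pass/fail cut score: {cut}")
```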
Zeng, Jinle; Chang, Baohua; Du, Dong; Wang, Li; Chang, Shuhe; Peng, Guodong; Wang, Wenzhu
2018-01-05
Multi-layer/multi-pass welding (MLMPW) technology is widely used in the energy industry to join thick components. During automatic welding using robots or other actuators, it is very important to recognize the actual weld pass position using visual methods, which can then be used not only to perform reasonable path planning for actuators, but also to correct any deviations between the welding torch and the weld pass position in real time. However, due to the small geometrical differences between adjacent weld passes, existing weld position recognition technologies such as structured light methods are not suitable for weld position detection in MLMPW. This paper proposes a novel method for weld position detection that fuses various kinds of information in MLMPW. First, a synchronous acquisition method is developed to obtain various kinds of visual information while the directional light and structured light sources are on, respectively. Then, interferences are eliminated by fusing adjacent images. Finally, the information from the directional and structured light images is fused to obtain the 3D positions of the weld passes. Experimental results show that each process can be completed in 30 ms and that the deviation is less than 0.6 mm. The proposed method can be used for automatic path planning and seam tracking in the robotic MLMPW process as well as in the electron beam freeform fabrication process.
A new method for the automatic interpretation of Schlumberger and Wenner sounding curves
Zohdy, A.A.R.
1989-01-01
A fast iterative method for the automatic interpretation of Schlumberger and Wenner sounding curves is based on obtaining interpreted depths and resistivities from shifted electrode spacings and adjusted apparent resistivities, respectively. The method is fully automatic. It does not require an initial guess of the number of layers, their thicknesses, or their resistivities; and it does not require extrapolation of incomplete sounding curves. The number of layers in the interpreted model equals the number of digitized points on the sounding curve. The resulting multilayer model is always well-behaved with no thin layers of unusually high or unusually low resistivities. For noisy data, interpretation is done in two sets of iterations (two passes). Anomalous layers, created because of noise in the first pass, are eliminated in the second pass. Such layers are eliminated by considering the best-fitting curve from the first pass to be a smoothed version of the observed curve and automatically reinterpreting it (second pass). The application of the method is illustrated by several examples. -Author
Statistical variability and confidence intervals for planar dose QA pass rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, Daniel W.; Nelms, Benjamin E.; Attwood, Kristopher
Purpose: The most common metric for comparing measured to calculated dose, such as for pretreatment quality assurance of intensity-modulated photon fields, is a pass rate (%) generated using percent difference (%Diff), distance-to-agreement (DTA), or some combination of the two (e.g., gamma evaluation). For many dosimeters, the grid of analyzed points corresponds to an array with a low areal density of point detectors. In these cases, the pass rates for any given comparison criteria are not absolute but exhibit statistical variability that is a function, in part, of the detector sampling geometry. In this work, the authors analyze the statistics of various methods commonly used to calculate pass rates and propose methods for establishing confidence intervals for pass rates obtained with low-density arrays. Methods: Dose planes were acquired for 25 prostate and 79 head and neck intensity-modulated fields via diode array and electronic portal imaging device (EPID), and matching calculated dose planes were created via a commercial treatment planning system. Pass rates for each dose plane pair (both centered to the beam central axis) were calculated with several common comparison methods: %Diff/DTA composite analysis and gamma evaluation, using absolute dose comparison with both local and global normalization. Specialized software was designed to selectively sample the measured EPID response (very high data density) down to discrete points to simulate low-density measurements. The software was used to realign the simulated detector grid at many simulated positions with respect to the beam central axis, thereby altering the low-density sampled grid. Simulations were repeated with 100 positional iterations using a 1 detector/cm^2 uniform grid, a 2 detector/cm^2 uniform grid, and similar random detector grids. For each simulation, %/DTA composite pass rates were calculated with various %Diff/DTA criteria and for both local and global %Diff normalization techniques. Results: For the prostate and head/neck cases studied, the pass rates obtained with gamma analysis of high density dose planes were 2%-5% higher than respective %/DTA composite analysis on average (ranging as high as 11%), depending on tolerances and normalization. Meanwhile, the pass rates obtained via local normalization were 2%-12% lower than with global maximum normalization on average (ranging as high as 27%), depending on tolerances and calculation method. Repositioning of simulated low-density sampled grids leads to a distribution of possible pass rates for each measured/calculated dose plane pair. These distributions can be predicted using a binomial distribution in order to establish confidence intervals that depend largely on the sampling density and the observed pass rate (i.e., the degree of difference between measured and calculated dose). These results can be extended to apply to 3D arrays of detectors, as well. Conclusions: Dose plane QA analysis can be greatly affected by choice of calculation metric and user-defined parameters, and so all pass rates should be reported with a complete description of calculation method. Pass rates for low-density arrays are subject to statistical uncertainty (vs. the high-density pass rate), but these sampling errors can be modeled using statistical confidence intervals derived from the sampled pass rate and detector density.
Thus, pass rates for low-density array measurements should be accompanied by a confidence interval indicating the uncertainty of each pass rate.
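A minimal sketch of the kind of binomial confidence interval recommended above, assuming only the number of sampled detectors and the observed pass count are known; it uses a standard Wilson score interval and does not reproduce the positional-simulation analysis in the abstract. The detector count and pass count in the example are hypothetical.

```python
# Binomial (Wilson score) confidence interval for a pass rate measured with
# a low-density detector array: n sampled detectors, k of them passing.
from scipy import stats

def pass_rate_ci(k_pass, n_detectors, confidence=0.95):
    rate = k_pass / n_detectors
    z = stats.norm.ppf(0.5 + confidence / 2.0)
    denom = 1.0 + z * z / n_detectors
    centre = (rate + z * z / (2 * n_detectors)) / denom
    half = z * ((rate * (1 - rate) / n_detectors
                 + z * z / (4 * n_detectors ** 2)) ** 0.5) / denom
    return rate, (centre - half, centre + half)

rate, (lo, hi) = pass_rate_ci(k_pass=355, n_detectors=380)
print(f"pass rate {rate:.1%}, 95% CI [{lo:.1%}, {hi:.1%}]")
```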
Comparisons of two methods of harvesting biomass for energy
W.F. Watson; B.J. Stokes; I.W. Savelle
1986-01-01
Two harvesting methods for utilization of understory biomass were tested against a conventional harvesting method to determine relative costs. The conventional harvesting method tested removed all pine 6 inches diameter at breast height (DBH) and larger and hardwood sawlogs as tree length logs. The two intensive harvesting methods were a one-pass and a two-pass method...
Modification of the flow pass method as applied to problems of chemistry of planet atmospheres
NASA Technical Reports Server (NTRS)
Parshev, V. A.
1980-01-01
The modified flow pass method is shown to be considerably effective, both when the diffusion coefficient varies strongly over the examined region and when diffusion is the dominant process compared with chemical reactions. Cases in which the regular pass method proves inapplicable, or applicable only over a limited interval of the governing parameters, are also examined.
The reliability of the pass/fail decision for assessments comprised of multiple components
Möltner, Andreas; Tımbıl, Sevgi; Jünger, Jana
2015-01-01
Objective: The decision having the most serious consequences for a student taking an assessment is the one to pass or fail that student. For this reason, the reliability of the pass/fail decision must be determined for high quality assessments, just as the measurement reliability of the point values. Assessments in a particular subject (graded course credit) are often composed of multiple components that must be passed independently of each other. When “conjunctively” combining separate pass/fail decisions, as with other complex decision rules for passing, adequate methods of analysis are necessary for estimating the accuracy and consistency of these classifications. To date, very few papers have addressed this issue; a generally applicable procedure was published by Douglas and Mislevy in 2010. Using the example of an assessment comprised of several parts that must be passed separately, this study analyzes the reliability underlying the decision to pass or fail students and discusses the impact of an improved method for identifying those who do not fulfill the minimum requirements. Method: The accuracy and consistency of the decision to pass or fail an examinee in the subject cluster Internal Medicine/General Medicine/Clinical Chemistry at the University of Heidelberg’s Faculty of Medicine was investigated. This cluster requires students to separately pass three components (two written exams and an OSCE), whereby students may reattempt to pass each component twice. Our analysis was carried out using the method described by Douglas and Mislevy. Results: Frequently, when complex logical connections exist between the individual pass/fail decisions in the case of low failure rates, only a very low reliability for the overall decision to grant graded course credit can be achieved, even if high reliabilities exist for the various components. For the example analyzed here, the classification accuracy and consistency when conjunctively combining the three individual parts is relatively low with κ=0.49 or κ=0.47, despite the good reliability of over 0.75 for each of the three components. The option to repeat each component twice leads to a situation in which only about half of the candidates who do not satisfy the minimum requirements would fail the overall assessment, while the other half is able to continue their studies despite having deficient knowledge and skills. Conclusion: The method put forth by Douglas and Mislevy allows the analysis of the decision accuracy and consistency for complex combinations of scores from different components. Even in the case of highly reliable components, it is not necessarily so that a reliable pass/fail decision has been reached – for instance in the case of low failure rates. Assessments must be administered with the explicit goal of identifying examinees that do not fulfill the minimum requirements. PMID:26483855
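A rough, hypothetical simulation in the spirit of the analysis above: three components with reliabilities near 0.78 are combined conjunctively, and the classification consistency (kappa) between two simulated administrations is computed. The reliabilities, cut score, and correlation structure are assumptions; this is not the Douglas and Mislevy procedure itself.

```python
# Simulated classification consistency of a conjunctive pass/fail rule
# (all three components must be passed) with hypothetical parameters.
import numpy as np

rng = np.random.default_rng(0)
n_students = 100_000
reliability = np.array([0.78, 0.76, 0.80])   # per-component score reliability (assumed)
cut_z = -1.3                                  # cut score in z units (low failure rate)

# True ability per component, correlated through a common factor.
common = rng.normal(size=(n_students, 1))
true_scores = 0.7 * common + 0.3 * rng.normal(size=(n_students, 3))
true_scores /= true_scores.std(axis=0)

def observed(true, rel, rng):
    # Observed score = true score plus error consistent with the reliability.
    err_sd = np.sqrt((1 - rel) / rel)
    return true + rng.normal(size=true.shape) * err_sd

def conjunctive_pass(obs):
    return np.all(obs > cut_z, axis=1)

d1 = conjunctive_pass(observed(true_scores, reliability, rng))
d2 = conjunctive_pass(observed(true_scores, reliability, rng))

p_agree = np.mean(d1 == d2)
p_chance = d1.mean() * d2.mean() + (1 - d1.mean()) * (1 - d2.mean())
kappa = (p_agree - p_chance) / (1 - p_chance)
print(f"classification consistency (kappa): {kappa:.2f}")
```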
Zhao, Gai; Bian, Yang; Li, Ming
2013-12-18
To analyze the impact of passing items above the ceiling level in the gross motor subtest of the Peabody Developmental Motor Scales (PDMS-2) on its assessment results. The PDMS-2 subtests were administered to 124 children aged 1.2 to 71 months. In addition to the original scoring method, a new scoring method that includes passed items above the ceiling was developed. The standard scores and quotients of the two scoring methods were compared using the independent-samples t test. Only one child could pass items above the ceiling in the stationary subtest, 19 children in the locomotion subtest, and 17 children in the visual-motor integration subtest. When the scores of these passed items were included in the raw scores, the total raw scores increased by 1-12 points, the standard scores by 0-1 points, and the motor quotients by 0-3 points. The diagnostic classification was changed in only two children. There was no significant difference between the two methods in motor quotients or standard scores for the specific subtests (P>0.05). Passing items above the ceiling of the PDMS-2 is not a rare situation. It usually occurs in the locomotion subtest and the visual-motor integration subtest. Including these passed items in the scoring system does not make a significant difference in the standard scores of the subtests or the developmental motor quotients (DMQ), which supports the original ceiling rule of not passing 3 items in a row. However, adding the passed items above the ceiling to the raw score will improve tracking of children's developmental trajectories and intervention effects.
2001 Mars Odyssey THEMIS: Thermophysics at a New Local Time
NASA Astrophysics Data System (ADS)
Hamilton, V. E.; Christensen, P. R.
2017-12-01
During its sixth extended mission, the 2001 Mars Odyssey transitioned to a new, rarely-seen, post-sunset (morning daylight) local time designed to reduce stress on the spacecraft. Since then, Thermal Emission Imaging System (THEMIS) observations have provided an unprecedented opportunity to investigate dynamic phenomena in the atmosphere and on the surface. In this new local time (~6:45 am/pm) orbit, Odyssey's camera is acquiring expanded diurnal thermal imaging coverage, providing insight into surface texture, layering, and ice content, as well as dynamic, temperature-dependent surface, atmospheric, and polar processes. New THEMIS observations at dawn and dusk local times are filling major gaps in current knowledge about the diurnal variation of clouds, hazes and surface frost. In this presentation, we will highlight some of these data and discuss the unique scientific results that can be obtained from Mars Odyssey THEMIS observations, including: insights into potential past and present habitability of Mars, the processes and history of climate, the nature and evolution of geologic processes, and aspects of the environment relevant to future human exploration.
Diurnal variations in optical depth at Mars
NASA Technical Reports Server (NTRS)
Colburn, D. S.; Pollack, J. B.; Haberle, R. M.
1989-01-01
Viking lander camera images of the Sun were used to compute atmospheric optical depth at two sites over a period of 1 1/3 martian years. The complete set of 1044 optical depth determinations is presented in graphical and tabular form. Error estimates are presented in detail. Optical depths in the morning (AM) are generally larger than in the afternoon (PM). The AM-PM differences are ascribed to condensation of water vapor into atmospheric ice aerosols at night and their evaporation in midday. A smoothed time series of these differences shows several seasonal peaks. These are simulated using a one-dimensional radiative convective model which predicts martian atmospheric temperature profiles. A calculation combining these profiles with water vapor measurements from the Mars Atmospheric Water Detector is used to predict when the diurnal variations of water condensation should occur. The model reproduces a majority of the observed peaks and shows the factors influencing the process. Diurnal variation of condensation is shown to peak when the latitude and season combine to warm the atmosphere to the optimum temperature, cool enough to condense vapor at night and warm enough to cause evaporation at midday.
Real-time digital signal recovery for a multi-pole low-pass transfer function system.
Lee, Jhinhwan
2017-08-01
In order to solve the problems of waveform distortion and signal delay by many physical and electrical systems with multi-pole linear low-pass transfer characteristics, a simple digital-signal-processing (DSP)-based method of real-time recovery of the original source waveform from the distorted output waveform is proposed. A mathematical analysis on the convolution kernel representation of the single-pole low-pass transfer function shows that the original source waveform can be accurately recovered in real time using a particular moving average algorithm applied on the input stream of the distorted waveform, which can also significantly reduce the overall delay time constant. This method is generalized for multi-pole low-pass systems and has noise characteristics of the inverse of the low-pass filter characteristics. This method can be applied to most sensors and amplifiers operating close to their frequency response limits to improve the overall performance of data acquisition systems and digital feedback control systems.
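A minimal sketch of the recovery idea for the single-pole case, assuming the discretized low-pass model y[n] = y[n-1] + (dt/tau)(x[n] - y[n-1]); the paper's generalized multi-pole, moving-average formulation is not reproduced here.

```python
# Real-time estimate of the source waveform x from the output y of a
# single-pole low-pass system, by inverting the discretized filter relation.
import numpy as np

def recover_single_pole(y, dt, tau):
    """Estimate the source waveform x from the low-pass filtered output y."""
    y = np.asarray(y, dtype=float)
    x = np.empty_like(y)
    x[0] = y[0]
    x[1:] = y[:-1] + (tau / dt) * (y[1:] - y[:-1])
    return x

# Quick self-check: low-pass filter a step, then recover it.
dt, tau = 1e-3, 50e-3
x_true = np.ones(2000); x_true[:200] = 0.0
y = np.empty_like(x_true); y[0] = x_true[0]
for n in range(1, len(x_true)):
    y[n] = y[n - 1] + (dt / tau) * (x_true[n] - y[n - 1])
print(np.allclose(recover_single_pole(y, dt, tau), x_true, atol=1e-9))
```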
Spectrometer capillary vessel and method of making same
Linehan, John C.; Yonker, Clement R.; Zemanian, Thomas S.; Franz, James A.
1995-01-01
The present invention is an arrangement of a glass capillary tube for use in spectroscopy. In particular, the invention is a capillary arranged in a manner permitting a plurality or multiplicity of passes of a sample material through a spectroscopic measurement zone. In a preferred embodiment, the multi-pass capillary is insertable within a standard NMR sample tube. The present invention further includes a method of making the multi-pass capillary tube and an apparatus for spinning the tube.
21 CFR 137.190 - Cracked wheat.
Code of Federal Regulations, 2013 CFR
2013-04-01
... the method prescribed in § 137.200(c)(2), not less than 90 percent passes through a No. 8 sieve and not more than 20 percent passes through a No. 20 sieve. The proportions of the natural constituents of... the moisture as determined by the method prescribed in “Official Methods of Analysis of the...
21 CFR 137.190 - Cracked wheat.
Code of Federal Regulations, 2012 CFR
2012-04-01
... the method prescribed in § 137.200(c)(2), not less than 90 percent passes through a No. 8 sieve and not more than 20 percent passes through a No. 20 sieve. The proportions of the natural constituents of... the moisture as determined by the method prescribed in “Official Methods of Analysis of the...
21 CFR 137.190 - Cracked wheat.
Code of Federal Regulations, 2014 CFR
2014-04-01
... the method prescribed in § 137.200(c)(2), not less than 90 percent passes through a No. 8 sieve and not more than 20 percent passes through a No. 20 sieve. The proportions of the natural constituents of... the moisture as determined by the method prescribed in “Official Methods of Analysis of the...
Systems and methods for separating particles and/or substances from a sample fluid
Mariella, Jr., Raymond P.; Dougherty, George M.; Dzenitis, John M.; Miles, Robin R.; Clague, David S.
2016-11-01
Systems and methods for separating particles and/or toxins from a sample fluid. A method according to one embodiment comprises simultaneously passing a sample fluid and a buffer fluid through a chamber such that a fluidic interface is formed between the sample fluid and the buffer fluid as the fluids pass through the chamber, the sample fluid having particles of interest therein; applying a force to the fluids for urging the particles of interest to pass through the interface into the buffer fluid; and substantially separating the buffer fluid from the sample fluid.
The Edge Detectors Suitable for Retinal OCT Image Segmentation
Yang, Jing; Gao, Qian; Zhou, Sheng
2017-01-01
Retinal layer thickness measurement offers important information for reliable diagnosis of retinal diseases and for the evaluation of disease development and medical treatment responses. This task critically depends on the accurate edge detection of the retinal layers in OCT images. Here, we intended to search for the most suitable edge detectors for the retinal OCT image segmentation task. The three most promising edge detection algorithms were identified in the related literature: Canny edge detector, the two-pass method, and the EdgeFlow technique. The quantitative evaluation results show that the two-pass method outperforms consistently the Canny detector and the EdgeFlow technique in delineating the retinal layer boundaries in the OCT images. In addition, the mean localization deviation metrics show that the two-pass method caused the smallest edge shifting problem. These findings suggest that the two-pass method is the best among the three algorithms for detecting retinal layer boundaries. The overall better performance of Canny and two-pass methods over EdgeFlow technique implies that the OCT images contain more intensity gradient information than texture changes along the retinal layer boundaries. The results will guide our future efforts in the quantitative analysis of retinal OCT images for the effective use of OCT technologies in the field of ophthalmology. PMID:29065594
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoshikawa, M.; Morimoto, M.; Shima, Y.
2012-10-15
In the GAMMA 10 tandem mirror, the typical electron density is comparable to that of the peripheral plasma of torus-type fusion devices. Therefore, an effective method to increase Thomson scattering (TS) signals is required in order to improve signal quality. In GAMMA 10, the yttrium-aluminum-garnet (YAG)-TS system comprises a laser, incident optics, light collection optics, signal detection electronics, and a data recording system. We have been developing a multi-pass TS method for a polarization-based system based on the GAMMA 10 YAG TS. To evaluate the effectiveness of the polarization-based configuration, the multi-pass system was installed in the GAMMA 10 YAG-TS system, which is capable of double-pass scattering. We carried out a Rayleigh scattering experiment and applied this double-pass scattering system to the GAMMA 10 plasma. The integrated scattering signal was made about twice as large by the double-pass system.
Fusion method of SAR and optical images for urban object extraction
NASA Astrophysics Data System (ADS)
Jia, Yonghong; Blum, Rick S.; Li, Fangfang
2007-11-01
A new image fusion method for SAR, panchromatic (Pan), and multispectral (MS) data is proposed. First of all, SAR texture is extracted by ratioing the despeckled SAR image to its low-pass approximation, and is used to modulate the high-pass details extracted from the available Pan image by means of the à trous wavelet decomposition. Then, the high-pass details modulated with the texture are applied to obtain the fusion product by the HPFM (High Pass Filter-based Modulation) fusion method. A set of image data including co-registered Landsat TM, ENVISAT SAR and SPOT Pan is used for the experiment. The results demonstrate accurate spectral preservation on vegetated regions, bare soil, and also on textured areas (buildings and road network) where SAR texture information enhances the fusion product, and the proposed approach is effective for image interpretation and classification.
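A hedged sketch of the texture-modulated high-pass injection described above, with a Gaussian low-pass filter standing in for the à trous wavelet approximation and despeckling assumed to have been done beforehand; the function name, filter width, and gain rule are illustrative choices, not the authors' exact HPFM implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hpfm_fuse(ms_bands, pan, sar, sigma=2.0, eps=1e-6):
    """ms_bands: (B, H, W) array co-registered with pan and sar (H, W)."""
    sar_texture = sar / (gaussian_filter(sar, sigma) + eps)   # ratio of SAR to its low-pass
    pan_detail = pan - gaussian_filter(pan, sigma)            # high-pass detail of the Pan image
    detail = sar_texture * pan_detail                         # texture-modulated detail
    fused = []
    for band in ms_bands:
        # High-pass filter-based modulation: scale the detail to each band's radiometry.
        gain = band / (gaussian_filter(pan, sigma) + eps)
        fused.append(band + gain * detail)
    return np.stack(fused)
```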
Timing and technique impact the effectiveness of road-based, mobile acoustic surveys of bats.
D'Acunto, Laura E; Pauli, Benjamin P; Moy, Mikko; Johnson, Kiara; Abu-Omar, Jasmine; Zollner, Patrick A
2018-03-01
Mobile acoustic surveys are a common method of surveying bat communities. However, there is a paucity of empirical studies exploring different methods for conducting mobile road surveys of bats. During 2013, we conducted acoustic mobile surveys on three routes in north-central Indiana, U.S.A., using (1) a standard road survey, (2) a road survey where the vehicle stopped for 1 min at every half mile of the survey route (called a "start-stop method"), and (3) a road survey with an individual using a bicycle. Linear mixed models with multiple comparison procedures revealed that when all bat passes were analyzed, using a bike to conduct mobile surveys detected significantly more bat passes per unit time compared to other methods. However, incorporating genus-level comparisons revealed no advantage to using a bike over vehicle-based methods. We also found that survey method had a significant effect when analyses were limited to those bat passes that could be identified to genus, with the start-stop method generally detecting more identifiable passes than the standard protocol or bike survey. Additionally, we found that significantly more identifiable bat passes (particularly those of the Eptesicus and Lasiurus genera) were detected in surveys conducted immediately following sunset. As governing agencies, particularly in North America, implement vehicle-based bat monitoring programs, it is important for researchers to understand how variations on protocols influence the inference that can be gained from different monitoring schemes.
Spectrometer capillary vessel and method of making same
Linehan, J.C.; Yonker, C.R.; Zemanian, T.S.; Franz, J.A.
1995-11-21
The present invention is an arrangement of a glass capillary tube for use in spectroscopy. In particular, the invention is a capillary arranged in a manner permitting a plurality or multiplicity of passes of a sample material through a spectroscopic measurement zone. In a preferred embodiment, the multi-pass capillary is insertable within a standard NMR sample tube. The present invention further includes a method of making the multi-pass capillary tube and an apparatus for spinning the tube. 13 figs.
Investigation of tidal displacements of the Earth's surface by laser ranging to GEOS-3
NASA Technical Reports Server (NTRS)
Bower, D. R.; Halpenny, J.; Paul, M. K.; Lambert, A.
1980-01-01
An analysis of laser ranging data from three stations was carried out in an attempt to measure the geometric Earth tide. Two different approaches to the problem were investigated. The dynamic method computes pass-to-pass apparent movements in station height relative to short arcs fitted to several passes of data from the same station by the program GEODYNE. The quasi-geometric method reduces the dependence on unmodelled satellite dynamics to a knowledge of only the radial position of the satellite by considering two-station simultaneous ranging at the precise time that the satellite passes through the plane defined by the two stations and the center of mass of the Earth.
Hydrogen production by high-temperature water splitting using electron-conducting membranes
Lee, Tae H.; Wang, Shuangyan; Dorris, Stephen E.; Balachandran, Uthamalingam
2004-04-27
A device and method for separating water into hydrogen and oxygen is disclosed. A first substantially gas impervious solid electron-conducting membrane for selectively passing hydrogen is provided and spaced from a second substantially gas impervious solid electron-conducting membrane for selectively passing oxygen. When steam is passed between the two membranes at disassociation temperatures the hydrogen from the disassociation of steam selectively and continuously passes through the first membrane and oxygen selectively and continuously passes through the second membrane, thereby continuously driving the disassociation of steam producing hydrogen and oxygen.
48 CFR 2415.304 - Evaluation factors.
Code of Federal Regulations, 2010 CFR
2010-10-01
... DEVELOPMENT CONTRACTING METHODS AND CONTRACTING TYPES CONTRACTING BY NEGOTIATION Source Selection 2415.304... assigned a numerical weight (except for pass-fail factors) which shall appear in the RFP. When using LPTA, each evaluation factor is applied on a “pass-fail” basis; numerical scores are not assigned. “Pass-fail...
Efficient Single-Pass Index Construction for Text Databases.
ERIC Educational Resources Information Center
Heinz, Steffen; Zobel, Justin
2003-01-01
Discusses index construction for text collections, reviews principal approaches to inverted indexes, analyzes their theoretical cost, and presents experimental results of the use of a single-pass inversion method on Web document collections. Shows that the single-pass approach is faster and does not require the complete vocabulary of the indexed…
Comparing Methods for Assessing Reliability Uncertainty Based on Pass/Fail Data Collected Over Time
Abes, Jeff I.; Hamada, Michael S.; Hills, Charles R.
2017-12-20
In this paper, we compare statistical methods for analyzing pass/fail data collected over time; some methods are traditional and one (the RADAR or Rationale for Assessing Degradation Arriving at Random) was recently developed. These methods are used to provide uncertainty bounds on reliability. We make observations about the methods' assumptions and properties. Finally, we illustrate the differences between two traditional methods, logistic regression and Weibull failure time analysis, and the RADAR method using a numerical example.
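As a small illustration of one of the traditional analyses mentioned above (logistic regression of pass/fail data against time), the sketch below fits a GLM to synthetic data and reports a confidence band for reliability; the data, covariate, and parameter values are invented, and the RADAR method itself is not implemented.

```python
# Logistic regression of pass/fail outcomes against unit age, with a
# normal-approximation confidence band for the predicted reliability.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
age = rng.uniform(0, 20, size=300)                      # years since manufacture (synthetic)
p_true = 1 / (1 + np.exp(-(4.0 - 0.15 * age)))          # slowly degrading reliability
passed = rng.binomial(1, p_true)

X = sm.add_constant(age)
fit = sm.GLM(passed, X, family=sm.families.Binomial()).fit()

grid = sm.add_constant(np.linspace(0, 20, 5))
pred = fit.get_prediction(grid).summary_frame(alpha=0.05)
print(pred[["mean", "mean_ci_lower", "mean_ci_upper"]])
```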
Improving 3D Wavelet-Based Compression of Hyperspectral Images
NASA Technical Reports Server (NTRS)
Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh
2009-01-01
Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a manner similar to that of a baseline hyperspectral- image-compression method. The mean values are encoded in the compressed bit stream and added back to the data at the appropriate decompression step. The overhead incurred by encoding the mean values only a few bits per spectral band is negligible with respect to the huge size of a typical hyperspectral data set. The other method is denoted modified decomposition. This method is so named because it involves a modified version of a commonly used multiresolution wavelet decomposition, known in the art as the 3D Mallat decomposition, in which (a) the first of multiple stages of a 3D wavelet transform is applied to the entire dataset and (b) subsequent stages are applied only to the horizontally-, vertically-, and spectrally-low-pass subband from the preceding stage. In the modified decomposition, in stages after the first, not only is the spatially-low-pass, spectrally-low-pass subband further decomposed, but also spatially-low-pass, spectrally-high-pass subbands are further decomposed spatially. Either method can be used alone to improve the quality of a reconstructed image (see figure). Alternatively, the two methods can be combined by first performing modified decomposition, then subtracting the mean values from spatial planes of spatially-low-pass subbands.
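A short sketch of the mean-subtraction step described above, assuming the spatially-low-pass subband is already available as a (spectral plane, row, column) array; the surrounding 3D wavelet decomposition, sign-magnitude conversion, and entropy coding are not shown, and the array shape is illustrative.

```python
# Mean subtraction for a spatially-low-pass subband: subtract each spatial
# plane's mean before encoding and keep the means as side information.
import numpy as np

def subtract_plane_means(subband):
    means = subband.mean(axis=(1, 2))            # one mean per spectral plane
    centered = subband - means[:, None, None]
    return centered, means                       # means are encoded in the bit stream

def restore_plane_means(centered, means):
    return centered + means[:, None, None]

# Example on a hypothetical spatially-low-pass subband.
subband = np.random.default_rng(0).normal(5.0, 1.0, size=(32, 64, 64))
centered, means = subtract_plane_means(subband)
assert np.allclose(restore_plane_means(centered, means), subband)
```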
Method of fabricating a uranium-bearing foil
Gooch, Jackie G [Seymour, TN; DeMint, Amy L [Kingston, TN
2012-04-24
Methods of fabricating a uranium-bearing foil are described. The foil may be substantially pure uranium, or may be a uranium alloy such as a uranium-molybdenum alloy. The method typically includes a series of hot rolling operations on a cast plate material to form a thin sheet. These hot rolling operations are typically performed using a process where each pass reduces the thickness of the plate by a substantially constant percentage. The sheet is typically then annealed and then cooled. The process typically concludes with a series of cold rolling passes where each pass reduces the thickness of the plate by a substantially constant thickness amount to form the foil.
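A toy calculation contrasting the two schedules described: constant-percentage reductions for the hot rolling passes and constant-thickness reductions for the cold rolling passes. The starting thicknesses, reduction amounts, and pass counts are invented for illustration, not taken from the patent.

```python
def hot_schedule(start_mm, reduction_frac, n_passes):
    """Each hot pass removes a fixed percentage of the current thickness."""
    t, out = start_mm, []
    for _ in range(n_passes):
        t *= (1.0 - reduction_frac)
        out.append(round(t, 3))
    return out

def cold_schedule(start_mm, reduction_mm, n_passes):
    """Each cold pass removes a fixed absolute amount of thickness."""
    return [round(start_mm - reduction_mm * (i + 1), 3) for i in range(n_passes)]

print(hot_schedule(25.0, 0.20, 6))    # e.g. 25 mm plate, 20% reduction per hot pass
print(cold_schedule(0.8, 0.05, 6))    # e.g. 0.8 mm sheet, 0.05 mm per cold pass
```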
LEACHING OF URANIUM ORES USING ALKALINE CARBONATES AND BICARBONATES AT ATMOSPHERIC PRESSURE
Thunaes, A.; Brown, E.A.; Rabbits, A.T.; Simard, R.; Herbst, H.J.
1961-07-18
A method of leaching uranium ores containing sulfides is described. The method consists of adding a leach solution containing alkaline carbonate and alkaline bicarbonate to the ore to form a slurry, passing the slurry through a series of agitators, passing an oxygen-containing gas through the slurry in the last agitator in the series, passing the same gas enriched with carbon dioxide formed by the decomposition of bicarbonates in the slurry through the penultimate agitator, and in the same manner passing the same gas increasingly enriched with carbon dioxide through the other agitators in the series. The conditions of agitation are such that the extraction of the uranium content will be substantially complete before the slurry reaches the last agitator.
Method of radiographic inspection of wooden members
NASA Technical Reports Server (NTRS)
Berry, Maggie L. (Inventor); Berry, Robert F., Jr. (Inventor)
1990-01-01
The invention is a method to be used for radiographic inspection of a wooden specimen for internal defects which includes the steps of introducing a radiopaque penetrant into any internal defects in the specimen through surface openings; passing a beam of radiation through a portion of the specimen to be inspected; and making a radiographic film image of the radiation passing through the specimen, with the radiopaque penetrant in the specimen absorbing the radiation passing through it, thereby enhancing the resulting image of the internal defects in the specimen.
METHOD OF HOT ROLLING URANIUM METAL
Kaufmann, A.R.
1959-03-10
A method is given for quickly and efficiently hot rolling uranium metal in the upper part of the alpha phase temperature region to obtain sound bars and sheets possessing a good surface finish. The uranium metal billet is heated to a temperature in the range of 1000 deg F to 1220 deg F by immersion in a molten lead bath. The heated billet is then passed through the rolls. The temperature is restored to the desired range between successive passes through the rolls, and the rolls are turned down approximately 0.050 inch between successive passes.
NASA Astrophysics Data System (ADS)
Kuznetsova, T. A.
2018-05-01
Methods for increasing the adaptive properties of gas-turbine aircraft engines (GTE) to interference, based on enhancing the capabilities of automatic control systems (ACS), are analyzed. Flow pulsations in the suction and discharge lines of the compressor, which may cause stall, are considered as the interference. An algorithmic solution to the problem of controlling GTE pre-stall modes adapted to the stability boundary is proposed. The aim of the study is to develop band-pass filtering algorithms to provide the detection functions of the compressor pre-stall modes for the GTE ACS. The characteristic feature of the pre-stall effect is an increase of the pressure pulsation amplitude over the impeller at multiples of the rotor frequency. The method used is based on a band-pass filter combining low-pass and high-pass digital filters. The impulse response of the high-pass filter is determined from a known low-pass filter impulse response by spectral inversion. The resulting transfer function of the second-order band-pass filter (BPF) corresponds to a stable system. Two circuit implementations of the BPF are synthesized. The designed band-pass filtering algorithms were tested in the MATLAB environment. A comparative analysis of the amplitude-frequency responses of the proposed implementations allows choosing the BPF scheme providing the best quality of filtration. The BPF reaction to a periodic sinusoidal signal, simulating the experimentally obtained pressure pulsation function in the pre-stall mode, was considered. The results of the model experiment demonstrated the effectiveness of applying band-pass filtering algorithms as part of the ACS to identify the pre-stall mode of the compressor through detection of the pressure fluctuation peaks characterizing the compressor's approach to the stability boundary.
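A minimal sketch of the band-pass construction described: a low-pass FIR prototype is converted to a high-pass filter by spectral inversion and cascaded with a second low-pass filter. The sampling rate, band edges, and filter length are assumptions, and SciPy's firwin supplies the low-pass taps rather than the authors' MATLAB design.

```python
import numpy as np
from scipy import signal

fs = 10_000.0              # sampling rate, Hz (assumed)
f_lo, f_hi = 300.0, 900.0  # pass band edges, Hz (assumed)
n_taps = 101               # odd length so spectral inversion has a centre tap

lp_hi = signal.firwin(n_taps, f_hi, fs=fs)   # low-pass at the upper band edge
lp_lo = signal.firwin(n_taps, f_lo, fs=fs)   # low-pass prototype at the lower band edge

# Spectral inversion: negate the taps and add 1 at the centre -> high-pass filter.
hp_lo = -lp_lo
hp_lo[n_taps // 2] += 1.0

# Cascade (convolve) the low-pass and high-pass filters to obtain the band-pass filter.
bp = np.convolve(lp_hi, hp_lo)

w, h = signal.freqz(bp, worN=2048, fs=fs)
print(f"gain at 600 Hz: {np.interp(600.0, w, np.abs(h)):.2f}")  # close to 1 in the pass band
```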
Multiple pass laser amplifier system
Brueckner, Keith A.; Jorna, Siebe; Moncur, N. Kent
1977-01-01
A laser amplification method for increasing the energy extraction efficiency from laser amplifiers while reducing the energy flux that passes through a flux limited system which includes apparatus for decomposing a linearly polarized light beam into multiple components, passing the components through an amplifier in delayed time sequence and recombining the amplified components into an in phase linearly polarized beam.
Estimating Measures of Pass-Fail Reliability from Parallel Half-Tests.
ERIC Educational Resources Information Center
Woodruff, David J.; Sawyer, Richard L.
Two methods for estimating measures of pass-fail reliability are derived, by which both theta and kappa may be estimated from a single test administration. The methods require only a single test administration and are computationally simple. Both are based on the Spearman-Brown formula for estimating stepped-up reliability. The non-distributional…
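The Spearman-Brown step the abstract refers to, written out as a one-line helper: the stepped-up reliability of the full test estimated from the correlation between parallel half-tests. The example value 0.70 is hypothetical.

```python
def spearman_brown(r_half: float) -> float:
    """Step up the half-test correlation to an estimate of full-test reliability."""
    return 2.0 * r_half / (1.0 + r_half)

print(spearman_brown(0.70))   # e.g. 0.70 between parallel halves -> about 0.82
```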
21 CFR 137.195 - Crushed wheat.
Code of Federal Regulations, 2013 CFR
2013-04-01
... prescribed in § 137.200(c)(2), 40 percent or more passes through a No. 8 sieve and less than 50 percent passes through a No. 20 sieve. The proportions of the natural constituents of such wheat, other than... the method prescribed in “Official Methods of Analysis of the Association of Official Analytical...
21 CFR 137.195 - Crushed wheat.
Code of Federal Regulations, 2011 CFR
2011-04-01
... prescribed in § 137.200(c)(2), 40 percent or more passes through a No. 8 sieve and less than 50 percent passes through a No. 20 sieve. The proportions of the natural constituents of such wheat, other than... the method prescribed in “Official Methods of Analysis of the Association of Official Analytical...
21 CFR 137.195 - Crushed wheat.
Code of Federal Regulations, 2012 CFR
2012-04-01
... prescribed in § 137.200(c)(2), 40 percent or more passes through a No. 8 sieve and less than 50 percent passes through a No. 20 sieve. The proportions of the natural constituents of such wheat, other than... the method prescribed in “Official Methods of Analysis of the Association of Official Analytical...
21 CFR 137.195 - Crushed wheat.
Code of Federal Regulations, 2014 CFR
2014-04-01
... prescribed in § 137.200(c)(2), 40 percent or more passes through a No. 8 sieve and less than 50 percent passes through a No. 20 sieve. The proportions of the natural constituents of such wheat, other than... the method prescribed in “Official Methods of Analysis of the Association of Official Analytical...
NASA Technical Reports Server (NTRS)
Pearson, Richard (Inventor); Lynch, Dana H. (Inventor); Gunter, William D. (Inventor)
1995-01-01
A method and apparatus for passing light bundles through a multiple pass sampling cell is disclosed. The multiple pass sampling cell includes a sampling chamber having first and second ends positioned along a longitudinal axis of the sampling cell. The sampling cell further includes an entrance opening, located adjacent the first end of the sampling cell at a first azimuthal angular position. The entrance opening permits a light bundle to pass into the sampling cell. The sampling cell also includes an exit opening at a second azimuthal angular position. The light exit permits a light bundle to pass out of the sampling cell after the light bundle has followed a predetermined path.
Modeling of the static recrystallization for 7055 aluminum alloy by cellular automaton
NASA Astrophysics Data System (ADS)
Zhang, Tao; Lu, Shi-hong; Zhang, Jia-bin; Li, Zheng-fang; Chen, Peng; Gong, Hai; Wu, Yun-xin
2017-09-01
In order to simulate the flow behavior and microstructure evolution during the pass interval period of the multi-pass deformation process, models of static recovery (SR) and static recrystallization (SRX) for the 7055 aluminum alloy were established by the cellular automaton (CA) method. Double-pass hot compression tests were conducted to acquire the flow stress and microstructure variation during the pass interval period. On the basis of the material constants obtained from the compression tests, models of the SR, incubation period, nucleation rate and grain growth were fitted by the least-squares method. A model of the grain topology and a statistical computation of the CA results were also introduced. The effects of the pass interval time, temperature, strain, strain rate and initial grain size on the microstructure variation for the SRX of the 7055 aluminum alloy were studied. The results show that a long pass interval time, large strain, high temperature and large strain rate are beneficial for finer grains during the pass interval period. The stable size of the static recrystallized grain does not depend on the initial grain size, but mainly on the strain rate and temperature. The SRX plays a vital role in grain refinement, while the SR has no effect on the variation of microstructure morphology. Based on comparisons of flow stress and microstructure between the simulated and experimental results, the established CA models can accurately predict the flow stress and microstructure evolution during the pass interval period, and provide guidance for the selection of optimized parameters for the multi-pass deformation process.
Method and apparatus for controlling the flow rate of mercury in a flow system
Grossman, Mark W.; Speer, Richard
1991-01-01
A method for increasing the mercury flow rate to a photochemical mercury enrichment reactor utilizing an entrainment system comprises the steps of passing a carrier gas over a pool of mercury maintained at a first temperature T1, wherein the carrier gas entrains mercury vapor; passing said mercury vapor entrained carrier gas to a second temperature zone T2 having a temperature less than T1 to condense said entrained mercury vapor, thereby producing a saturated Hg condition in the carrier gas; and passing said saturated Hg carrier gas to said photochemical enrichment reactor.
Hydraulic flow visualization method and apparatus
Karidis, Peter G.
1984-01-01
An apparatus and method for visualizing liquid flow. Pulses of gas bubbles are introduced into a liquid flow stream and a strobe light is operated at a frequency related to the frequency of the gas pulses to shine on the bubbles as they pass through the liquid stream. The gas pulses pass through a probe body having a valve element, and a reciprocating valve stem passes through the probe body to operate the valve element. A stem actuating device comprises a slidable reciprocating member, operated by a crank arm. The actuated member is adjustable to adjust the amount of the valve opening during each pulse.
Signal evaluations using singular value decomposition for Thomson scattering diagnostics.
Tojo, H; Yamada, I; Yasuhara, R; Yatsuka, E; Funaba, H; Hatae, T; Hayashi, H; Itami, K
2014-11-01
This paper provides a novel method for evaluating signal intensities in incoherent Thomson scattering diagnostics. A double-pass Thomson scattering system, where a laser passes through the plasma twice, generates two scattering pulses from the plasma. Evaluations of the signal intensities in the spectrometer are sometimes difficult due to noise and stray light. We apply the singular value decomposition method to Thomson scattering data with strong noise components. Results show that the average accuracy of the measured electron temperature (Te) is superior to that of temperature obtained using a low-pass filter (<20 MHz) or without any filters.
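A hedged sketch of SVD-based signal extraction on repeated double-pass waveforms: stack the shots into a matrix, keep the leading singular component, and read amplitudes from the denoised traces. The data, rank, and pulse shapes are invented; this is not the authors' spectrometer pipeline.

```python
import numpy as np

def svd_denoise(waveforms, rank=1):
    """waveforms: 2D array (n_shots, n_samples); keep the leading singular components."""
    u, s, vt = np.linalg.svd(waveforms, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank]

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 400)
# Two Gaussian pulses stand in for the first- and second-pass scattering signals.
pulse = np.exp(-0.5 * ((t - 0.3) / 0.02) ** 2) + 0.6 * np.exp(-0.5 * ((t - 0.7) / 0.02) ** 2)
shots = np.outer(rng.uniform(0.8, 1.2, 50), pulse) + 0.2 * rng.normal(size=(50, 400))

clean = svd_denoise(shots, rank=1)
print("peak ratio (2nd/1st pulse):", clean[0, t > 0.5].max() / clean[0, t < 0.5].max())
```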
Scoring and setting pass/fail standards for an essay certification examination in nurse-midwifery.
Fullerton, J T; Greener, D L; Gross, L J
1992-03-01
Examination for certification or licensure of health professionals (credentialing) in the United States is almost exclusively of the multiple choice format. The certification examination for entry into the practice of the profession of nurse-midwifery has, however, used a modified essay format throughout its twenty-year history. The examination has recently undergone a revision in the method for score interpretation and for pass/fail decision-making. The revised method, described in this paper, has important implications for all health professional credentialing agencies which use modified essay, oral or practical methods of competency assessment. This paper describes criterion-referenced scoring, the process of constructing the essay items, the methods for assuring validity and reliability for the examination, and the manner of standard setting. In addition, two alternative methods for increasing the validity of the pass/fail decision are evaluated, and the rationale for decision-making about marginal candidates is described.
Diurnal variations in optical depth at Mars: Observations and interpretations
NASA Technical Reports Server (NTRS)
Colburn, D. S.; Pollack, J. B.; Haberle, R. M.
1988-01-01
Viking lander camera images of the Sun were used to compute atmospheric optical depth at two sites over a period of 1 1/3 martian years. The complete set of 1044 optical depth determinations is presented in graphical and tabular form. Error estimates are presented in detail. Optical depths in the morning (AM) are generally larger than in the afternoon (PM). The AM-PM differences are ascribed to condensation of water vapor into atmospheric ice aerosols at night and their evaporation in midday. A smoothed time series of these differences shows several seasonal peaks. These are simulated using a one-dimensional radiative convective model which predicts martian atmospheric temperature profiles. A calculation combining these profiles with water vapor measurements from the Mars Atmospheric Water Detector is used to predict when the diurnal variations of water condensation should occur. The model reproduces a majority of the observed peaks and shows the factors influencing the process. Diurnal variation of condensation is shown to peak when the latitude and season combine to warm the atmosphere to the optimum temperature, cool enough to condense vapor at night and warm enough to cause evaporation at midday.
SU-E-T-472: Improvement of IMRT QA Passing Rate by Correcting Angular Dependence of MatriXX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Q; Watkins, W; Kim, T
2015-06-15
Purpose: Multi-channel planar detector arrays utilized for IMRT-QA, such as the MatriXX, exhibit an incident-beam angular-dependent response which can result in false-positive gamma-based QA results, especially for helical tomotherapy plans, which encompass the full range of beam angles. Although the MatriXX can be used with a gantry angle sensor to provide automatic angular correction, this sensor does not work with tomotherapy. The purpose of the study is to reduce IMRT-QA false-positives by correcting for the MatriXX angular dependence. Methods: The MatriXX angular dependence was characterized by comparing multiple fixed-angle irradiation measurements with corresponding TPS-computed doses. For 81 Tomo-helical IMRT-QA measurements, two different correction schemes were tested: (1) A Monte Carlo dose engine was used to compute the MatriXX signal based on the angular-response curve; the computed signal was then compared with measurement. (2) The uncorrected computed signal was compared with measurements uniformly scaled to account for the average angular dependence; three scaling factors (+2%, +2.5%, +3%) were tested. Results: The MatriXX response is 8% less than predicted for a PA beam even when the couch is fully accounted for. Without angular correction, only 67% of the cases pass the criterion of >90% of points with γ<1 (3%, 3mm). After full angular correction, 96% of the cases pass the criterion. Of the three scaling factors, +2% gave the highest passing rate (89%), which is still less than the full angular correction method. With a stricter γ(2%, 3mm) criterion, the full angular correction method was still able to achieve the 90% passing rate, while the scaling method gives only a 53% passing rate. Conclusion: Correcting for the MatriXX angular dependence reduced the false-positive rate of our IMRT-QA process. It is necessary to correct for the angular dependence to achieve the IMRT passing criteria specified in TG129.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 2 Grants and Agreements 1 2010-01-01 2010-01-01 false What methods must I use to pass requirements... and Agreements Federal Agency Regulations for Grants and Agreements NATIONAL SCIENCE FOUNDATION NONPROCUREMENT DEBARMENT AND SUSPENSION Responsibilities of Participants Regarding Transactions § 2520.332 What...
Clustering Methods; Part IV of Scientific Report No. ISR-18, Information Storage and Retrieval...
ERIC Educational Resources Information Center
Cornell Univ., Ithaca, NY. Dept. of Computer Science.
Two papers are included as Part Four of this report on Salton's Magical Automatic Retriever of Texts (SMART) project report. The first paper: "A Controlled Single Pass Classification Algorithm with Application to Multilevel Clustering" by D. B. Johnson and J. M. Laferente presents a single pass clustering method which compares favorably…
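A generic single-pass ("leader") clustering sketch of the kind the report discusses: each vector joins the first cluster whose centroid is similar enough, otherwise it starts a new cluster. The similarity threshold and toy vectors are illustrative, not from the SMART experiments.

```python
import numpy as np

def single_pass_cluster(vectors, threshold=0.8):
    centroids, members = [], []
    for i, v in enumerate(vectors):
        v = v / (np.linalg.norm(v) + 1e-12)
        if centroids:
            sims = [float(v @ (c / (np.linalg.norm(c) + 1e-12))) for c in centroids]
            best = int(np.argmax(sims))
            if sims[best] >= threshold:
                members[best].append(i)
                centroids[best] = centroids[best] + v   # running (unnormalized) centroid
                continue
        centroids.append(v.copy())
        members.append([i])
    return members

docs = np.array([[1, 0, 0], [0.9, 0.1, 0], [0, 1, 0], [0, 0.95, 0.05], [0, 0, 1.0]])
print(single_pass_cluster(docs))   # -> [[0, 1], [2, 3], [4]]
```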
Code of Federal Regulations, 2011 CFR
2011-01-01
... 2 Grants and Agreements 1 2011-01-01 2011-01-01 false What methods must I use to pass requirements... and Agreements Federal Agency Regulations for Grants and Agreements NATIONAL SCIENCE FOUNDATION NONPROCUREMENT DEBARMENT AND SUSPENSION Responsibilities of Participants Regarding Transactions § 2520.332 What...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 2 Grants and Agreements 1 2014-01-01 2014-01-01 false What methods must I use to pass requirements... and Agreements Federal Agency Regulations for Grants and Agreements NATIONAL SCIENCE FOUNDATION NONPROCUREMENT DEBARMENT AND SUSPENSION Responsibilities of Participants Regarding Transactions § 2520.332 What...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 2 Grants and Agreements 1 2013-01-01 2013-01-01 false What methods must I use to pass requirements... and Agreements Federal Agency Regulations for Grants and Agreements NATIONAL SCIENCE FOUNDATION NONPROCUREMENT DEBARMENT AND SUSPENSION Responsibilities of Participants Regarding Transactions § 2520.332 What...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 2 Grants and Agreements 1 2012-01-01 2012-01-01 false What methods must I use to pass requirements... and Agreements Federal Agency Regulations for Grants and Agreements NATIONAL SCIENCE FOUNDATION NONPROCUREMENT DEBARMENT AND SUSPENSION Responsibilities of Participants Regarding Transactions § 2520.332 What...
NASA Astrophysics Data System (ADS)
Jitsuhiro, Takatoshi; Toriyama, Tomoji; Kogure, Kiyoshi
We propose a noise suppression method based on multi-model compositions and multi-pass search. In real environments, input speech for speech recognition includes many kinds of noise signals. To obtain good recognition candidates, it is important to suppress many kinds of noise signals at once and to find the target speech. Before noise suppression, to find speech and noise label sequences, we introduce a multi-pass search with acoustic models including many kinds of noise models and their compositions, their n-gram models, and their lexicon. Noise suppression is performed frame-synchronously using the multiple models selected by the recognized label sequences with time alignments. We evaluated this method using the E-Nightingale task, which contains voice memoranda spoken by nurses during actual work at hospitals. The proposed method obtained higher performance than the conventional method.
Visualization of pass-by noise by means of moving frame acoustic holography.
Park, S H; Kim, Y H
2001-11-01
The noise generated in the pass-by test (ISO 362) was visualized. Moving frame acoustic holography was improved to visualize the pass-by noise and predict its level. The proposed method allowed us to visualize the tire and engine noise generated in the pass-by test based on the assumption that the noise is quasi-stationary. This holds, first, because the speed change during the period of interest is negligible and, second, because the frequency change of the noise is also negligible. The proposed method was verified by a controlled loudspeaker experiment. Effects of running condition, e.g., accelerating according to ISO 362, cruising at constant speed, and coasting down, on the radiated noise were also visualized. The visualized results show where the tire noise is generated and how it propagates.
Method for starting operation of a resistance melter
Chapman, Christopher Charles
1977-01-01
A method for starting the operation of a resistance furnace, where heating occurs by passing a current through the charge between two furnace electrodes and the charge is a material which is essentially electrically nonconductive when in a solid physical state but which becomes more electrically conductive when in a molten physical state, by connecting electrical resistance heating wire between the furnace electrodes, placing the wire in contact with the charge material between the electrodes and passing a current through the wire to heat the wire to a temperature sufficient to melt the material between the furnace electrodes so that as the material melts, current begins to pass between the electrodes through the melted material, further heating and melting more material until all current between the electrodes passes through the charge material without the aid or presence of the resistance element.
Iterative deblending of simultaneous-source data using a coherency-pass shaping operator
NASA Astrophysics Data System (ADS)
Zu, Shaohuan; Zhou, Hui; Mao, Weijian; Zhang, Dong; Li, Chao; Pan, Xiao; Chen, Yangkang
2017-10-01
Simultaneous-source acquisition greatly boosts economic savings, but it brings the unprecedented challenge of removing crosstalk interference from the recorded seismic data. In this paper, we propose a novel iterative method to separate simultaneous-source data based on a coherency-pass shaping operator. The coherency-pass filter is used to constrain the model, that is, the unblended data to be estimated, in the shaping regularization framework. In a simultaneous-source survey, the incoherent interference from adjacent shots greatly increases the rank of the frequency-domain Hankel matrix formed from the blended record. Thus, a method based on rank reduction is capable of separating the blended record to some extent. However, its shortcoming is that it may leave residual noise when there is strong blending interference. We propose to cascade the rank reduction and thresholding operators to deal with this issue. In the initial iterations, we adopt a small rank to severely suppress the blended interference and a large thresholding value as a strong constraint to remove the residual noise in the time domain. In the later iterations, since more and more events have been recovered, we weaken the constraint by increasing the rank and shrinking the threshold to recover weak events and to guarantee convergence. In this way, the combined rank reduction and thresholding strategy acts as a coherency-pass filter, which passes only the coherent high-amplitude component after rank reduction instead of passing both signal and noise as in traditional rank reduction based approaches. Two synthetic examples are tested to demonstrate the performance of the proposed method. In addition, the application to two field data sets (common receiver gathers and stacked profiles) further validates the effectiveness of the proposed method.
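As a rough illustration of the iteration described above, the sketch below (Python/NumPy) alternates a data-misfit step with the combined rank-reduction-plus-thresholding constraint, relaxing the rank and shrinking the threshold across iterations. It omits the frequency-domain Hankel embedding, and the blending operator `blend_op`, its adjoint `adjoint_op`, and all numerical values are hypothetical placeholders rather than the authors' implementation.

```python
import numpy as np

def coherency_pass(panel, rank, thresh):
    """Rank-reduce the panel with a truncated SVD, then zero samples whose
    amplitude falls below the threshold (the combined coherency-pass step)."""
    U, s, Vt = np.linalg.svd(panel, full_matrices=False)
    s[rank:] = 0.0                                   # keep only the top `rank` components
    low_rank = (U * s) @ Vt
    return np.where(np.abs(low_rank) >= thresh, low_rank, 0.0)

def deblend(blended, blend_op, adjoint_op, n_iter=20,
            rank0=2, rank1=10, t0=0.5, t1=0.05):
    """Shaping-regularized iteration: a data-misfit step followed by the
    coherency-pass constraint, with the rank relaxed and the threshold
    shrunk as the iterations proceed."""
    model = np.zeros_like(blended)
    for k in range(n_iter):
        frac = k / max(n_iter - 1, 1)
        rank = int(round(rank0 + frac * (rank1 - rank0)))
        thresh = t0 + frac * (t1 - t0)
        residual = blended - blend_op(model)          # data not yet explained
        model = coherency_pass(model + adjoint_op(residual), rank, thresh)
    return model
```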
Modeling Seizure Self-Prediction: An E-Diary Study
Haut, Sheryl R.; Hall, Charles B.; Borkowski, Thomas; Tennen, Howard; Lipton, Richard B.
2013-01-01
Purpose A subset of patients with epilepsy successfully self-predicted seizures in a paper diary study. We conducted an e-diary study to ensure that prediction precedes seizures, and to characterize the prodromal features and time windows that underlie self-prediction. Methods Subjects 18 or older with LRE and ≥3 seizures/month maintained an e-diary, reporting AM/PM data daily, including mood, premonitory symptoms, and all seizures. Self-prediction was rated by asking, "How likely are you to experience a seizure [time frame]?" Five choices ranged from almost certain (>95% chance) to very unlikely. Relative odds of seizure (OR) within time frames were examined using Poisson models with log normal random effects to adjust for multiple observations. Key Findings Nineteen subjects reported 244 eligible seizures. The OR for prediction choices within 6 hours was as high as 9.31 (1.92, 45.23) for "almost certain". Prediction was most robust within 6 hours of diary entry, and remained significant up to 12 hours. For the 9 best predictors, average sensitivity was 50%. Older age contributed to successful self-prediction, and self-prediction appeared to be driven by mood and premonitory symptoms. In multivariate modeling of seizure occurrence, self-prediction (2.84; 1.68, 4.81), favorable change in mood (0.82; 0.67, 0.99) and number of premonitory symptoms (1.11; 1.00, 1.24) were significant. Significance Some persons with epilepsy can self-predict seizures. In these individuals, the odds of a seizure following a positive prediction are high. Predictions were robust, not attributable to recall bias, and were related to self-awareness of mood and premonitory features. The 6-hour prediction window is suitable for the development of pre-emptive therapy. PMID:24111898
TH-CD-209-01: A Greedy Reassignment Algorithm for the PBS Minimum Monitor Unit Constraint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Y; Kooy, H; Craft, D
2016-06-15
Purpose: To investigate a Greedy Reassignment algorithm in order to mitigate the effects of low weight spots in proton pencil beam scanning (PBS) treatment plans. Methods: To convert a plan from the treatment planning system (TPS) to a deliverable plan, post-processing methods can be used to adjust the spot maps to meet the minimum MU constraint. Existing methods include deleting low weight spots (Cut method) or rounding spots with weight above/below half the limit up/down to the limit/zero (Round method). An alternative method called Greedy Reassignment was developed in this work, in which the lowest weight spot in the field was removed and its weight reassigned equally among its nearest neighbors. The process was repeated with the next lowest weight spot until all spots in the field were above the MU constraint. The algorithm performance was evaluated using plans collected from 190 patients (496 fields) treated at our facility. The evaluation criteria were the γ-index pass rate comparing the pre-processed and post-processed dose distributions. A planning metric was further developed to predict the impact of post-processing on treatment plans for various treatment planning, machine, and dose tolerance parameters. Results: For fields with a gamma pass rate of 90±1%, the metric has a standard deviation equal to 18% of the centroid value. This showed that the metric and γ-index pass rate are correlated for the Greedy Reassignment algorithm. Using a 3rd order polynomial fit to the data, the Greedy Reassignment method had a 1.8 times better metric at a 90% pass rate compared to the other post-processing methods. Conclusion: We showed that the Greedy Reassignment method yields deliverable plans that are closest to the optimized-without-MU-constraint plan from the TPS. The metric developed in this work could help design the minimum MU threshold with the goal of keeping the γ-index pass rate above an acceptable value.
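The greedy step itself is simple to sketch. The following Python fragment is a minimal, hedged reading of the procedure described above: repeatedly remove the lowest-weight spot that violates the minimum-MU limit and split its weight equally among its nearest surviving neighbors. The spot coordinates, the neighbor count `n_neighbors`, and the array layout are assumptions, not the authors' code.

```python
import numpy as np

def greedy_reassign(positions, weights, min_mu, n_neighbors=4):
    """Remove the lowest-weight spot below min_mu and give its weight,
    split equally, to its nearest surviving neighbors; repeat until
    every remaining spot satisfies the minimum-MU constraint."""
    pos = np.asarray(positions, dtype=float)
    w = np.asarray(weights, dtype=float)
    keep = np.ones(len(w), dtype=bool)
    while True:
        offenders = np.where(keep & (w < min_mu))[0]
        if offenders.size == 0:
            break
        low = offenders[np.argmin(w[offenders])]     # lowest-weight offending spot
        keep[low] = False
        others = np.where(keep)[0]
        if others.size == 0:
            break
        d = np.linalg.norm(pos[others] - pos[low], axis=1)
        nearest = others[np.argsort(d)[:n_neighbors]]
        w[nearest] += w[low] / len(nearest)          # reassign the weight equally
        w[low] = 0.0
    return pos[keep], w[keep]
```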
An Accurate Co-registration Method for Airborne Repeat-pass InSAR
NASA Astrophysics Data System (ADS)
Dong, X. T.; Zhao, Y. H.; Yue, X. J.; Han, C. M.
2017-10-01
Interferometric Synthetic Aperture Radar (InSAR) technology plays a significant role in topographic mapping and surface deformation detection. Compared with spaceborne repeat-pass InSAR, airborne repeat-pass InSAR overcomes the problems of long revisit times and low-resolution images. Because it can obtain abundant information flexibly, accurately, and quickly, airborne repeat-pass InSAR is important for deformation monitoring of shallow ground. In order to obtain precise ground elevation information and interferometric coherence for deformation monitoring from master and slave images, accurate co-registration must be ensured. Because of side-looking geometry, repeated observation paths, and long baselines, the initial slant ranges and flight heights differ considerably between repeat flight paths. These differences cause pixels located at identical coordinates in the master and slave images to correspond to ground resolution cells of different sizes. This mismatch is most obvious in the long-slant-range parts of the master and slave images. To resolve the different pixel sizes and obtain accurate co-registration results, a new method is proposed based on the Range-Doppler (RD) imaging model. VV-polarization C-band airborne repeat-pass InSAR images were used in the experiment. The experimental results show that the proposed method leads to superior co-registration accuracy.
ERIC Educational Resources Information Center
Boursicot, Katharine A. M.; Roberts, Trudie E.; Pell, Godfrey
2006-01-01
While Objective Structured Clinical Examinations (OSCEs) have become widely used to assess clinical competence at the end of undergraduate medical courses, the method of setting the passing score varies greatly, and there is no agreed best methodology. While there is an assumption that the passing standard at graduation is the same at all medical…
Comparison of two methods of standard setting: the performance of the three-level Angoff method.
Jalili, Mohammad; Hejri, Sara M; Norcini, John J
2011-12-01
Cut-scores, reliability and validity vary among standard-setting methods. The modified Angoff method (MA) is a well-known standard-setting procedure, but the three-level Angoff approach (TLA), a recent modification, has not been extensively evaluated. This study aimed to compare standards and pass rates in an objective structured clinical examination (OSCE) obtained using two methods of standard setting with discussion and reality checking, and to assess the reliability and validity of each method. A sample of 105 medical students participated in a 14-station OSCE. Fourteen and 10 faculty members took part in the MA and TLA procedures, respectively. In the MA, judges estimated the probability that a borderline student would pass each station. In the TLA, judges estimated whether a borderline examinee would perform the task correctly or not. Having given individual ratings, judges discussed their decisions. One week after the examination, the procedure was repeated using normative data. The mean score for the total test was 54.11% (standard deviation: 8.80%). The MA cut-scores for the total test were 49.66% and 51.52% after discussion and reality checking, respectively (the consequent percentages of passing students were 65.7% and 58.1%, respectively). The TLA yielded mean pass scores of 53.92% and 63.09% after discussion and reality checking, respectively (rates of passing candidates were 44.8% and 12.4%, respectively). Compared with the TLA, the MA showed higher agreement between judges (0.94 versus 0.81) and a narrower 95% confidence interval in standards (3.22 versus 11.29). The MA seems a more credible and reliable procedure with which to set standards for an OSCE than does the TLA, especially when a reality check is applied. © Blackwell Publishing Ltd 2011.
Agnihotri, Deepak; Verma, Kesari; Tripathi, Priyanka
2016-01-01
The contiguous sequences of terms (N-grams) in documents are symmetrically distributed among different classes. This symmetrical distribution of the N-grams raises uncertainty about which class an N-gram belongs to. In this paper, we focus on selecting the most discriminating N-grams by reducing the effects of symmetrical distribution. In this context, a new text feature selection method, named the symmetrical strength of the N-grams (SSNG), is proposed using a two-pass filtering based feature selection (TPF) approach. In the first pass of the TPF, the SSNG method chooses various informative N-grams from all N-grams extracted from the corpus. In the second pass, the well-known chi-square (χ²) method is used to select the few most informative N-grams. Further, to classify the documents, two standard classifiers, Multinomial Naive Bayes and Linear Support Vector Machine, were applied to ten standard text data sets. For most of the datasets, the experimental results show that the performance and success rate of the SSNG method using the TPF approach are superior to state-of-the-art methods, viz. Mutual Information, Information Gain, Odds Ratio, Discriminating Feature Selection and χ².
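A minimal sketch of the two-pass filtering idea follows. The SSNG scoring formula is not reproduced in the abstract, so the first-pass score here is only an illustrative stand-in that rewards asymmetric class distributions; the second pass uses scikit-learn's chi2 as described. X is assumed to be a dense document-by-n-gram count matrix.

```python
import numpy as np
from sklearn.feature_selection import chi2

def first_pass_score(X, y):
    """Illustrative first-pass score: reward n-grams whose counts are spread
    asymmetrically across classes (the actual SSNG formula is not given in
    the abstract, so this stand-in only mimics its intent)."""
    classes = np.unique(y)
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        per_class = np.array([X[y == c, j].sum() for c in classes], dtype=float)
        total = per_class.sum()
        scores[j] = 0.0 if total == 0 else 1.0 - per_class.min() / per_class.max()
    return scores

def two_pass_select(X, y, keep1=1000, keep2=100):
    """Pass 1 keeps the `keep1` highest-scoring n-grams; pass 2 keeps the
    `keep2` of those with the highest chi-square statistic."""
    idx1 = np.argsort(first_pass_score(X, y))[::-1][:keep1]
    chi, _ = chi2(X[:, idx1], y)
    return idx1[np.argsort(chi)[::-1][:keep2]]
```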
USDA-ARS?s Scientific Manuscript database
Whether a required Salmonella test series is passed or failed depends not only on the presence of the bacteria, but also on the methods for taking samples, the methods for culturing samples, and the statistics associated with the sampling plan. The pass-fail probabilities of the two-class attribute...
Cornell, A.A.; Dunbar, J.V.; Ruffner, J.H.
1959-09-29
A semi-automatic method is described for the weld joining of pipes and fittings that utilizes the inert-gas-shielded consumable-electrode electric arc welding technique. The root pass is laid down at a first peripheral velocity, and thereafter the filler passes necessary to complete the weld are laid down over the root pass by revolving the pipes and fittings at a second peripheral velocity different from the first. The welding head is maintained in a fixed position with respect to the direction of revolution, while the longitudinal axis of the welding head is disposed angularly in the direction of revolution at amounts between twenty minutes and about four degrees from the first position.
Micro-optical-mechanical system photoacoustic spectrometer
Kotovsky, Jack; Benett, William J.; Tooker, Angela C.; Alameda, Jennifer B.
2013-01-01
All-optical photoacoustic spectrometer sensing systems (PASS systems) and methods include all the hardware needed to analyze the presence of a large variety of materials (solid, liquid and gas). Some of the all-optical PASS systems require only two optical fibers to communicate with the opto-electronic power and readout systems that exist outside of the material environment. Methods for improving the signal-to-noise ratio are provided and enable micro-scale systems, along with methods for operating such systems.
LCAMP: Location Constrained Approximate Message Passing for Compressed Sensing MRI
Sung, Kyunghyun; Daniel, Bruce L; Hargreaves, Brian A
2016-01-01
Iterative thresholding methods have been extensively studied as faster alternatives to convex optimization methods for solving large-sized problems in compressed sensing. A novel iterative thresholding method called LCAMP (Location Constrained Approximate Message Passing) is presented for reducing computational complexity and improving reconstruction accuracy when a nonzero location (or sparse support) constraint can be obtained from view shared images. LCAMP modifies the existing approximate message passing algorithm by replacing the thresholding stage with a location constraint, which avoids adjusting regularization parameters or thresholding levels. This work is first compared with other conventional reconstruction methods using random 1D signals and then applied to dynamic contrast-enhanced breast MRI to demonstrate the excellent reconstruction accuracy (less than 2% absolute difference) and low computation time (5 - 10 seconds using Matlab) with highly undersampled 3D data (244 × 128 × 48; overall reduction factor = 10). PMID:23042658
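The following sketch conveys the core idea of replacing the thresholding stage with a location (support) constraint. It uses plain projected iterative thresholding and omits the Onsager correction that distinguishes true approximate message passing, so it should be read as a simplified illustration rather than the LCAMP algorithm itself; the toy problem sizes are arbitrary.

```python
import numpy as np

def location_constrained_ist(A, y, support, n_iter=50):
    """Iterative reconstruction in which the thresholding stage is replaced by
    a fixed location (support) constraint. The Onsager correction of full AMP
    is omitted, so this is projected iterative thresholding, not AMP proper."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # safe gradient step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        r = y - A @ x                              # residual in measurement space
        x = x + step * (A.T @ r)                   # gradient step
        x[~support] = 0.0                          # location constraint
    return x

# toy usage: undersampled measurements of a sparse signal with known support
rng = np.random.default_rng(0)
n, m = 128, 48
support = np.zeros(n, dtype=bool)
support[rng.choice(n, 10, replace=False)] = True
x_true = np.where(support, rng.standard_normal(n), 0.0)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = location_constrained_ist(A, A @ x_true, support)
```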
Pass Pricing Demonstration in Cincinnati, OH
DOT National Transportation Integrated Search
1984-11-01
This report presents an evaluation of the Cincinnati Pass Pricing Demonstration. The demonstration, implemented and operated by Queen City Metro in part through a grant from the UMTA Service and Methods Demonstration Program, began in October 1981 an...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 2 Grants and Agreements 1 2010-01-01 2010-01-01 false What methods must I use to pass requirements down to participants at lower tiers with whom I intend to do business? 801.332 Section 801.332 Grants... NONPROCUREMENT DEBARMENT AND SUSPENSION Responsibilities of Participants Regarding Transactions § 801.332 What...
Derivative based sensitivity analysis of gamma index
Sarkar, Biplab; Pradhan, Anirudh; Ganesh, T.
2015-01-01
Originally developed as a tool for patient-specific quality assurance in advanced treatment delivery methods to compare measured and calculated dose distributions, the gamma index (γ) concept was later extended to compare any two dose distributions. It takes into account both the dose difference (DD) and distance-to-agreement (DTA) measurements in the comparison. Its strength lies in its capability to give a quantitative value for the analysis, unlike other methods. For every point on the reference curve, if there is at least one point in the evaluated curve that satisfies the pass criteria (e.g., δDD = 1%, δDTA = 1 mm), the point is included in the quantitative score as a "pass." Gamma analysis does not account for the gradient of the evaluated curve: it looks only at the minimum gamma value, and if it is <1, the point passes, no matter what the gradient of the evaluated curve is. In this work, an attempt has been made to present a derivative-based method for the identification of dose gradient. A mathematically derived reference profile (RP) representing the penumbral region of a 6 MV 10 cm × 10 cm field was generated from an error function. A general test profile (GTP) was created from this RP by introducing 1 mm distance error and 1% dose error at each point. This was considered the first of the two evaluated curves. By its nature, this curve is smooth and would satisfy the pass criteria at all points. The second evaluated profile was generated as a sawtooth test profile (STTP), which would again satisfy the pass criteria for every point on the RP. However, being a sawtooth curve, it is not smooth and would be obviously poor when compared with the smooth profile. Considering the smooth GTP as an acceptable profile when it passed the gamma pass criteria (1% DD and 1 mm DTA) against the RP, the first- and second-order derivatives of the DDs (δD', δD") between these two curves were derived and used as boundary values for evaluating the STTP against the RP. Even though the STTP passed the simple gamma pass criteria, it was found to fail at many locations when the derivatives were used as boundary values. The proposed derivative-based method can identify a noisy curve and can prove to be a useful tool for improving the sensitivity of the gamma index. PMID:26865761
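For readers unfamiliar with the underlying computation, a minimal 1-D global gamma evaluation looks like the sketch below; it implements only the standard gamma search, not the derivative-based extension proposed in the work, and the tanh-shaped profiles are stand-ins for the error-function penumbra described above.

```python
import numpy as np

def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dta_mm=1.0, dd_frac=0.01):
    """1-D global gamma index: for each reference point, search all evaluated
    points and keep the minimum combined distance/dose discrepancy."""
    dd_abs = dd_frac * d_ref.max()                      # global dose criterion
    gammas = np.empty_like(d_ref, dtype=float)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dist = (x_eval - xr) / dta_mm
        dose = (d_eval - dr) / dd_abs
        gammas[i] = np.sqrt(dist**2 + dose**2).min()
    return gammas                                        # a point passes if gamma <= 1

# toy usage with a hypothetical penumbra-like profile
x = np.linspace(-10, 10, 201)
ref = 0.5 * (1 + np.tanh(-x))            # stand-in for an error-function penumbra
ev = 0.5 * (1 + np.tanh(-(x - 0.5)))     # shifted evaluated profile
pass_rate = (gamma_index_1d(x, ref, x, ev) <= 1).mean()
```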
Effect of Various SPD Techniques on Structure and Superplastic Deformation of Two Phase MgLiAl Alloy
NASA Astrophysics Data System (ADS)
Dutkiewicz, Jan; Bobrowski, Piotr; Rusz, Stanislav; Hilser, Ondrej; Tański, Tomasz A.; Borek, Wojciech; Łagoda, Marek; Ostachowski, Paweł; Pałka, Paweł; Boczkal, Grzegorz; Kuc, Dariusz; Mikuszewski, Tomasz
2018-03-01
MgLiAl alloy containing 9 wt% Li and 1.5% Al, composed of hexagonal α and bcc β phases, was cast under a protective atmosphere and hot extruded. Various methods of severe plastic deformation were applied to study their effect on structure and grain refinement. Rods were subjected to 1-3 passes of Twist Channel Angular Pressing TCAP (with helical component) and to cyclic compression to a total strain of ɛ = 5 using MAXStrain Gleeble equipment, both performed in the temperature interval 160-200 °C, and, as a third SPD method, to KOBO-type extrusion at RT. The TCAP pass resulted in refinement of the α phase grain size from 30 μm down to about 2 μm and of the β phase from 12 to 5 μm. MAXStrain cycling 10× up to ɛ = 5 led to a much finer grain size of 300 nm. The KOBO method performed at RT refined the average grain size of the α and β phases down to about 1 μm. The hardness of the alloy decreased slightly with increasing number of TCAP passes due to an increase in small-void density. It was higher after MAXStrain cycling and after KOBO extrusion. TEM studies after TCAP passes showed a higher dislocation density in the β region than in the α phase. The crystallographic relationship (001)α || (110)β indicated parallel positioning of the slip planes of both phases. Electron diffraction confirmed an increase of grain misorientation with the number of TCAP passes. Stress/strain curves recorded at 200 °C showed superplastic forming after the 1st and 3rd TCAP passes, with better superplastic properties due to higher elongation with increasing number of passes. Values of the strain rate sensitivity coefficient m were calculated at 0.29 after the 3rd TCAP pass for the strain rate range 10⁻⁵ to 5 × 10⁻³ s⁻¹. Deformation by MAXStrain cycling caused much more effective grain refinement with fine microtwins in the α phase. Superplastic deformation was also observed in the alloy deformed by the KOBO method; however, the value of m = 0.21 was obtained at a lower deformation temperature of 160 °C and a deformation rate in the range 10⁻⁵ to 5 × 10⁻³. Tensile samples deformed superplastically showed grain growth and void formation caused by grain boundary slip. In summary, all methods applied resulted in sufficient grain refinement to obtain superplastic deformation in alloys with a two-phase α + β structure.
Grid-based precision aim system and method for disrupting suspect objects
Gladwell, Thomas Scott; Garretson, Justin; Hobart, Clinton G.; Monda, Mark J.
2014-06-10
A system and method for disrupting at least one component of a suspect object is provided. The system has a source for passing radiation through the suspect object, a grid board positionable adjacent the suspect object (the grid board having a plurality of grid areas, the radiation from the source passing through the grid board), a screen for receiving the radiation passing through the suspect object and generating at least one image, a weapon for deploying a discharge, and a targeting unit for displaying the image of the suspect object and aiming the weapon according to a disruption point on the displayed image and deploying the discharge into the suspect object to disable the suspect object.
Tojo, H; Ejiri, A; Hiratsuka, J; Yamaguchi, T; Takase, Y; Itami, K; Hatae, T
2012-02-01
This paper presents an experimental demonstration to determine electron temperature (T(e)) with unknown spectral sensitivity (transmissivity) in a Thomson scattering system. In this method, a double-pass scattering configuration is used and the scattered lights from each pass (with different scattering angles) are measured separately. T(e) can be determined from the ratio of the signal intensities without knowing a real chromatic dependence in the sensitivity. Note that the wavelength range for each spectral channel must be known. This method was applied to the TST-2 Thomson scattering system. As a result, T(e) measured from the ratio (T(e,r)) and T(e) measured from a standard method (T(e,s)) showed a good agreement with <∣T(e,r) - T(e,s)∣∕T(e,s)> = 7.3%.
Eisenlohr-Moul, Tory A.; Girdler, Susan S.; Schmalenberger, Katja M.; Dawson, Danyelle N.; Surana, Pallavi; Johnson, Jacqueline L.; Rubinow, David R.
2016-01-01
Objective Despite evidence for the validity of premenstrual dysphoric disorder (PMDD) and its recent inclusion in DSM-5, variable diagnostic practices compromise the construct validity of the diagnosis and threaten the clarity of efforts to understand and treat its underlying pathophysiology. In an effort to hasten and streamline the translation of the new DSM-5 criteria for PMDD into terms compatible with existing research practices, we present the development and initial validation of the Carolina Premenstrual Assessment Scoring System (C-PASS). The C-PASS is a standardized scoring system for making DSM-5 PMDD diagnoses using 2 or more menstrual cycles of daily symptom ratings using the Daily Record of Severity of Problems (DRSP). Method Two hundred women recruited for retrospectively-reported premenstrual emotional symptoms provided 2–4 menstrual cycles of daily symptom ratings on the DRSP. Diagnoses were made by expert clinician and the C-PASS. Results Agreement of C-PASS diagnosis with expert clinical diagnosis was excellent; overall correct classification by the C-PASS was estimated at 98%. Consistent with previous evidence, retrospective reports of premenstrual symptom increases were a poor predictor of prospective C-PASS diagnosis. Conclusions The C-PASS (available as a worksheet, Excel macro, and SAS macro) is a reliable and valid companion protocol to the DRSP that standardizes and streamlines the complex, multilevel diagnosis of DSM-5 PMDD. Consistent use of this robust diagnostic method would result in more clearly-defined, homogeneous samples of women with PMDD, thereby improving the clarity of studies seeking to characterize or treat the underlying pathophysiology of the disorder. PMID:27523500
Optimal Cluster Mill Pass Scheduling With an Accurate and Rapid New Strip Crown Model
NASA Astrophysics Data System (ADS)
Malik, Arif S.; Grandhi, Ramana V.; Zipf, Mark E.
2007-05-01
Besides the requirement to roll coiled sheet at high levels of productivity, the optimal pass scheduling of cluster-type reversing cold mills presents the added challenge of assigning mill parameters that facilitate the best possible strip flatness. The pressures of intense global competition, and the requirements for increasingly thinner, higher quality specialty sheet products that are more difficult to roll, continue to force metal producers to commission innovative flatness-control technologies. This means that during the on-line computerized set-up of rolling mills, the mathematical model should not only determine the minimum total number of passes and maximum rolling speed, it should simultaneously optimize the pass-schedule so that desired flatness is assured, either by manual or automated means. In many cases today, however, on-line prediction of strip crown and corresponding flatness for the complex cluster-type rolling mills is typically addressed either by trial and error, by approximate deflection models for equivalent vertical roll-stacks, or by non-physical pattern recognition style models. The abundance of the aforementioned methods is largely due to the complexity of cluster-type mill configurations and the lack of deflection models with sufficient accuracy and speed for on-line use. Without adequate assignment of the pass-schedule set-up parameters, it may be difficult or impossible to achieve the required strip flatness. In this paper, we demonstrate optimization of cluster mill pass-schedules using a new accurate and rapid strip crown model. This pass-schedule optimization includes computations of the predicted strip thickness profile to validate mathematical constraints. In contrast to many of the existing methods for on-line prediction of strip crown and flatness on cluster mills, the demonstrated method requires minimal prior tuning and no extensive training with collected mill data. To rapidly and accurately solve the multi-contact problem and predict the strip crown, a new customized semi-analytical modeling technique that couples the Finite Element Method (FEM) with classical solid mechanics was developed to model the deflection of the rolls and strip while under load. The technique employed offers several important advantages over traditional methods to calculate strip crown, including continuity of elastic foundations, non-iterative solution when using predetermined foundation moduli, continuous third-order displacement fields, simple stress-field determination, and a comparatively faster solution time.
Automating the process for locating no-passing zones using georeferencing data.
DOT National Transportation Integrated Search
2012-08-01
This research created a method of using global positioning system (GPS) coordinates to identify the location of no-passing zones in two-lane highways. Analytical algorithms were developed for analyzing the availability of sight distance along the ali...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 2 Grants and Agreements 1 2011-01-01 2011-01-01 false What methods must I use to pass requirements down to participants at lower tiers with whom I intend to do business? 1326.332 Section 1326.332 Grants...-tier participants to comply with subpart C of the OMB guidance in 2 CFR Part 180, as supplemented by...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 2 Grants and Agreements 1 2011-01-01 2011-01-01 false What methods must I use to pass requirements down to participants at lower tiers with whom I intend to do business? 1200.332 Section 1200.332 Grants...-tier participants to comply with subpart C of the OMB guidance in 2 CFR part 180, as supplemented by...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 2 Grants and Agreements 1 2011-01-01 2011-01-01 false What methods must I use to pass requirements down to participants at lower tiers with whom I intend to do business? 1400.332 Section 1400.332 Grants...-tier participants to comply with subpart C of the OMB guidance in 2 CFR part 180. ...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 2 Grants and Agreements 1 2011-01-01 2011-01-01 false What methods must I use to pass requirements down to participants at lower tiers with whom I intend to do business? 901.332 Section 901.332 Grants... lower-tier participants to comply with subpart C of the OMB guidance in 2 CFR part 180, as supplemented...
Assessment of soil compaction properties based on surface wave techniques
NASA Astrophysics Data System (ADS)
Jihan Syamimi Jafri, Nur; Rahim, Mohd Asri Ab; Zahid, Mohd Zulham Affandi Mohd; Faizah Bawadi, Nor; Munsif Ahmad, Muhammad; Faizal Mansor, Ahmad; Omar, Wan Mohd Sabki Wan
2018-03-01
Soil compaction plays an important role in every construction activity to reduce the risk of damage. Traditional methods of assessing compaction, such as field tests and invasive penetration tests of compacted areas, have great limitations and are time-consuming when evaluating large areas. Thus, this study explored the possibility of using a non-invasive surface wave method, Multi-channel Analysis of Surface Waves (MASW), as a useful tool for assessing soil compaction. The aim of this study was to determine the shear wave velocity profiles and field density of compacted soils under varying compaction efforts by using the MASW method. Pre- and post-compaction MASW surveys were conducted at Pauh Campus, UniMAP, after applying rolling compaction with varying numbers of passes (2, 6 and 10). Each seismic dataset was recorded by a GEODE seismograph. A sand replacement test was conducted for each survey line to obtain the field density data. All seismic data were processed using SeisImager/SW software. The results show that the shear wave velocity profiles increase with the number of passes from 0 to 6 passes, but decrease after 10 passes. This method could attract the interest of the geotechnical community, as it can be an alternative to the standard test for assessing soil compaction in field operations.
Hydrogen production by high temperature water splitting using electron conducting membranes
Balachandran, Uthamalingam; Wang, Shuangyan; Dorris, Stephen E.; Lee, Tae H.
2006-08-08
A device and method for separating water into hydrogen and oxygen is disclosed. A first substantially gas impervious solid electron-conducting membrane for selectively passing protons or hydrogen is provided and spaced from a second substantially gas impervious solid electron-conducting membrane for selectively passing oxygen. When steam is passed between the two membranes at dissociation temperatures the hydrogen from the dissociation of steam selectively and continuously passes through the first membrane and oxygen selectively and continuously passes through the second membrane, thereby continuously driving the dissociation of steam producing hydrogen and oxygen. The oxygen is thereafter reacted with methane to produce syngas which optimally may be reacted in a water gas shift reaction to produce CO2 and H2.
Expected Number of Passes in a Binary Search Scheme
ERIC Educational Resources Information Center
Tenenbein, Aaron
1974-01-01
The binary search scheme is a method of finding a particular file from a set of ordered files stored in a computer. In this article an exact expression for the expected number of passes required to find a file is derived. (Author)
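Although the article's exact expression is not reproduced here, the expected number of passes is easy to compute directly under the usual assumption that each of the n ordered files is equally likely to be requested and is always present: sum the search depth of every file over the implicit binary search tree. The short sketch below does exactly that; it is an illustration of the quantity being derived, not the article's formula.

```python
def expected_passes(n):
    """Expected number of probes for a successful binary search over n
    equally likely ordered files (each probe counts as one 'pass')."""
    total, depth, remaining = 0, 1, n
    while remaining > 0:
        at_this_depth = min(2 ** (depth - 1), remaining)   # nodes at this tree level
        total += depth * at_this_depth
        remaining -= at_this_depth
        depth += 1
    return total / n

# e.g. expected_passes(1000) ≈ 8.99, roughly one less than the worst case of 10 passes
```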
Staffing Subsidies and the Quality of Care in Nursing Homes
Foster, Andrew D.; Lee, Yong Suk
2015-01-01
Concerns about the quality of state-financed nursing home care have led to the wide-scale adoption by states of pass-through subsidies, in which Medicaid reimbursement rates are directly tied to staffing expenditure. We examine the effects of Medicaid pass-through on nursing home staffing and quality of care by adapting a two-step FGLS method that addresses clustering and state-level temporal autocorrelation. We find that pass-through subsidies increase staffing by about 1% on average and 2.7% in nursing homes with a low share of Medicaid patients. Furthermore, pass-through subsidies reduce the incidence of pressure ulcer worsening by about 0.9%. PMID:25814437
How good are publicly available web services that predict bioactivity profiles for drug repurposing?
Murtazalieva, K A; Druzhilovskiy, D S; Goel, R K; Sastry, G N; Poroikov, V V
2017-10-01
Drug repurposing provides a less laborious and less expensive way of finding new human medicines. Computational assessment of bioactivity profiles sheds light on the hidden pharmacological potential of launched drugs. Currently, several freely available computational tools accessible via the Internet predict multitarget profiles of drug-like compounds. They are based on chemical similarity assessment (ChemProt, SuperPred, SEA, SwissTargetPrediction and TargetHunter) or machine learning methods (ChemProt and PASS). To compare their performance, this study created two evaluation sets, consisting of (1) 50 well-known repositioned drugs and (2) 12 drugs recently patented for new indications. In the first set, sensitivity values varied from 0.64 (TarPred) to 1.00 (PASS Online) for the initial indications and from 0.64 (TarPred) to 0.98 (PASS Online) for the repurposed indications. In the second set, sensitivity values varied from 0.08 (SuperPred) to 1.00 (PASS Online) for the initial indications and from 0.00 (SuperPred) to 1.00 (PASS Online) for the repurposed indications. Thus, this analysis demonstrated that the performance of machine learning methods surpassed that of chemical similarity assessments, particularly in the case of novel repurposed indications.
New Finger Biometric Method Using Near Infrared Imaging
Lee, Eui Chul; Jung, Hyunwoo; Kim, Daeyeoul
2011-01-01
In this paper, we propose a new finger biometric method. Infrared finger images are first captured, and then feature extraction is performed using a modified Gaussian high-pass filter through binarization, local binary pattern (LBP), and local derivative pattern (LDP) methods. Infrared finger images include the multimodal features of finger veins and finger geometries. Instead of extracting each feature using different methods, the modified Gaussian high-pass filter is fully convolved. Therefore, the extracted binary patterns of finger images include the multimodal features of veins and finger geometries. Experimental results show that the proposed method has an error rate of 0.13%. PMID:22163741
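A plain Gaussian high-pass filter (the image minus its Gaussian blur) followed by binarization captures the spirit of the feature-extraction step described above; the specific modification, and the LBP/LDP encodings used in the paper, are not reproduced, and the smoothing width `sigma` is an arbitrary placeholder.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_highpass_binary(image, sigma=4.0):
    """Plain Gaussian high-pass (image minus its Gaussian blur) followed by
    binarization; the 'modified' filter and the LBP/LDP encodings from the
    paper are not reproduced here."""
    img = image.astype(float)
    highpass = img - gaussian_filter(img, sigma)    # emphasize vein/edge structure
    return (highpass > 0).astype(np.uint8)          # simple binary pattern
```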
Setting and validating the pass/fail score for the NBDHE.
Tsai, Tsung-Hsun; Dixon, Barbara Leatherman
2013-04-01
This report describes the overall process used for setting the pass/fail score for the National Board Dental Hygiene Examination (NBDHE), for which the Objective Standard Setting (OSS) method was used. The OSS method requires a panel of experts to determine the criterion items and the proportion of these items that minimally competent candidates would answer correctly, the percentage of mastery and the confidence level of the error band. A panel of 11 experts was selected by the Joint Commission on National Dental Examinations (Joint Commission). Panel members represented geographic distribution across the U.S. and had the following characteristics: full-time dental hygiene practitioners with experience in areas of preventive, periodontal, geriatric and special needs care, and full-time dental hygiene educators with experience in areas of the scientific basis for dental hygiene practice, provision of clinical dental hygiene services and community health/research principles. Utilizing the expert panel's judgments, the pass/fail score was set and then the score scale was established using the Rasch measurement model. Statistical and psychometric analysis shows that the actual failure rate and the OSS failure rate are reasonably consistent (2.4% vs. 2.8%). The analysis also showed that the lowest error of measurement (an index of precision) and the highest reliability (0.97) are achieved at the pass/fail score point. The pass/fail score is a valid guide for making decisions about candidates for dental hygiene licensure. This new standard was reviewed and approved by the Joint Commission and was implemented beginning in 2011.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 2 Grants and Agreements 1 2011-01-01 2011-01-01 false What method must I use to pass requirements down to participants at lower tiers with whom I intend to do business? 1125.332 Section 1125.332 Grants... with subpart C of the OMB guidance in 2 CFR part 180; and (b) Include a similar term or condition in...
Standard method of test for grindability of coal by the Hardgrove-machine method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1975-01-01
A procedure is described for sampling coal, grinding in a Hardgrove grinding machine, and passing through standard sieves to determine the degree of pulverization of coals. The grindability index of the coal tested is calculated from a calibration chart prepared by plotting weight of material passing a No. 200 sieve versus the Hardgrove Grindability Index for the standard reference samples. The Hardgrove machine is shown schematically. The method for preparing and determining grindability indexes of standard reference samples is given in the appendix. (BLM)
Method of isotope separation by chemi-ionization
Wexler, Sol; Young, Charles E.
1977-05-17
A method for separating specific isotopes present in an isotopic mixture by aerodynamically accelerating a gaseous compound to form a jet of molecules, and passing the jet through a stream of electron donor atoms whereby an electron transfer takes place, thus forming negative ions of the molecules. The molecular ions are then passed through a radiofrequency quadrupole mass filter to separate the specific isotopes. This method may be used for any compounds having a sufficiently high electron affinity to permit negative ion formation, and is especially useful for the separation of plutonium and uranium isotopes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jayaweera, Indira; Krishnan, Gopala N.; Sanjurjo, Angel
2016-04-26
The invention provides methods for preparing an asymmetric hollow fiber, the asymmetric hollow fibers prepared by such methods, and uses of the asymmetric hollow fibers. One method involves passing a polymeric solution through an outer annular orifice of a tube-in-orifice spinneret, passing a bore fluid through an inner tube of the spinneret, dropping the polymeric solution and bore fluid through an atmosphere over a dropping distance, and quenching the polymeric solution and bore fluid in a bath to form an asymmetric hollow fiber.
Blocking Mechanism Study of Self-Compacting Concrete Based on Discrete Element Method
NASA Astrophysics Data System (ADS)
Zhang, Xuan; Li, Zhida; Zhang, Zhihua
2017-11-01
In order to study the factors influencing the blocking mechanism of Self-Compacting Concrete (SCC), Roussel's granular blocking model was verified and extended by establishing a discrete element model of SCC. The influence of different parameters on the filling capacity and blocking mechanism of SCC was also investigated. The results showed that it is feasible to simulate the blocking mechanism of SCC using the Discrete Element Method (DEM). The passing ability of pebble aggregate was superior to that of gravel aggregate, and the passing ability of hexahedral particles was greater than that of tetrahedral particles, although the tetrahedral particle simulations were closer to the actual situation. The flow of SCC was another significant factor affecting passing ability: as the flow increased, the passing ability increased. A correction coefficient λ for the steel arrangement (channel section shape) and the flow rate γ were introduced into the blocking model; the value of λ was 0.90-0.95 and the maximum casting rate was 7.8 L/min.
Continuous-wave modulation of a femtosecond oscillator using coherent molecules.
Gold, D C; Karpel, J T; Mueller, E A; Yavuz, D D
2018-03-01
We describe a new method to broaden the frequency spectrum of a femtosecond oscillator in the continuous-wave (CW) domain. The method relies on modulating the femtosecond laser using four-wave mixing inside a Raman-based optical modulator. We prepare the modulator by placing deuterium molecules inside a high-finesse cavity and driving their fundamental vibrational transition using intense pump and Stokes lasers that are locked to the cavity modes. With the molecules prepared, any laser within the optical region of the spectrum can pass through the system and be modulated in a single pass. This constitutes a CW optical modulator at a frequency of 90 THz with a steady-state single-pass efficiency of ∼10⁻⁶ and transient (10 μs-time-scale) single-pass efficiency of ∼10⁻⁴. Using our modulator, we broaden the initial Ti:sapphire spectrum centered at 800 nm and produce upshifted and downshifted sidebands centered at wavelengths of 650 nm and 1.04 μm, respectively.
The Pattern of Indoor Smoking Restriction Law Transitions, 1970–2009: Laws Are Sticky
Sanders-Jackson, Ashley; Gonzalez, Mariaelena; Zerbe, Brandon; Song, Anna V.
2013-01-01
Objectives. We examined the pattern of the passage of smoking laws across venues (government and private workplaces, restaurants, bars) and by strength (no law to 100% smoke-free). Methods. We conducted transition analyses of local and state smoking restrictions passed between 1970 and 2009, with data from the Americans for Nonsmokers’ Rights Ordinance Database. Results. Each decade, more laws were enacted, from 18 passed in the 1970s to 3172 in the first decade of this century, when 91% of existing state laws were passed. Most laws passed took states and localities from no law to some level of smoking restriction, and most new local (77%; 5148/6648) and state (73%; 115/158) laws passed in the study period did not change strength. Conclusions. Because these laws are “sticky”—once a law has passed, strength of the law and venues covered do not change often—policymakers and advocates should focus on passing strong laws the first time, rather than settling for less comprehensive laws with the hope of improving them in the future. PMID:23763408
Fabrication of seamless calandria tubes by cold pilgering route using 3-pass and 2-pass schedules
NASA Astrophysics Data System (ADS)
Saibaba, N.
2008-12-01
Calandria tube is a large diameter, extremely thin walled zirconium alloy tube which has diameter to wall thickness ratio as high as 90-95. Such tubes are conventionally produced by the 'welded route', which involves extrusion of slabs followed by a series of hot and cold rolling passes, intermediate anneals, press forming of sheets into circular shape and closing the gap by TIG welding. Though pilgering is a well established process for the fabrication of seamless tubes, production of extremely thin walled tubes offers several challenges during pilgering. Nuclear fuel complex (NFC), Hyderabad, has successfully developed a process for the production of Zircaloy-4 calandria tubes by adopting the 'seamless route' which involves hot extrusion of mother blanks followed by three-pass pilgering or two-pass pilgering schedules. This paper deals with standardization of the seamless route processes for fabrication of calandria tubes, comparison between the tubes produced by 2-pass and 3-pass pilgering schedules, role of ultrasonic test charts for control of process parameters, development of new testing methods for burst testing and other properties.
Method for the concurrent ultrasonic inspection of partially completed welds
Johnson, John A.; Larsen, Eric D.; Miller, Karen S.; Smartt, Herschel B.; McJunkin, Timothy R.
2002-01-01
A method for the concurrent ultrasonic inspection of partially completed welds is disclosed and which includes providing a pair of transducers which are individually positioned on the opposite sides of a partially completed weld to be inspected; moving the transducers along the length of and laterally inwardly and outwardly relative to the partially completed weld; pulsing the respective transducers to produce an ultrasonic signal which passes through or is reflected from the partially completed weld; receiving from the respective transducers ultrasonic signals which pass through or are reflected from the partially completed welds; and analyzing the ultrasonic signal which has passed through or is reflected from the partially completed weld to determine the presence of any weld defects.
Two-pass smoother based on the SVSF estimation strategy
NASA Astrophysics Data System (ADS)
Gadsden, S. A.; Al-Shabi, M.; Kirubarajan, T.
2015-05-01
The smooth variable structure filter (SVSF) has seen significant development and research activity in recent years. It is based on sliding mode concepts and utilizes a switching gain that brings an inherent amount of stability to the estimation process. In this paper, the SVSF is reformulated to present a two-pass smoother based on the SVSF gain. The proposed method is applied to an aerospace flight surface actuator, and the results are compared with the popular Kalman-based two-pass smoother.
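To illustrate the forward-filter/backward-smoother structure being adapted, the sketch below implements the standard Kalman-based two-pass (Rauch-Tung-Striebel) smoother that the paper uses as its comparison baseline; the SVSF variant replaces the Kalman gain with the SVSF switching gain, which is not reproduced here.

```python
import numpy as np

def rts_two_pass(F, H, Q, R, zs, x0, P0):
    """Standard Kalman-based two-pass (RTS) smoother, shown only to illustrate
    the forward-pass / backward-pass structure. zs is a list of measurement
    vectors; F, H, Q, R are the usual linear-Gaussian model matrices."""
    n = len(zs)
    xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
    x, P = x0.copy(), P0.copy()
    for z in zs:                                    # forward pass (filter)
        x_p, P_p = F @ x, F @ P @ F.T + Q
        S = H @ P_p @ H.T + R
        K = P_p @ H.T @ np.linalg.inv(S)
        x = x_p + K @ (z - H @ x_p)
        P = (np.eye(len(x)) - K @ H) @ P_p
        xs_f.append(x); Ps_f.append(P); xs_p.append(x_p); Ps_p.append(P_p)
    xs_s, Ps_s = [None] * n, [None] * n
    xs_s[-1], Ps_s[-1] = xs_f[-1], Ps_f[-1]
    for k in range(n - 2, -1, -1):                  # backward pass (smoother)
        C = Ps_f[k] @ F.T @ np.linalg.inv(Ps_p[k + 1])
        xs_s[k] = xs_f[k] + C @ (xs_s[k + 1] - xs_p[k + 1])
        Ps_s[k] = Ps_f[k] + C @ (Ps_s[k + 1] - Ps_p[k + 1]) @ C.T
    return xs_s
```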
Dry method for recycling iodine-loaded silver zeolite
Thomas, Thomas R.; Staples, Bruce A.; Murphy, Llewellyn P.
1978-05-09
Fission product iodine is removed from a waste gas stream and stored by passing the gas stream through a bed of silver-exchanged zeolite until the zeolite is loaded with iodine, passing dry hydrogen gas through the bed to remove the iodine and regenerate the bed, and passing the hydrogen stream containing the hydrogen iodide thus formed through a lead-exchanged zeolite which adsorbs the radioactive iodine from the gas stream and permanently storing the lead-exchanged zeolite loaded with radioactive iodine.
Ductility Improvement of an AZ61 Magnesium Alloy through Two-Pass Submerged Friction Stir Processing
Luo, Xicai; Cao, Genghua; Zhang, Wen; Qiu, Cheng; Zhang, Datong
2017-01-01
Friction stir processing (FSP) has been considered as a novel technique to refine the grain size and homogenize the microstructure of metallic materials. In this study, two-pass FSP was conducted under water to enhance the cooling rate during processing, and an AZ61 magnesium alloy with fine-grained and homogeneous microstructure was prepared through this method. Compared to the as-cast material, one-pass FSP resulted in grain refinement and the β-Mg17Al12 phase was broken into small particles. Using a smaller stirring tool and an overlapping ratio of 100%, a finer and more uniform microstructure with an average grain size of 4.6 μm was obtained through two-pass FSP. The two-pass FSP resulted in a significant improvement in elongation of 37.2% ± 4.3%, but a slight decrease in strength compared with one-pass FSP alloy. Besides the microstructure refinement, the texture evolution in the stir zone is also considered responsible for the ductility improvement. PMID:28772614
Ruiz-Espinosa, H; Amador-Espejo, G G; Barcenas-Pozos, M E; Angulo-Guerrero, J O; Garcia, H S; Welti-Chanes, J
2013-02-01
Multiple-pass ultrahigh pressure homogenization (UHPH) was used for reducing microbial population of both indigenous spoilage microflora in whole raw milk and a baroresistant pathogen (Staphylococcus aureus) inoculated in whole sterile milk to define pasteurization-like processing conditions. Response surface methodology was followed and multiple response optimization of UHPH operating pressure (OP) (100, 175, 250 MPa) and number of passes (N) (1-5) was conducted through overlaid contour plot analysis. Increasing OP and N had a significant effect (P < 0·05) on microbial reduction of both spoilage microflora and Staph. aureus in milk. Optimized UHPH processes (five 202-MPa passes; four 232-MPa passes) defined a region where a 5-log(10) reduction of total bacterial count of milk and a baroresistant pathogen are attainable, as a requisite parameter for establishing an alternative method of pasteurization. Multiple-pass UHPH optimized conditions might help in producing safe milk without the detrimental effects associated with thermal pasteurization. © 2012 The Society for Applied Microbiology.
DeWitt, Nancy T.; Flocks, James G.; Hansen, Mark; Kulp, Mark; Reynolds, B.J.
2007-01-01
The U.S. Geological Survey (USGS), in cooperation with the University of New Orleans (UNO) and the Louisiana Department of Natural Resources (LDNR), conducted a high-resolution, single-beam bathymetric survey along the Louisiana southern coastal zone from Belle Pass to Caminada Pass. The survey consisted of 483 line kilometers of data acquired in July and August of 2005. This report outlines the methodology and provides the data from the survey. Analysis of the data and comparison to a similar bathymetric survey completed in 1989 show significant loss of seafloor and shoreline retreat, which is consistent with previously published estimates of shoreline change in the study area.
Efficiently passing messages in distributed spiking neural network simulation.
Thibeault, Corey M; Minkovich, Kirill; O'Brien, Michael J; Harris, Frederick C; Srinivasa, Narayan
2013-01-01
Efficiently passing spiking messages in a neural model is an important aspect of high-performance simulation. As the scale of networks has increased so has the size of the computing systems required to simulate them. In addition, the information exchange of these resources has become more of an impediment to performance. In this paper we explore spike message passing using different mechanisms provided by the Message Passing Interface (MPI). A specific implementation, MVAPICH, designed for high-performance clusters with Infiniband hardware is employed. The focus is on providing information about these mechanisms for users of commodity high-performance spiking simulators. In addition, a novel hybrid method for spike exchange was implemented and benchmarked.
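One of the exchange patterns such a simulator might use is a variable-length allgather of the neuron ids that fired on each rank during a timestep. The mpi4py sketch below shows that single pattern only; it is not the paper's benchmark code, and the 64-bit id type is an assumption.

```python
import numpy as np
from mpi4py import MPI

def exchange_spikes(local_spike_ids, comm=MPI.COMM_WORLD):
    """Collect the neuron ids that spiked on every rank this timestep using a
    variable-length allgather; one of several MPI exchange patterns a spiking
    simulator might use."""
    counts = np.array(comm.allgather(len(local_spike_ids)), dtype=int)
    displs = np.insert(np.cumsum(counts), 0, 0)[:-1]
    sendbuf = np.asarray(local_spike_ids, dtype=np.int64)
    recvbuf = np.empty(counts.sum(), dtype=np.int64)
    comm.Allgatherv(sendbuf, [recvbuf, counts, displs, MPI.INT64_T])
    return recvbuf          # all spike ids from all ranks, grouped by rank
```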
Conrozier, Thierry; Monet, Matthieu; Lohse, Anne; Raman, Raghu
2017-08-01
Background In the management of knee osteoarthritis (OA), patient-reported-outcomes (PROs) are being developed for relevant assessment of pain. The patient acceptable symptom state (PASS) is a relevant cutoff, which allows classifying patients as being in "an acceptable state" or not. Viscosupplementation is a therapeutic modality widely used in patients with knee OA that many patients are satisfied with despite meta-analyses give conflicting results. Objectives To compare, 6 months after knee viscosupplementation, the percentage of patients who reached the PASS threshold (PASS +) with that obtained from other PROs. Methods Data of 53 consecutive patients treated with viscosupplementation (HANOX-M-XL) and followed using a standardized procedure, were analyzed at baseline and month 6. The PROs were Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) pain and function, patient's global assessment of pain (PGAP), patient's self-assessment of satisfaction, PASS for WOMAC pain and PGAP. Results At baseline, WOMAC pain and PGAP (range 0-10) were 4.6 (1.1) and 6.0 (1.1). At month 6, they were 1.9 (1.2) and 3.1 (5) ( P < 0.0001). At 6 months, 83% of patients were "PASS + pain," 100% "PASS + function," 79% "PASS + PGAP," 79% were satisfied, and 73.6% experienced a ≥50% decrease in WOMAC pain. Among "PASS + pain" and "PASS + PGAP" subjects, 90% and 83.3% were satisfied with the treatment, respectively. Conclusion In daily practice, clinical response to viscosupplementation slightly varies according to PROs. "PASS + PGAP" was the most related to patient satisfaction.
Analysis method for Thomson scattering diagnostics in GAMMA 10/PDX.
Ohta, K; Yoshikawa, M; Yasuhara, R; Chikatsu, M; Shima, Y; Kohagura, J; Sakamoto, M; Nakasima, Y; Imai, T; Ichimura, M; Yamada, I; Funaba, H; Minami, T
2016-11-01
We have developed an analysis method to improve the accuracy of electron temperature measurements by employing a fitting technique for the raw Thomson scattering (TS) signals. Least-squares fitting of the raw TS signals enabled a reduction of the error in the electron temperature measurement. We applied the analysis method to a multi-pass (MP) TS system. Because the interval between the MPTS signals is very short, it is difficult to analyze each Thomson scattering signal intensity separately using the raw signals. We used the fitting method to obtain the original TS scattering signals from the measured raw MPTS signals and thereby the electron temperatures in each pass.
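As a hedged illustration of the fitting step, the sketch below least-squares fits a single scattered-light pulse with a Gaussian shape using SciPy; the actual pulse model and channel handling on GAMMA 10/PDX are not stated in the abstract, so the function form and initial guesses are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_ts_pulse(t, raw, t0_guess, width_guess):
    """Least-squares fit of one scattered-light pulse with a Gaussian shape
    (an assumed pulse model), so that closely spaced multi-pass pulses can be
    separated and their intensities estimated with less noise."""
    def pulse(t, amp, t0, width, offset):
        return amp * np.exp(-0.5 * ((t - t0) / width) ** 2) + offset
    p0 = [raw.max() - raw.min(), t0_guess, width_guess, raw.min()]
    popt, _ = curve_fit(pulse, t, raw, p0=p0)
    return popt          # amp, t0, width, offset of the fitted pulse
```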
Range Sensor-Based Efficient Obstacle Avoidance through Selective Decision-Making.
Shim, Youngbo; Kim, Gon-Woo
2018-03-29
In this paper, we address a collision avoidance method for mobile robots. Many conventional obstacle avoidance methods have focused solely on avoiding obstacles. However, this can cause instability when passing through a narrow passage and can also generate zig-zag motions. We define two strategies for obstacle avoidance, known as Entry mode and Bypass mode. Entry mode is a pattern for passing through the gap between obstacles, while Bypass mode is a pattern for safely making a detour around obstacles. With these two modes, we propose an efficient obstacle avoidance method based on the Expanded Guide Circle (EGC) method with selective decision-making. The simulation and experiment results show the validity of the proposed method.
7 CFR 1980.353 - Filing and processing applications.
Code of Federal Regulations, 2010 CFR
2010-01-01
... subject to the availability of funds. (15) A copy of a valid verification of income for each adult member... method of verifying information. Verifications must pass directly from the source of information to the Lender and shall not pass through the hands of a third party or applicant. (1) Income verification...
Silva, James Manio; Matis, Hope; Kostedt, IV, William Leonard
2014-11-18
A method for treating low barium frac water includes contacting a frac water stream with a radium selective complexing resin to produce a low radium stream; passing the low radium stream through a thermal brine concentrator to produce a concentrated brine; and passing the concentrated brine through a thermal crystallizer to yield road salt.
Effect of temporal sampling and timing for soil moisture measurements at field scale
NASA Astrophysics Data System (ADS)
Snapir, B.; Hobbs, S.
2012-04-01
Estimating soil moisture at field scale is valuable for various applications such as irrigation scheduling in cultivated watersheds, flood and drought prediction, waterborne disease spread assessment, or even determination of mobility with lightweight vehicles. Synthetic aperture radar on satellites in low Earth orbit can provide fine resolution images with a repeat time of a few days. For an Earth observing satellite, the choice of the orbit is driven in particular by the frequency of measurements required to meet a certain accuracy in retrieving the parameters of interest. For a given target, having only one image every week may not enable to capture the full dynamic range of soil moisture - soil moisture can change significantly within a day when rainfall occurs. Hence this study focuses on the effect of temporal sampling and timing of measurements in terms of error on the retrieved signal. All the analyses are based on in situ measurements of soil moisture (acquired every 30 min) from the OzNet Hydrological Monitoring Network in Australia for different fields over several years. The first study concerns sampling frequency. Measurements at different frequencies were simulated by sub-sampling the original data. Linear interpolation was used to estimate the missing intermediate values, and then this time series was compared to the original. The difference between these two signals is computed for different levels of sub-sampling. Results show that the error increases linearly when the interval is less than 1 day. For intervals longer than a day, a sinusoidal component appears on top of the linear growth due to the diurnal variation of surface soil moisture. Thus, for example, the error with measurements every 4.5 days can be slightly less than the error with measurements every 2 days. Next, for a given sampling interval, this study evaluated the effect of the time during the day at which measurements are made. Of course when measurements are very frequent the time of acquisition does not matter, but when few measurements are available (sampling interval > 1 day), the time of acquisition can be important. It is shown that with daily measurements the error can double depending on the time of acquisition. This result is very sensitive to the phase of the sinusoidal variation of soil moisture. For example, in autumn for a given field with soil moisture ranging from 7.08% to 11.44% (mean and standard deviation being respectively 8.68% and 0.74%), daily measurements at 2 pm lead to a mean error of 0.47% v/v, while daily measurements at 9 am/pm produce a mean error of 0.24% v/v. The minimum of the sinusoid occurs every afternoon around 2 pm, after interpolation, measurements acquired at this time underestimate soil moisture, whereas measurements around 9 am/pm correspond to nodes of the sinusoid, hence they represent the average soil moisture. These results concerning the frequency and the timing of measurements can potentially drive the schedule of satellite image acquisition over some fields.
NASA Astrophysics Data System (ADS)
Chen, Liang-Chia; Ho, Hsuan-Wei; Nguyen, Xuan-Loc
2010-02-01
This article presents a novel band-pass filter for Fourier transform profilometry (FTP) for accurate 3-D surface reconstruction. FTP can be employed to obtain 3-D surface profiles from one-shot images to achieve high-speed measurement. However, its measurement accuracy is significantly influenced by the spectrum filtering process required to extract the phase information representing various surface heights. Using the commonly applied 2-D Hanning filter, the measurement errors can be up to 5-10% of the overall measuring height, which is unacceptable for many industrial applications. To resolve this issue, the article proposes an elliptical band-pass filter for extracting the spectral region possessing the essential phase information for reconstructing accurate 3-D surface profiles. The elliptical band-pass filter was developed and optimized to reconstruct 3-D surface models with improved measurement accuracy. Experimental results verify that the accuracy can be effectively enhanced by using the elliptical filter. Accuracy improvements of 44.1% and 30.4% were achieved in 3-D and sphericity measurement, respectively, when the elliptical filter replaced the traditional filter as the band-pass filtering method. Employing the developed method, the maximum measured error can be kept within 3.3% of the overall measuring range.
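As an illustration of the idea, a minimal sketch of an elliptical band-pass mask applied in the 2-D Fourier domain is given below; the carrier-frequency location and semi-axes are assumed parameters, and this is not the authors' optimized filter.

```python
import numpy as np

def elliptical_bandpass(img, center, a, b):
    """Apply an elliptical band-pass mask in the 2-D Fourier domain.

    img    : 2-D fringe image (float array)
    center : (u0, v0) carrier-frequency location in shifted FFT index coordinates
    a, b   : semi-axes of the ellipse along the column and row frequency axes
    Returns the complex filtered image after the inverse transform.
    """
    F = np.fft.fftshift(np.fft.fft2(img))
    v, u = np.indices(F.shape)                               # rows (v), columns (u)
    u0, v0 = center
    mask = ((u - u0) / a) ** 2 + ((v - v0) / b) ** 2 <= 1.0  # ellipse interior
    return np.fft.ifft2(np.fft.ifftshift(F * mask))

# The wrapped phase of the filtered result then encodes surface height, e.g.
# phase = np.angle(elliptical_bandpass(img, (cx + f0, cy), a=40, b=20))
```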
Low-pass parabolic FFT filter for airborne and satellite lidar signal processing.
Jiao, Zhongke; Liu, Bo; Liu, Enhai; Yue, Yongjian
2015-10-14
In order to reduce random errors in lidar signal inversion, a low-pass parabolic fast Fourier transform filter (PFFTF) was introduced for noise elimination. A compact airborne Raman lidar system was studied, which applied PFFTF to process lidar signals. The mathematics and simulations of PFFTF, along with low-pass filters, the sliding mean filter (SMF), the median filter (MF), empirical mode decomposition (EMD), and the wavelet transform (WT), were studied, and the practical engineering value of PFFTF for lidar signal processing was verified. The method was tested on real lidar signals from the Wyoming Cloud Lidar (WCL). Results show that PFFTF has advantages over the other methods: it preserves the high-frequency components well and simultaneously removes much of the random noise in lidar signal processing.
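A generic FFT-domain low-pass step is sketched below to illustrate the class of filter involved; the parabolic taper is only one plausible reading of "parabolic" and the cutoff is an assumed parameter, so this is not the authors' PFFTF.

```python
import numpy as np

def fft_lowpass(signal, cutoff_hz, fs, parabolic=True):
    """FFT-domain low-pass filter for a 1-D lidar return.

    With parabolic=True the pass-band is tapered as 1 - (f/cutoff)^2, one
    plausible parabolic transfer function; otherwise a brick-wall cutoff.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    if parabolic:
        gain = np.clip(1.0 - (freqs / cutoff_hz) ** 2, 0.0, None)
    else:
        gain = (freqs <= cutoff_hz).astype(float)
    return np.fft.irfft(spectrum * gain, n=len(signal))

# Example: smooth a noisy synthetic return sampled at 10 MHz
fs = 10e6
t = np.arange(4096) / fs
clean = np.exp(-t * 2e5)                        # decaying backscatter profile
noisy = clean + 0.05 * np.random.randn(t.size)
smoothed = fft_lowpass(noisy, cutoff_hz=2e5, fs=fs)
```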
U-Groove aluminum weld strength improvement
NASA Technical Reports Server (NTRS)
Verderaime, V.; Vaughan, R.
1996-01-01
Though butt-welds are among the most preferred joining methods in aerostructures, their strength dependence on inelastic mechanics is generally the least understood. This study investigated experimental strain distributions across a thick aluminum U-grooved weld and identified two weld process considerations for improving the multipass weld strength. The extreme thermal expansion and contraction gradient of the fusion heat input across the groove tab thickness produces severe peaking, which induces bending under uniaxial loading. The filler strain-hardening decreased with increasing filler pass sequence, producing the weakest welds on the last pass side. Current welding schedules unknowingly compound these effects which reduce the weld strength. A depeaking index model was developed to select filler pass thicknesses, pass numbers, and sequences to improve depeaking in the welding process. The intent is to combine the strongest weld pass side with the peaking induced bending tension to provide a more uniform stress and stronger weld under axial tensile loading.
NASA Astrophysics Data System (ADS)
Zhang, Gaohui; Zhao, Guozhong; Zhang, Shengbo
2012-12-01
The terahertz transmission characteristics of bilayer metallic meshes are studied based on the finite-difference time-domain method. The bilayer well-shaped grid, the array of complementary square metallic pills, and the cross wire-hole array were investigated. The results show that the bilayer well-shaped grid acts as a high-pass filter, the bilayer array of complementary square metallic pills acts as a low-pass filter, and the bilayer cross wire-hole array acts as a band-pass filter. A medium must be deposited between the two metallic microstructures, and its thickness clearly influences the terahertz transmission characteristics of the metallic microstructures. Simulation results show that with increasing medium thickness, the cut-off frequencies of the high-pass and low-pass filters move to lower frequencies, while the bilayer cross wire-hole array possesses two transmission peaks that display a competition effect.
High peak-power kilohertz laser system employing single-stage multi-pass amplification
Shan, Bing; Wang, Chun; Chang, Zenghu
2006-05-23
The present invention describes a technique for achieving high peak power output in a laser employing single-stage, multi-pass amplification. High gain is achieved by employing a very small "seed" beam diameter in gain medium, and maintaining the small beam diameter for multiple high-gain pre-amplification passes through a pumped gain medium, then leading the beam out of the amplifier cavity, changing the beam diameter and sending it back to the amplifier cavity for additional, high-power amplification passes through the gain medium. In these power amplification passes, the beam diameter in gain medium is increased and carefully matched to the pump laser's beam diameter for high efficiency extraction of energy from the pumped gain medium. A method of "grooming" the beam by means of a far-field spatial filter in the process of changing the beam size within the single-stage amplifier is also described.
U-Groove Aluminum Weld Strength Improvement
NASA Technical Reports Server (NTRS)
Verderaime, V.; Vaughan, R.
1997-01-01
Though butt-welds are among the most preferred joining methods in aerostructures, their strength dependence on inelastic mechanics is generally the least understood. This study investigated experimental strain distributions across a thick aluminum U-grooved weld and identified two weld process considerations for improving the multipass weld strength. One is the source of peaking in which the extreme thermal expansion and contraction gradient of the fusion heat input across the groove tab thickness produces severe angular distortion that induces bending under uniaxial loading. The other is the filler strain hardening decreasing with increasing filler pass sequences, producing the weakest welds on the last weld pass side. Both phenomena are governed by weld pass sequences. Many industrial welding schedules unknowingly compound these effects, which reduce the weld strength. A depeaking index model was developed to select filler pass thickness, pass numbers, and sequences to improve depeaking in the welding process. The result was to select the number and sequence of weld passes to reverse the peaking angle such as to combine the strongest weld pass side with the peaking induced bending tension component side to provide a more uniform stress and stronger weld under axial tensile loading.
Do Medicaid Wage Pass-through Payments Increase Nursing Home Staffing?
Feng, Zhanlian; Lee, Yong Suk; Kuo, Sylvia; Intrator, Orna; Foster, Andrew; Mor, Vincent
2010-01-01
Objective: To assess the impact of state Medicaid wage pass-through policy on direct-care staffing levels in U.S. nursing homes. Data Sources: Online Survey Certification and Reporting (OSCAR) data, and state Medicaid nursing home reimbursement policies over the period 1996–2004. Study Design: A fixed-effects panel model with two-step feasible generalized least squares estimates is used to examine the effect of pass-through adoption on direct-care staff hours per resident day (HPRD) in nursing homes. Data Collection/Extraction Methods: A panel data file tracking annual OSCAR surveys per facility over the study period is linked with annual information on state Medicaid wage pass-through and related policies. Principal Findings: Among the states introducing wage pass-through over the study period, the policy is associated with between 3.0 and 4.0 percent net increases in certified nurse aide (CNA) HPRD in the years following adoption. No discernible pass-through effect is observed on either registered nurse or licensed practical nurse HPRD. Conclusions: State Medicaid wage pass-through programs offer a potentially effective policy tool to boost direct-care CNA staffing in nursing homes, at least in the short term. PMID:20403054
Nondestructive acoustic electric field probe apparatus and method
Migliori, Albert
1982-01-01
The disclosure relates to a nondestructive acoustic electric field probe and its method of use. A source of acoustic pulses of arbitrary but selected shape is placed in an oil bath along with material to be tested across which a voltage is disposed and means for receiving acoustic pulses after they have passed through the material. The received pulses are compared with voltage changes across the material occurring while acoustic pulses pass through it and analysis is made thereof to determine preselected characteristics of the material.
Particle measurement systems and methods
Steele, Paul T [Livermore, CA
2011-10-04
A system according to one embodiment includes a light source for generating light fringes; a sampling mechanism for directing a particle through the light fringes; and at least one light detector for detecting light scattered by the particle as the particle passes through the light fringes. A method according to one embodiment includes generating light fringes using a light source; directing a particle through the light fringes; and detecting light scattered by the particle as the particle passes through the light fringes using at least one light detector.
Method and apparatus for measuring birefringent particles
Bishop, James K.; Guay, Christopher K.
2006-04-18
A method and apparatus for measuring birefringent particles is provided, comprising a source lamp, a grating, a first polarizer having a first transmission axis, a sample cell, and a second polarizer having a second transmission axis. The second transmission axis is set perpendicular to the first, so that the second polarizer blocks linearly polarized light with the orientation imparted by the first polarizer. The beam of light passing through the second polarizer is measured using a detector.
REANALYSIS OF F-STATISTIC GRAVITATIONAL-WAVE SEARCHES WITH THE HIGHER CRITICISM STATISTIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, M. F.; Melatos, A.; Delaigle, A.
2013-04-01
We propose a new method of gravitational-wave detection using a modified form of higher criticism, a statistical technique introduced by Donoho and Jin. Higher criticism is designed to detect a group of sparse, weak sources, none of which are strong enough to be reliably estimated or detected individually. We apply higher criticism as a second-pass method to synthetic F-statistic and C-statistic data for a monochromatic periodic source in a binary system and quantify the improvement relative to the first-pass methods. We find that higher criticism on C-statistic data is more sensitive by ~6% than the C-statistic alone under optimal conditions (i.e., binary orbit known exactly), and the relative advantage increases as the error in the orbital parameters increases. Higher criticism is robust even when the source is not monochromatic (e.g., phase-wandering in an accreting system). Applying higher criticism to a phase-wandering source over multiple time intervals gives a ≳30% increase in detectability with few assumptions about the frequency evolution. By contrast, in all-sky searches for unknown periodic sources, which are dominated by the brightest source, second-pass higher criticism does not provide any benefits over a first-pass search.
ERIC Educational Resources Information Center
Goldstein, Jeren; Walford, Sylvia
This teacher's guide and student workbook are part of a series of supplementary curriculum packages presenting alternative methods and activities designed to meet the needs of Florida secondary students with mild disabilities or other special learning needs. The Life Management Skills PASS (Parallel Alternative Strategies for Students) teacher's…
A Comparison of Decision-Making Methods for Criterion-Referenced Tests.
ERIC Educational Resources Information Center
Haladyna, Tom; Roid, Gale
The problems associated with misclassifying students when pass-fail decisions are based on test scores are discussed. One protection against misclassification is to set a confidence interval around the cutting score. Those whose scores fall above the interval are passed; those whose scores fall below the interval are failed; and those whose scores…
Congruence of Standard Setting Methods for a Nursing Certification Examination.
ERIC Educational Resources Information Center
Fabrey, Lawrence J.; Raymond, Mark R.
The American Nurses' Association certification provides professional recognition beyond licensure to nurses who pass an examination. To determine the passing score as it would be set by a representative peer group, a survey was mailed to a random sample of 200 recently certified nurses. Three questions were asked: (1) what percentage of examinees…
Pappachan, Bobby K; Caesarendra, Wahyu; Tjahjowidodo, Tegoeh; Wijaya, Tomi
2017-01-01
Process monitoring using indirect methods relies on the usage of sensors. Using sensors to acquire vital process-related information also presents the problem of big data management and analysis. Due to uncertainty in the frequency of events occurring, a higher sampling rate is often used in real-time monitoring applications to increase the chances of capturing and understanding all possible events related to the process. Advanced signal processing methods are used to further decipher meaningful information from the acquired data. In this research work, the power spectral density (PSD) of sensor data acquired at sampling rates between 40 and 51.2 kHz was calculated, and the correlation between PSD and the completed number of cycles/passes is presented. Here, the progress in the number of cycles/passes is the event this research work intends to classify, and the algorithm used to compute PSD is Welch's estimate method. A comparison between Welch's estimate method and statistical methods is also discussed. A clear correlation was observed when using Welch's estimate to classify the number of cycles/passes. The paper also succeeds in distinguishing the vibration signal generated by the spindle from the vibration signal acquired during the finishing process. PMID:28556809
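A minimal sketch of Welch's PSD estimate is shown below, assuming the vibration data are available as a 1-D array sampled at 51.2 kHz; the placeholder signal and the band of interest are illustrative, not the study's data.

```python
import numpy as np
from scipy.signal import welch

fs = 51_200                                    # sampling rate (Hz), as in the study
# x: 1-D vibration signal from the process sensor (placeholder data here)
x = np.random.randn(fs * 2)

freqs, psd = welch(x, fs=fs, nperseg=4096)     # Welch's averaged-periodogram PSD
band = (freqs >= 1_000) & (freqs <= 5_000)     # illustrative band of interest
band_power = np.trapz(psd[band], freqs[band])  # band power as one possible feature
print(f"band power 1-5 kHz: {band_power:.3e}")
```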
Filter for on-line air monitor unaffected by radon progeny and method of using same
Phillips, Terrance D.; Edwards, Howard D.
1999-01-01
An apparatus for testing air having contaminants and radon progeny therein. The apparatus includes a sampling box having an inlet for receiving the air and an outlet for discharging the air. The sampling box includes a filter made of a plate of sintered stainless steel. The filter traps the contaminants, yet allows at least a portion of the radon progeny to pass therethrough. A method of testing air having contaminants and radon progeny therein. The method includes providing a testing apparatus that has a sampling box with an inlet for receiving the air and an outlet for discharging the air, and has a sintered stainless steel filter disposed within said sampling box; drawing air from a source into the sampling box using a vacuum pump; passing the air through the filter; monitoring the contaminants trapped by the filter; and providing an alarm when a selected level of contaminants is reached. The filter traps the contaminants, yet allows at least a portion of the radon progeny to pass therethrough.
Monitoring and analyzing waste glass compositions
Schumacher, R.F.
1994-03-01
A device and method are described for determining the viscosity of a fluid, preferably molten glass. The apparatus and method use the velocity of rising bubbles, preferably helium bubbles, within the molten glass to determine the viscosity of the molten glass. The bubbles are released from a tube positioned below the surface of the molten glass so that the bubbles pass successively between two sets of electrodes, one above the other, that are continuously monitoring the conductivity of the molten glass. The measured conductivity will change as a bubble passes between the electrodes enabling an accurate determination of when a bubble has passed between the electrodes. The velocity of rising bubbles can be determined from the time interval between a change in conductivity of the first electrode pair and the second, upper electrode pair. The velocity of the rise of the bubbles in the glass melt is used in conjunction with other physical characteristics, obtained by known methods, to determine the viscosity of the glass melt fluid and, hence, glass quality. 2 figures.
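The abstract does not give the explicit viscosity relation; one common starting point, shown below as a hedged sketch, is Stokes (creeping-flow) drag on a small rigid sphere, which links the measured terminal rise velocity to viscosity. All numerical values, including the electrode spacing and transit time, are illustrative.

```python
# Minimal sketch: viscosity from the terminal rise velocity of a small bubble,
# assuming Stokes (creeping-flow) drag on a rigid sphere. Values are illustrative.
g = 9.81                  # m/s^2
r = 0.5e-3                # bubble radius, m
rho_glass = 2500.0        # melt density, kg/m^3
rho_gas = 0.2             # helium density at melt temperature, kg/m^3 (approx.)

# Rise velocity from the timed transit between the two electrode pairs
electrode_spacing = 0.05  # m
transit_time = 100.0      # s
v = electrode_spacing / transit_time

# Stokes' law: v = 2 r^2 g (rho_fluid - rho_bubble) / (9 mu)  =>  solve for mu
mu = 2 * r**2 * g * (rho_glass - rho_gas) / (9 * v)
print(f"estimated melt viscosity: {mu:.1f} Pa*s")
```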
Laser removal of graffiti from Pink Morelia Quarry
NASA Astrophysics Data System (ADS)
Penide, J.; Quintero, F.; Riveiro, A.; Sánchez-Castillo, A.; Comesaña, R.; del Val, J.; Lusquiños, F.; Pou, J.
2013-11-01
Morelia is an important city located in Mexico. Its historical center reflects much of its culture and history, especially of the colonial period; in fact, it was appointed a World Heritage Site by UNESCO. Sadly, there is a serious problem with graffiti in Morelia, and its historical center is the worst affected, since its delicate charm is being damaged. Hitherto, the conventional methods employed to remove graffiti from Pink Morelia Quarry (the most used building stone in Morelia) are quite aggressive to the appearance of the monuments, so they are not a good solution. In this work, we performed a study on the removal of graffiti from Pink Morelia Quarry by high-power diode laser. We carried out an extensive experimental study looking for the optimal processing parameters and compared a single-pass with a multi-pass method. We achieved effective cleaning without producing serious side effects in the stone. In conclusion, the multi-pass method with continuous-wave emission proved to be the more effective operating mode for removing the graffiti.
Monitoring and analyzing waste glass compositions
Schumacher, Ray F.
1994-01-01
A device and method for determining the viscosity of a fluid, preferably molten glass. The apparatus and method uses the velocity of rising bubbles, preferably helium bubbles, within the molten glass to determine the viscosity of the molten glass. The bubbles are released from a tube positioned below the surface of the molten glass so that the bubbles pass successively between two sets of electrodes, one above the other, that are continuously monitoring the conductivity of the molten glass. The measured conductivity will change as a bubble passes between the electrodes enabling an accurate determination of when a bubble has passed between the electrodes. The velocity of rising bubbles can be determined from the time interval between a change in conductivity of the first electrode pair and the second, upper electrode pair. The velocity of the rise of the bubbles in the glass melt is used in conjunction with other physical characteristics, obtained by known methods, to determine the viscosity of the glass melt fluid and, hence, glass quality.
NASA Astrophysics Data System (ADS)
Lee, Jae-Seung; Im, In-Chul; Kang, Su-Man; Goo, Eun-Hoe; Baek, Seong-Min
2013-11-01
The purpose of this study is to present a new quality assurance (QA) method for effectively evaluating the accuracy of respiratory-gated radiotherapy (RGR). The approach quantitatively analyzes the patient's respiratory cycle and respiration-induced tumor motion and then performs a comparative analysis of dose distributions, using the gamma-index method, as reproduced in our in-house-developed respiration-simulating phantom. We therefore designed a respiration-simulating phantom capable of reproducing the patient's respiratory cycle and respiration-induced tumor motion and evaluated the accuracy of RGR by estimating its pass rates. We applied gamma-index passing criteria with accepted error ranges of 3% and 3 mm to the dose distribution calculated by the treatment planning system (TPS) and the actual dose distribution of RGR. The pass rate clearly increased as the gating width was narrowed. When respiration-induced tumor motion was 12 mm or less, pass rates of 85% and above were achieved for the 30-70% respiratory phase, and pass rates of 90% and above were achieved for the 40-60% respiratory phase. However, a respiratory cycle with a very small fluctuation range of pass rates failed to prove reliable in evaluating the accuracy of RGR. Therefore, accurate and reliable radiotherapy outcomes will be obtainable only by establishing a novel QA system combining the respiration-simulating phantom, gamma-index analysis, and a quantitative analysis of diaphragmatic motion, enabling an indirect measurement of tumor motion.
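For readers unfamiliar with the gamma-index comparison used above and in several of the following records, here is a brute-force 1-D sketch under a 3%/3 mm global criterion with a 10% low-dose threshold; the profiles are synthetic and this is not any clinical system's optimized algorithm.

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, pixel_mm, dd=0.03, dta_mm=3.0, threshold=0.10):
    """Brute-force 1-D global gamma analysis.

    dose_ref, dose_eval : reference (planned) and evaluated (measured) dose profiles
    pixel_mm            : spacing between dose points in mm
    dd                  : dose-difference criterion as a fraction of the global maximum
    dta_mm              : distance-to-agreement criterion in mm
    threshold           : ignore points below this fraction of the maximum reference dose
    """
    x = np.arange(len(dose_ref)) * pixel_mm
    d_norm = dd * dose_ref.max()                        # global dose normalisation
    gammas = []
    for i, d in enumerate(dose_ref):
        if d < threshold * dose_ref.max():
            continue                                    # low-dose threshold
        dist2 = ((x - x[i]) / dta_mm) ** 2
        dose2 = ((dose_eval - d) / d_norm) ** 2
        gammas.append(np.sqrt(np.min(dist2 + dose2)))   # gamma = min over all points
    return 100.0 * np.mean(np.array(gammas) <= 1.0)

# Example: a 2 mm lateral shift stays within the 3 mm DTA criterion here
x = np.arange(200) * 1.0
ref = 100 * np.exp(-((x - 100) / 20) ** 2)
meas = 100 * np.exp(-((x - 102) / 20) ** 2)
print(f"gamma pass rate (3%/3 mm): {gamma_pass_rate(ref, meas, 1.0):.1f}%")
```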
"Which pass is better?" Novel approaches to assess passing effectiveness in elite soccer.
Rein, Robert; Raabe, Dominik; Memmert, Daniel
2017-10-01
Passing behaviour is a key property of successful performance in team sports. Previous investigations, however, have mainly focused on notational measurements like total passing frequencies, which provide little information about what actually constitutes successful passing behaviour. Consequently, this has hampered the transfer of research findings into applied settings. Here we present two novel approaches to assess passing effectiveness in elite soccer by evaluating their effects on majority situations and space control in front of the goal. Majority situations are assessed by calculating the number of defenders between the ball carrier and the goal. Control of space is estimated using Voronoi diagrams based on the players' positions on the pitch. Both methods were applied to position data from 103 German First Division games from the 2011/2012, 2012/2013 and 2014/2015 seasons using a big data approach. The results show that both measures are significantly related to successful game play with respect to the number of goals scored and to the probability of winning a game. The results further show that, on average, passes from midfield into the attacking area are most effective. The presented passing-effectiveness measures thereby offer new opportunities for future applications in soccer and other sports disciplines whilst maintaining practical relevance with respect to tactical training regimes or game performance analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
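A simplified sketch of the space-control idea follows: each point of a grid over the pitch is assigned to its nearest player, which is equivalent to Voronoi-cell membership, and the attacking team's share of the pitch is reported. Pitch dimensions, player positions, and grid spacing are illustrative assumptions.

```python
import numpy as np

def space_control(attackers, defenders, pitch=(105.0, 68.0), grid=0.5):
    """Fraction of the pitch controlled by the attacking team.

    Each grid cell is assigned to the nearest player, which is equivalent to
    Voronoi region membership. attackers/defenders are (N, 2) arrays of
    (x, y) positions in metres.
    """
    xs = np.arange(0, pitch[0], grid)
    ys = np.arange(0, pitch[1], grid)
    gx, gy = np.meshgrid(xs, ys)
    cells = np.column_stack([gx.ravel(), gy.ravel()])

    def min_dist(players):
        # distance from every grid cell to its nearest player of one team
        return np.min(np.linalg.norm(cells[:, None, :] - players[None, :, :], axis=2), axis=1)

    attacking = min_dist(attackers) < min_dist(defenders)
    return attacking.mean()

# Toy example: space control for a hypothetical attacking configuration
attackers = np.array([[60.0, 34.0], [70.0, 20.0], [72.0, 45.0]])
defenders = np.array([[80.0, 34.0], [85.0, 25.0], [83.0, 44.0]])
print(f"attacking-team space control: {space_control(attackers, defenders):.2%}")
```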
Investigation of the effects of sleeper-passing impacts on the high-speed train
NASA Astrophysics Data System (ADS)
Wu, Xingwen; Cai, Wubin; Chi, Maoru; Wei, Lai; Shi, Huailong; Zhu, Minhao
2015-12-01
The sleeper-passing impact has always been considered negligible in normal conditions, but experimental data obtained from a high-speed train in cold weather revealed significant sleeper-passing impacts on the axle box, bogie frame, and car body. Therefore, in this study, a vertical coupled vehicle/track dynamic model was developed to investigate the sleeper-passing impacts and their effects on the dynamic performance of the high-speed train. In the model, the vehicle is represented with 10 degrees of freedom. The track model is formulated with two rails supported on discrete supports through the finite element method. The contact forces between wheel and rail are estimated using non-linear Hertz contact theory. Parametric studies are conducted to analyse the effects of both vehicle speed and discrete support stiffness on the sleeper-passing impacts. The results show that the sleeper-passing impacts become extremely significant with increased track support stiffness, especially when the frequencies of the sleeper-passing impacts approach the resonance frequencies of the wheel/track system. The damping of the primary suspension can effectively lower the magnitude of the impacts in the resonance speed ranges, but has little effect at other speeds. Finally, a more comprehensive coupled vehicle/track dynamic model integrating a flexible wheelset is developed to discuss the sleeper-passing-induced flexible vibration of the wheelset.
Standard setting: comparison of two methods.
George, Sanju; Haque, M Sayeed; Oyebode, Femi
2006-09-14
The outcome of assessments is determined by the standard-setting method used. There is a wide range of standard-setting methods and the two used most extensively in undergraduate medical education in the UK are the norm-reference and the criterion-reference methods. The aims of the study were to compare these two standard-setting methods for a multiple-choice question examination and to estimate the test-retest and inter-rater reliability of the modified Angoff method. The norm-reference method of standard-setting (mean minus 1 SD) was applied to the 'raw' scores of 78 4th-year medical students on a multiple-choice examination (MCQ). Two panels of raters also set the standard using the modified Angoff method for the same multiple-choice question paper on two occasions (6 months apart). We compared the pass/fail rates derived from the norm reference and the Angoff methods and also assessed the test-retest and inter-rater reliability of the modified Angoff method. The pass rate with the norm-reference method was 85% (66/78) and that by the Angoff method was 100% (78 out of 78). The percentage agreement between Angoff method and norm-reference was 78% (95% CI 69% - 87%). The modified Angoff method had an inter-rater reliability of 0.81-0.82 and a test-retest reliability of 0.59-0.74. There were significant differences in the outcomes of these two standard-setting methods, as shown by the difference in the proportion of candidates that passed and failed the assessment. The modified Angoff method was found to have good inter-rater reliability and moderate test-retest reliability.
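The two standards can be contrasted numerically in a few lines; the synthetic scores and the fixed criterion cut score below are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(65, 10, size=78).clip(0, 100)    # synthetic MCQ scores (%)

# Norm-reference standard: cut score = cohort mean minus one standard deviation
norm_cut = scores.mean() - scores.std(ddof=1)
norm_pass = np.mean(scores >= norm_cut)

# Criterion-reference (e.g. Angoff-derived) standard: a fixed cut score
angoff_cut = 55.0                                     # illustrative panel judgement
angoff_pass = np.mean(scores >= angoff_cut)

print(f"norm-reference cut {norm_cut:.1f}% -> pass rate {norm_pass:.0%}")
print(f"criterion cut {angoff_cut:.1f}% -> pass rate {angoff_pass:.0%}")
```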
Construction of mathematical model for measuring material concentration by colorimetric method
NASA Astrophysics Data System (ADS)
Liu, Bing; Gao, Lingceng; Yu, Kairong; Tan, Xianghua
2018-06-01
This paper uses multiple linear regression to analyze the data of Problem C of the 2017 mathematical modeling contest. First, we established regression models for the concentrations of five substances, but only the model for the urea concentration in milk passed the significance test. The regression model established from the second set of data passed the significance test but suffered from serious multicollinearity. We improved the model using principal component analysis. The improved model is used to control the system so that it is possible to measure material concentration directly by the colorimetric method.
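A minimal sketch of fitting concentration to colour readings by ordinary least squares is shown below; the calibration data are synthetic, and the principal component step the authors used to handle multicollinearity is not reproduced here.

```python
import numpy as np

# Synthetic calibration data: RGB readings of test strips vs. known concentration
rng = np.random.default_rng(1)
conc = np.linspace(0, 100, 30)                        # known concentrations (mg/L)
rgb = np.column_stack([
    200 - 1.2 * conc, 180 - 0.8 * conc, 150 - 0.3 * conc
]) + rng.normal(0, 2, size=(30, 3))

# Ordinary least squares: conc ~ b0 + b1*R + b2*G + b3*B
X = np.column_stack([np.ones(len(conc)), rgb])
beta, *_ = np.linalg.lstsq(X, conc, rcond=None)

pred = X @ beta
r2 = 1 - np.sum((conc - pred) ** 2) / np.sum((conc - conc.mean()) ** 2)
print("coefficients:", np.round(beta, 3), " R^2 =", round(r2, 4))
```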
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akkus, Harun, E-mail: physicisthakkus@gmail.com
2013-12-15
We introduce a method for calculating the deflection angle of light passing close to a massive object. It is based on Fermat's principle. The varying refractive index of the medium around the massive object is obtained from the Buckingham pi-theorem. Highlights: •A different and simpler method for the calculation of the deflection angle of light. •Not a curved space, only 2-D Euclidean space. •Getting a varying refractive index from the Buckingham pi-theorem. •Obtaining some results of general relativity from Fermat's principle.
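For orientation, the general-relativity benchmark that such methods aim to reproduce is the deflection alpha = 4GM/(c^2 b), with b the impact parameter; the short calculation below evaluates it for light grazing the Sun and is not the paper's Buckingham-pi derivation.

```python
import math

# Standard general-relativity light deflection by a spherical mass:
# alpha = 4 G M / (c^2 b), with b the impact parameter (solar limb here).
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg
R_sun = 6.963e8      # m

alpha_rad = 4 * G * M_sun / (c**2 * R_sun)
alpha_arcsec = math.degrees(alpha_rad) * 3600
print(f"deflection at the solar limb: {alpha_arcsec:.2f} arcsec")  # ~1.75"
```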
Evaluating Composite Sampling Methods of Bacillus Spores at Low Concentrations
Hess, Becky M.; Amidan, Brett G.; Anderson, Kevin K.; Hutchison, Janine R.
2016-01-01
Restoring all facility operations after the 2001 Amerithrax attacks took years to complete, highlighting the need to reduce remediation time. Some of the most time intensive tasks were environmental sampling and sample analyses. Composite sampling allows disparate samples to be combined, with only a single analysis needed, making it a promising method to reduce response times. We developed a statistical experimental design to test three different composite sampling methods: 1) single medium single pass composite (SM-SPC): a single cellulose sponge samples multiple coupons with a single pass across each coupon; 2) single medium multi-pass composite: a single cellulose sponge samples multiple coupons with multiple passes across each coupon (SM-MPC); and 3) multi-medium post-sample composite (MM-MPC): a single cellulose sponge samples a single surface, and then multiple sponges are combined during sample extraction. Five spore concentrations of Bacillus atrophaeus Nakamura spores were tested; concentrations ranged from 5 to 100 CFU/coupon (0.00775 to 0.155 CFU/cm2). Study variables included four clean surface materials (stainless steel, vinyl tile, ceramic tile, and painted dry wallboard) and three grime coated/dirty materials (stainless steel, vinyl tile, and ceramic tile). Analysis of variance for the clean study showed two significant factors: composite method (p< 0.0001) and coupon material (p = 0.0006). Recovery efficiency (RE) was higher overall using the MM-MPC method compared to the SM-SPC and SM-MPC methods. RE with the MM-MPC method for concentrations tested (10 to 100 CFU/coupon) was similar for ceramic tile, dry wall, and stainless steel for clean materials. RE was lowest for vinyl tile with both composite methods. Statistical tests for the dirty study showed RE was significantly higher for vinyl and stainless steel materials, but lower for ceramic tile. These results suggest post-sample compositing can be used to reduce sample analysis time when responding to a Bacillus anthracis contamination event of clean or dirty surfaces. PMID:27736999
Evaluating Composite Sampling Methods of Bacillus Spores at Low Concentrations.
Hess, Becky M; Amidan, Brett G; Anderson, Kevin K; Hutchison, Janine R
2016-01-01
Restoring all facility operations after the 2001 Amerithrax attacks took years to complete, highlighting the need to reduce remediation time. Some of the most time intensive tasks were environmental sampling and sample analyses. Composite sampling allows disparate samples to be combined, with only a single analysis needed, making it a promising method to reduce response times. We developed a statistical experimental design to test three different composite sampling methods: 1) single medium single pass composite (SM-SPC): a single cellulose sponge samples multiple coupons with a single pass across each coupon; 2) single medium multi-pass composite: a single cellulose sponge samples multiple coupons with multiple passes across each coupon (SM-MPC); and 3) multi-medium post-sample composite (MM-MPC): a single cellulose sponge samples a single surface, and then multiple sponges are combined during sample extraction. Five spore concentrations of Bacillus atrophaeus Nakamura spores were tested; concentrations ranged from 5 to 100 CFU/coupon (0.00775 to 0.155 CFU/cm2). Study variables included four clean surface materials (stainless steel, vinyl tile, ceramic tile, and painted dry wallboard) and three grime coated/dirty materials (stainless steel, vinyl tile, and ceramic tile). Analysis of variance for the clean study showed two significant factors: composite method (p< 0.0001) and coupon material (p = 0.0006). Recovery efficiency (RE) was higher overall using the MM-MPC method compared to the SM-SPC and SM-MPC methods. RE with the MM-MPC method for concentrations tested (10 to 100 CFU/coupon) was similar for ceramic tile, dry wall, and stainless steel for clean materials. RE was lowest for vinyl tile with both composite methods. Statistical tests for the dirty study showed RE was significantly higher for vinyl and stainless steel materials, but lower for ceramic tile. These results suggest post-sample compositing can be used to reduce sample analysis time when responding to a Bacillus anthracis contamination event of clean or dirty surfaces.
ERIC Educational Resources Information Center
Shulruf, Boaz; Jones, Phil; Turner, Rolf
2015-01-01
The determination of Pass/Fail decisions over borderline grades (i.e., grades which do not clearly distinguish between competent and incompetent examinees) has been an ongoing challenge for academic institutions. This study utilises the Objective Borderline Method (OBM) to determine examinee ability and item difficulty, and from that…
Secret Keepers: Children's Theory of Mind and Their Conception of Secrecy
ERIC Educational Resources Information Center
Colwell, Malinda J.; Corson, Kimberly; Sastry, Anuradha; Wright, Holly
2016-01-01
In this mixed methods study, semi-structured interviews were conducted with 3-5-year-olds (n = 21) in a university-sponsored preschool programme and children completed a theory of mind (ToM) task. After grouping children into pass/no pass groups for the ToM task, analyses using interpretive phenomenology indicated that preschool children explain…
Alternate biomass harvesting systems using conventional equipment
Bryce J. Stokes; William F. Watson; I. Winston Savelle
1985-01-01
Three harvesting methods were field tested in two stand types. Costs and stand utilization rates were developed for a conventional harvesting system, without energy wood recovery; a two-pass roundwood and energy wood system; and a one-pass system that harvests roundwood and energy wood. The systems harvested 20-acre test blocks in two pine pulpwood plantations and in a...
A Bayesian Method for Evaluating Passing Scores: The PPoP Curve
ERIC Educational Resources Information Center
Wainer, Howard; Wang, X. A.; Skorupski, William P.; Bradlow, Eric T.
2005-01-01
In this note, we demonstrate an interesting use of the posterior distributions (and corresponding posterior samples of proficiency) that are yielded by fitting a fully Bayesian test scoring model to a complex assessment. Specifically, we examine the efficacy of the test in combination with the specific passing score that was chosen through expert…
Forward light scatter analysis of the eye in a spatially-resolved double-pass optical system.
Nam, Jayoung; Thibos, Larry N; Bradley, Arthur; Himebaugh, Nikole; Liu, Haixia
2011-04-11
An optical analysis is developed to separate forward light scatter of the human eye from the conventional wavefront aberrations in a double-pass optical system. To quantify the separate contributions made by these micro- and macro-aberrations, respectively, to the spot-image blur in the Shack-Hartmann aberrometer, we develop a metric called radial variance for spot blur. We prove an additivity property for radial variance that allows us to distinguish between spot blur from macro-aberrations and from micro-aberrations. When the method is applied to tear break-up in the human eye, we find that micro-aberrations in the second pass account for about 87% of the double-pass image blur in the Shack-Hartmann wavefront aberrometer under our experimental conditions. © 2011 Optical Society of America
Hydroxyl Tagging Velocimetry in Cavity-Piloted Mach 2 Combustor (Postprint)
2006-01-01
Hydroxyl tagging velocimetry (HTV) measurements were performed in a Mach 2 combustor with a wall cavity flameholder. In the HTV method, ArF excimer laser (193 nm) beams pass through a humid gas flow and dissociate H2O into H + OH; the resulting grid of OH is tracked by planar laser-induced fluorescence to yield about 120 velocity vectors of the two-dimensional flow over a fixed time delay.
[Discussion to the advanced application of scripting in RayStation TPS system].
Zhang, Jianying; Sun, Jing; Wang, Yun
2014-11-01
In this study, implementation methods for several functions are explored on the RayStation 4.0 platform: passing information such as ROI names to a plan prescription Word file; passing the file to RayStation for plan evaluation; and passing the evaluation results to form an evaluation report file. The results show that RayStation scripts can exchange data with Word, as well as control the running of Word and the content of a Word file. Consequently, it is feasible for scripts to interact with third-party software and to upgrade the performance of RayStation itself.
Han, Tao; Mikell, Justin K.; Salehpour, Mohammad; Mourtada, Firas
2011-01-01
Purpose: The deterministic Acuros XB (AXB) algorithm was recently implemented in the Eclipse treatment planning system. The goal of this study was to compare AXB performance to Monte Carlo (MC) and two standard clinical convolution methods: the anisotropic analytical algorithm (AAA) and the collapsed-cone convolution (CCC) method. Methods: Homogeneous water and multilayer slab virtual phantoms were used for this study. The multilayer slab phantom had three different materials, representing soft tissue, bone, and lung. Depth dose and lateral dose profiles from AXB v10 in Eclipse were compared to AAA v10 in Eclipse, CCC in Pinnacle3, and EGSnrc MC simulations for 6 and 18 MV photon beams with open fields for both phantoms. In order to further reveal the dosimetric differences between AXB and AAA or CCC, three-dimensional (3D) gamma index analyses were conducted in slab regions and subregions defined by AAPM Task Group 53. Results: The AXB calculations were found to be closer to MC than both AAA and CCC for all the investigated plans, especially in bone and lung regions. The average differences of depth dose profiles between MC and AXB, AAA, or CCC was within 1.1, 4.4, and 2.2%, respectively, for all fields and energies. More specifically, those differences in bone region were up to 1.1, 6.4, and 1.6%; in lung region were up to 0.9, 11.6, and 4.5% for AXB, AAA, and CCC, respectively. AXB was also found to have better dose predictions than AAA and CCC at the tissue interfaces where backscatter occurs. 3D gamma index analyses (percent of dose voxels passing a 2%∕2 mm criterion) showed that the dose differences between AAA and AXB are significant (under 60% passed) in the bone region for all field sizes of 6 MV and in the lung region for most of field sizes of both energies. The difference between AXB and CCC was generally small (over 90% passed) except in the lung region for 18 MV 10 × 10 cm2 fields (over 26% passed) and in the bone region for 5 × 5 and 10 × 10 cm2 fields (over 64% passed). With the criterion relaxed to 5%∕2 mm, the pass rates were over 90% for both AAA and CCC relative to AXB for all energies and fields, with the exception of AAA 18 MV 2.5 × 2.5 cm2 field, which still did not pass. Conclusions: In heterogeneous media, AXB dose prediction ability appears to be comparable to MC and superior to current clinical convolution methods. The dose differences between AXB and AAA or CCC are mainly in the bone, lung, and interface regions. The spatial distributions of these differences depend on the field sizes and energies. PMID:21776802
SU-F-T-264: VMAT QA with 2D Radiation Measuring Equipment Attached to Gantry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fung, A
2016-06-15
Purpose: To introduce a method of VMAT QA using a 2D measuring device. The 2D device is attached to the gantry throughout the measurement, which eliminates error caused by the angular dependence of the radiation detectors. Methods: A 2D radiation measuring device was attached to the gantry of a linear accelerator, with the center of the detector plane at the isocenter. For each patient plan, two verification plans were created for QA purposes. One was like an ordinary VMAT plan, to be used for radiation delivery. The other was a plan with the gantry angle fixed at zero, representing the dose distribution as seen by the rotating 2D device. Points above a 10% dose threshold were analyzed. Data are in tolerance if they fit within the 3 mm or 3% dose gamma criteria. For each patient, the plan passed when 95% of all the points in the 2D matrix fit the gamma criteria. The following statistics were calculated: number of patient plans passed, percentage of all points passed, and average percentage difference of all points. Results: VMAT QA was performed for patients during one year in our department, and the results were analyzed. All irradiation was with a 6 MV photon beam. Each plan had calculated and measured doses compared. After collecting one year's results, with 81 patient plans analyzed, all (100%) of the plans passed the gamma criteria. Of the points analyzed from all plans, 98.8% passed. Conclusion: This method of attaching a 2D measuring device to the linac gantry proves to be an accurate way to perform VMAT QA. It is simple to use and low cost, and it eliminates the problem of directional dependence.
NASA Astrophysics Data System (ADS)
Takadama, Keiki; Hirose, Kazuyuki; Matsushima, Hiroyasu; Hattori, Kiyohiko; Nakajima, Nobuo
This paper proposes a sleep stage estimation method that can provide an accurate estimate for each person without attaching any devices to the body. In particular, our method learns appropriate multiple band-pass filters to extract the specific heartbeat wave pattern required to estimate the sleep stage. For an accurate estimation, this paper employs a Learning Classifier System (LCS) as the data-mining technique and extends it to estimate the sleep stage. Extensive experiments on five subjects in mixed health confirm the following implications: (1) the proposed method provides more accurate sleep stage estimation than the conventional method, and (2) the sleep stage estimation calculated by the proposed method is robust regardless of the physical condition of the subject.
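One candidate band-pass stage of the kind such a method might learn is sketched below with SciPy; the sampling rate, band edges, and synthetic signal are assumptions for illustration, since the paper's LCS learns the filters from data.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(signal, low_hz, high_hz, fs, order=4):
    """Zero-phase Butterworth band-pass, e.g. to isolate heartbeat components."""
    b, a = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs)
    return filtfilt(b, a, signal)

fs = 100.0                                  # sensor sampling rate (Hz), assumed
t = np.arange(0, 30, 1 / fs)
raw = (np.sin(2 * np.pi * 1.1 * t)          # ~66 bpm heartbeat component
       + 0.8 * np.sin(2 * np.pi * 0.25 * t) # respiration
       + 0.3 * np.random.randn(len(t)))     # noise

heartbeat = bandpass(raw, low_hz=0.8, high_hz=2.0, fs=fs)
```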
Reflex ring laser amplifier system
Summers, M.A.
1983-08-31
The invention is a method and apparatus for providing a reflex ring laser system for amplifying an input laser pulse. The invention is particularly useful in laser fusion experiments where efficient production of high-energy and high power laser pulses is required. The invention comprises a large aperture laser amplifier in an unstable ring resonator which includes a combination spatial filter and beam expander having a magnification greater than unity. An input pulse is injected into the resonator, e.g., through an aperture in an input mirror. The injected pulse passes through the amplifier and spatial filter/expander components on each pass around the ring. The unstable resonator is designed to permit only a predetermined number of passes before the amplified pulse exits the resonator. On the first pass through the amplifier, the beam fills only a small central region of the gain medium. On each successive pass, the beam has been expanded to fill the next concentric non-overlapping region of the gain medium.
Chen, Xiao; Salerno, Michael; Yang, Yang; Epstein, Frederick H.
2014-01-01
Purpose Dynamic contrast-enhanced MRI of the heart is well-suited for acceleration with compressed sensing (CS) due to its spatiotemporal sparsity; however, respiratory motion can degrade sparsity and lead to image artifacts. We sought to develop a motion-compensated CS method for this application. Methods A new method, Block LOw-rank Sparsity with Motion-guidance (BLOSM), was developed to accelerate first-pass cardiac MRI, even in the presence of respiratory motion. This method divides the images into regions, tracks the regions through time, and applies matrix low-rank sparsity to the tracked regions. BLOSM was evaluated using computer simulations and first-pass cardiac datasets from human subjects. Using rate-4 acceleration, BLOSM was compared to other CS methods such as k-t SLR that employs matrix low-rank sparsity applied to the whole image dataset, with and without motion tracking, and to k-t FOCUSS with motion estimation and compensation that employs spatial and temporal-frequency sparsity. Results BLOSM was qualitatively shown to reduce respiratory artifact compared to other methods. Quantitatively, using root mean squared error and the structural similarity index, BLOSM was superior to other methods. Conclusion BLOSM, which exploits regional low rank structure and uses region tracking for motion compensation, provides improved image quality for CS-accelerated first-pass cardiac MRI. PMID:24243528
Production of latex agglutination reagents for pneumococcal serotyping
2013-01-01
Background The current ‘gold standard’ for serotyping pneumococci is the Quellung test. This technique is laborious and requires a certain level of training to correctly perform. Commercial pneumococcal latex agglutination serotyping reagents are available, but these are expensive. In-house production of latex agglutination reagents can be a cost-effective alternative to using commercially available reagents. This paper describes a method for the production and quality control (QC) of latex reagents, including problem solving recommendations, for pneumococcal serotyping. Results Here we describe a method for the production of latex agglutination reagents based on the passive adsorption of antibodies to latex particles. Sixty-five latex agglutination reagents were made using the PneuCarriage Project (PCP) method, of which 35 passed QC. The other 30 reagents failed QC due to auto-agglutination (n=2), no reactivity with target serotypes (n=8) or cross-reactivity with non-target serotypes (n=20). Dilution of antisera resulted in a further 27 reagents passing QC. The remaining three reagents passed QC when prepared without centrifugation and wash steps. Protein estimates indicated that latex reagents that failed QC when prepared using the PCP method passed when made with antiserum containing ≤ 500 μg/ml of protein. Sixty-one nasopharyngeal isolates were serotyped with our in-house latex agglutination reagents, with the results showing complete concordance with the Quellung reaction. Conclusions The method described here to produce latex agglutination reagents allows simple and efficient serotyping of pneumococci and may be applicable to latex agglutination reagents for typing or identification of other microorganisms. We recommend diluting antisera or removing centrifugation and wash steps for any latex reagents that fail QC. Our latex reagents are cost-effective, technically undemanding to prepare and remain stable for long periods of time, making them ideal for use in low-income countries. PMID:23379961
Chen, Wentao; Zhang, Weidong
2009-10-01
In an optical disk drive servo system, to attenuate the external periodic disturbances induced by inevitable disk eccentricity, repetitive control has been used successfully. The performance of a repetitive controller greatly depends on the bandwidth of the low-pass filter included in the repetitive controller. However, owing to the plant uncertainty and system stability, it is difficult to maximize the bandwidth of the low-pass filter. In this paper, we propose an optimality based repetitive controller design method for the track-following servo system with norm-bounded uncertainties. By embedding a lead compensator in the repetitive controller, both the system gain at periodic signal's harmonics and the bandwidth of the low-pass filter are greatly increased. The optimal values of the repetitive controller's parameters are obtained by solving two optimization problems. Simulation and experimental results are provided to illustrate the effectiveness of the proposed method.
Initiation devices, initiation systems including initiation devices and related methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daniels, Michael A.; Condit, Reston A.; Rasmussen, Nikki
Initiation devices may include at least one substrate, an initiation element positioned on a first side of the at least one substrate, and a spark gap electrically coupled to the initiation element and positioned on a second side of the at least one substrate. Initiation devices may include a plurality of substrates where at least one substrate of the plurality of substrates is electrically connected to at least one adjacent substrate of the plurality of substrates with at least one via extending through the at least one substrate. Initiation systems may include such initiation devices. Methods of igniting energetic materials include passing a current through a spark gap formed on at least one substrate of the initiation device, passing the current through at least one via formed through the at least one substrate, and passing the current through an explosive bridge wire of the initiation device.
On the influence of high-pass filtering on ICA-based artifact reduction in EEG-ERP.
Winkler, Irene; Debener, Stefan; Müller, Klaus-Robert; Tangermann, Michael
2015-01-01
Standard artifact removal methods for electroencephalographic (EEG) signals are either based on Independent Component Analysis (ICA) or they regress out ocular activity measured at electrooculogram (EOG) channels. Successful ICA-based artifact reduction relies on suitable pre-processing. Here we systematically evaluate the effects of high-pass filtering at different frequencies. Offline analyses were based on event-related potential data from 21 participants performing a standard auditory oddball task and an automatic artifactual component classifier method (MARA). As a pre-processing step for ICA, high-pass filtering between 1-2 Hz consistently produced good results in terms of signal-to-noise ratio (SNR), single-trial classification accuracy and the percentage of `near-dipolar' ICA components. Relative to no artifact reduction, ICA-based artifact removal significantly improved SNR and classification accuracy. This was not the case for a regression-based approach to remove EOG artifacts.
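As a sketch of the pre-processing order evaluated here, the snippet below high-pass filters a continuous multichannel recording near 1 Hz and then runs ICA; sklearn's FastICA stands in for the ICA step, the MARA classifier is not reimplemented, and the sampling rate and data are placeholders.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

def highpass(data, cutoff_hz, fs, order=4):
    """Zero-phase Butterworth high-pass applied channel-wise (channels x samples)."""
    b, a = butter(order, cutoff_hz, btype="highpass", fs=fs)
    return filtfilt(b, a, data, axis=1)

fs = 250.0
eeg = np.random.randn(32, int(60 * fs))             # placeholder: 32 channels, 60 s

filtered = highpass(eeg, cutoff_hz=1.0, fs=fs)       # 1-2 Hz worked best in the study
ica = FastICA(n_components=32, max_iter=500, random_state=0)
sources = ica.fit_transform(filtered.T).T            # independent components
# Artifactual components would then be identified (e.g. by MARA) and removed,
# and the remaining sources back-projected using ica.mixing_.
```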
Method for the abatement of hydrogen chloride
Winston, S.J.; Thomas, T.R.
1975-11-14
A method is described for reducing the amount of hydrogen chloride contained in a gas stream by reacting the hydrogen chloride with ammonia in the gas phase so as to produce ammonium chloride. The combined gas stream is passed into a condensation and collection vessel, and a cyclonic flow is created in the combined gas stream as it passes through the vessel. The temperature of the gas stream is reduced in the vessel to below the condensation temperature of ammonium chloride in order to crystallize the ammonium chloride on the walls of the vessel. The cyclonic flow creates a turbulence which breaks off the larger particles of ammonium chloride which are, in turn, driven to the bottom of the vessel where the solid ammonium chloride can be removed from the vessel. The gas stream exiting from the condensation and collection vessel is further cleaned and additional ammonium chloride is removed by passing through additional filters.
Method for the abatement of hydrogen chloride
Winston, Steven J.; Thomas, Thomas R.
1977-01-01
The present invention provides a method for reducing the amount of hydrogen chloride contained in a gas stream by reacting the hydrogen chloride with ammonia in the gas phase so as to produce ammonium chloride. The combined gas stream is passed into a condensation and collection vessel and a cyclonic flow is created in the combined gas stream as it passes through the vessel. The temperature of the gas stream is reduced in the vessel to below the condensation temperature of ammonium chloride in order to crystallize the ammonium chloride on the walls of the vessel. The cyclonic flow creates a turbulence which breaks off the larger particles of ammonium chloride which are, in turn, driven to the bottom of the vessel where the solid ammonium chloride can be removed from the vessel. The gas stream exiting from the condensation and collection vessel is further cleaned and additional ammonium chloride is removed by passing through additional filters.
Zhang, Ao; Liu, Tingting; Zheng, Kaiyuan; Liu, Ningbo; Huang, Fei; Li, Weidong; Liu, Tong; Fu, Weihua
2017-01-01
Laparoscopic colorectal surgery has been widely used for colorectal cancer patients and has shown a favorable postoperative morbidity rate. We attempted to evaluate the physiological status of patients by means of the Estimation of Physiologic Ability and Surgical Stress (E-PASS) system and to analyze how the postoperative morbidity rates of open and laparoscopic colorectal cancer surgery vary in patients with different physiological status. In total, 550 colorectal cancer patients who underwent surgical treatment were included. E-PASS and some conventional scoring systems were reviewed to examine their mortality prediction ability. The preoperative risk score (PRS) in the E-PASS system was used to evaluate the physiological status of patients. The difference in postoperative morbidity rate between open and laparoscopic colorectal cancer surgeries was analyzed in patients with different physiological status. E-PASS had better prediction ability than other conventional scoring systems in colorectal cancer surgeries. Postoperative morbidities developed in 143 patients. The parameters in the E-PASS system had positive correlations with postoperative morbidity. The overall postoperative morbidity rate of laparoscopic surgeries was lower than that of open surgeries (19.61% and 28.46%), but the postoperative morbidity rate of laparoscopic surgeries increased more significantly than that of open surgery as PRS increased. When PRS was more than 0.7, the postoperative morbidity rate of laparoscopic surgeries exceeded that of open surgeries. The E-PASS system was capable of evaluating the physiological and surgical risk of colorectal cancer surgery, and PRS could assist preoperative decision-making on the surgical method. Colorectal cancer patients assessed by PRS as having a low physiological risk would be safe to undergo laparoscopic surgery. On the contrary, surgeons should make decisions prudently on the operative method for patients with a high physiological risk. PMID:28816959
ERIC Educational Resources Information Center
Shulruf, Boaz; Booth, Roger; Baker, Heather; Bagg, Warwick; Barrow, Mark
2017-01-01
Decisions about progress through an academic programme are made by Boards of Examiners, on the basis of students' course assessments. For most students such pass/fail grading decisions are straightforward. However, for those students whose results are borderline (either at a pass/fail boundary or boundaries between grades) the exercise of some…
The Effects of Recorded Lectures on Passing Rates in Online Math Courses
ERIC Educational Resources Information Center
Fital-Akelbek, Sandra; Akelbek, Mahmud
2018-01-01
In this mixed method study we investigate the impact of recorded lectures on passing rates in an online math course. For three years, we collected data from approximately 380 students enrolled in a first-year undergraduate online course, College Algebra. The data was used to compare the amount of time students spent watching recorded lectures and…
NASA Astrophysics Data System (ADS)
Wayand, N. E.; Stimberis, J.; Zagrodnik, J.; Mass, C.; Lundquist, J. D.
2016-12-01
Low-level cold air from eastern Washington state often flows westward through mountain passes in the Washington Cascades, creating localized inversions and locally reducing climatological temperatures. The persistence of this inversion during a frontal passage can result in complex patterns of snow and rain that are difficult to predict. Yet, these predictions are critical to support highway avalanche control, ski resort operations, and modeling of headwater snowpack storage. In this study we used observations of precipitation phase from a disdrometer and snow depth sensors across Snoqualmie Pass, WA, to evaluate surface-air-temperature-based and mesoscale-model-based predictions of precipitation phase during the anomalously warm 2014-2015 winter. The skill of surface-based methods was greatly improved by using air temperature from a nearby higher-elevation station, which was less impacted by low-level inversions. Alternatively, we found a hybrid method that combines surface-based predictions with output from the Weather Research and Forecasting mesoscale model to have improved skill over both parent models. These results suggest that prediction of precipitation phase in mountain passes can be improved by incorporating observations or models from above the surface layer.
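A toy sketch of the simplest surface-based approach, a single air-temperature threshold, appears below; the threshold value, the station temperatures, and the idea of substituting a higher-elevation station are assumptions chosen only to illustrate why a station above a cold-air inversion can change the predicted phase.

```python
def precip_phase(air_temp_c, threshold_c=1.0):
    """Classify precipitation phase from a single air-temperature threshold."""
    return "snow" if air_temp_c <= threshold_c else "rain"

# During a low-level inversion, cold air pooled at the pass can mislead a
# surface-based method, while a nearby higher-elevation station sits above it.
pass_temp = -0.5    # deg C measured at the pass, inside the cold pool (assumed)
ridge_temp = 1.8    # deg C at the higher-elevation station, above the inversion (assumed)

print("pass station :", precip_phase(pass_temp))    # 'snow'
print("ridge station:", precip_phase(ridge_temp))   # 'rain'
```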
Digital carrier demodulator employing components working beyond normal limits
NASA Technical Reports Server (NTRS)
Hurd, William J. (Inventor); Sadr, Ramin (Inventor)
1990-01-01
In a digital device having an input comprised of a digital sample stream at a frequency F, a method is disclosed for employing a component designed to work at a frequency less than F. The method, in general, comprises the following steps: dividing the digital sample stream into odd and even digital sample streams, each at a frequency of F/2; passing one of the digital sample streams through the component designed to work at a frequency less than F, where the component responds only to the odd or even digital samples in that stream; delaying the other digital sample stream for the time it takes the first stream to pass through the component; and adding the one digital sample stream, after it has passed through the component, to the other, delayed digital sample stream. In the specific example, the component is a finite impulse response filter of order ((N + 1)/2) and the delaying step comprises passing the other digital sample stream through a shift register for a time (in sampling periods) of ((N + 1)/2) + r, where r is a pipeline delay through the finite impulse response filter.
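A small numerical sketch of this demultiplex-filter-delay-recombine idea follows; the moving-average taps, the delay used for alignment, and the test signal are illustrative assumptions rather than the patent's actual filter.

```python
import numpy as np

def half_rate_filter(x, taps, extra_delay=0):
    """Filter a full-rate stream using a FIR component run at half the rate.

    The stream is split into even and odd samples (each at F/2), the even
    branch is FIR-filtered, the odd branch is delayed by a matching number of
    samples, and the two branches are summed. Taps and delays are illustrative.
    """
    even, odd = x[0::2], x[1::2]                       # two streams at F/2
    filtered = np.convolve(even, taps, mode="full")[:len(even)]
    delay = (len(taps) + 1) // 2 + extra_delay         # align with filter latency
    delayed = np.concatenate([np.zeros(delay), odd])[:len(odd)]
    return filtered + delayed

x = np.cos(2 * np.pi * 0.05 * np.arange(256))          # sample stream at rate F
taps = np.ones(9) / 9.0                                 # simple moving-average FIR
y = half_rate_filter(x, taps)
```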
NASA Astrophysics Data System (ADS)
Noufal, Manthala Padannayil; Abdullah, Kallikuzhiyil Kochunny; Niyas, Puzhakkal; Subha, Pallimanhayil Abdul Raheem
2017-12-01
Aim: This study evaluates the impact of using different evaluation criteria on gamma pass rates in two commercially available QA methods employed for the verification of VMAT plans using different hypothetical planning target volumes (PTVs) and anatomical regions. Introduction: Volumetric modulated arc therapy (VMAT) is a widely accepted technique for delivering highly conformal treatment in a very efficient manner. Because its level of complexity is high in comparison to intensity-modulated radiotherapy (IMRT), the implementation of stringent quality assurance (QA) before treatment delivery is of paramount importance. Material and Methods: Two sets of VMAT plans were generated using Eclipse planning systems, one with five different complex hypothetical three-dimensional PTVs and one including three anatomical regions. The verification of these plans was performed using a MatriXX ionization chamber array embedded inside a MultiCube phantom and a Varian EPID dosimetric system attached to a Clinac iX. The plans were evaluated based on the 3%/3 mm, 2%/2 mm, and 1%/1 mm global gamma criteria and with three low-dose threshold values (0%, 10%, and 20%). Results: The gamma pass rates were above 95% in all VMAT plans when the 3%/3 mm gamma criterion was used and no threshold was applied. In both systems, the pass rates decreased as the criteria became stricter. Higher pass rates were observed when no threshold was applied, and they tended to decrease for the 10% and 20% thresholds. Conclusion: The results confirm the suitability of the equipment used and the validity of the plans. The study also confirmed that the threshold settings greatly affect the gamma pass rates, especially for the lower gamma criteria.
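For readers unfamiliar with the metric, a minimal 1-D global gamma calculation is sketched below; the dose profiles, criteria defaults, and threshold handling are illustrative assumptions, not the MatriXX or EPID vendor implementations.

```python
import numpy as np

def gamma_pass_rate(ref, ev, dx_mm, dose_pct=3.0, dta_mm=3.0, low_dose_threshold=0.10):
    """Global 1-D gamma: for each reference point, find the minimum combined
    dose-difference / distance-to-agreement metric over the evaluated profile."""
    d_crit = dose_pct / 100.0 * ref.max()            # global dose criterion
    x = np.arange(len(ref)) * dx_mm
    gammas = []
    for i, d_ref in enumerate(ref):
        if d_ref < low_dose_threshold * ref.max():   # points below the threshold are excluded
            continue
        dd = (ev - d_ref) / d_crit
        dta = (x - x[i]) / dta_mm
        gammas.append(np.sqrt(dd**2 + dta**2).min())
    return 100.0 * np.mean(np.array(gammas) <= 1.0)

ref = np.exp(-((np.arange(100) - 50) / 15.0) ** 2)   # toy reference profile
ev  = np.exp(-((np.arange(100) - 51) / 15.0) ** 2)   # toy evaluated profile, shifted 1 mm
print(gamma_pass_rate(ref, ev, dx_mm=1.0))
```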
NASA Astrophysics Data System (ADS)
Atiq, Maria; Atiq, Atia; Iqbal, Khalid; Shamsi, Quratul ain; Andleeb, Farah; Buzdar, Saeed Ahmad
2017-12-01
Objective: The Gamma Index is a prerequisite for estimating the point-by-point difference between measured and calculated dose distributions in terms of both Distance to Agreement (DTA) and Dose Difference (DD). This study aims to determine what percentage of pixels passing a given criterion assures a good-quality plan, and suggests the gamma index as an efficient mechanism for dose verification of Simultaneous Integrated Boost Intensity Modulated Radiotherapy (SIB IMRT) plans. Method: In this study, dose was calculated for 14 head and neck patients and IMRT Quality Assurance was performed with portal dosimetry using the Eclipse treatment planning system. The Eclipse software has a gamma analysis function to compare measured and calculated dose distributions. Plans in this study were deemed acceptable when the passing rate was 95%, using a tolerance for Distance to Agreement (DTA) of 3 mm and Dose Difference (DD) of 5%. Result and Conclusion: Thirteen cases passed the 95% tolerance criterion set by our institution. The Confidence Limit for DD is 9.3%, and for the gamma criterion our local CL came out to be 2.0% (i.e., 98.0% passing). A lack of correlation was found between DD and γ passing rate, with an R² of 0.0509. Our findings underline the importance of the gamma analysis method for predicting the quality of dose calculation. A passing rate of 95% was achieved in 93% of cases, which is an adequate level of accuracy for the analyzed plans, thus assuring the robustness of the SIB IMRT treatment technique. This study can be extended to investigate a gamma criterion of 5%/3 mm for different tumor localities and to explore confidence limits on target volumes of small extent and simple geometry.
Hospital hand hygiene opportunities: where and when (HOW2)? The HOW2 Benchmark Study.
Steed, Connie; Kelly, J William; Blackhurst, Dawn; Boeker, Sue; Diller, Thomas; Alper, Paul; Larson, Elaine
2011-02-01
Measurement and monitoring of health care workers' hand hygiene compliance (i.e., actions/opportunities) is a key component of strategies to eliminate hospital-acquired infections. Little data exist on the expected number of hand hygiene opportunities (HHOs) in various hospital settings, however. The purpose of this study was to estimate HHOs in 2 types of hospitals (large teaching and small community) and 3 clinical areas (medical-surgical intensive care units, general medical wards, and emergency departments). HHO data were collected through direct observations using the World Health Organization's monitoring methodology. Estimates of HHOs were developed for 12-hour AM/PM shifts and 24-hour time frames. During 436.7 hours of observation, 6,640 HHOs were identified. Estimates of HHOs ranged from 30 to 179 per patient-day on inpatient wards and from 1.84 to 5.03 per bed-hour in emergency departments. Significant differences in HHOs were found between the 2 hospital types and among the 3 clinical areas. This study is the first to use the World Health Organization's data collection methodology to estimate HHOs in general medical wards and emergency departments. These data can be used as denominator estimates to calculate hand hygiene compliance rates when product utilization data are available. Copyright © 2011 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.
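Used as denominator data, HHO estimates turn product utilization counts into compliance rates with simple arithmetic. A worked example with invented numbers (only the general form of the calculation is taken from the study):

```python
# Compliance = hand hygiene actions / hand hygiene opportunities.
# Hypothetical month on a general medical ward; the HHO-per-patient-day figure is the
# kind of benchmark the study provides, the remaining counts are invented.
dispensing_events = 18_500          # soap + sanitizer actuations (product utilization data)
patient_days = 620
hho_per_patient_day = 45            # benchmark-style denominator estimate

opportunities = patient_days * hho_per_patient_day
print(f"estimated compliance: {100 * dispensing_events / opportunities:.0f}%")   # ~66%
```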
Sun, Jun; Duan, Yizhou; Li, Jiangtao; Liu, Jiaying; Guo, Zongming
2013-01-01
In the first part of this paper, we derive a source model describing the relationship between the rate, distortion, and quantization steps of the dead-zone plus uniform threshold scalar quantizers with nearly uniform reconstruction quantizers for a generalized Gaussian distribution. This source model consists of rate-quantization, distortion-quantization (D-Q), and distortion-rate (D-R) models. In this part, we first rigorously confirm the accuracy of the proposed source model by comparing the calculated results with the coding data of JM 16.0. Efficient parameter estimation strategies are then developed to better employ this source model in our two-pass rate control method for H.264 variable bit rate coding. Based on our D-Q and D-R models, the proposed method offers high stability and low complexity and is easy to implement. Extensive experiments demonstrate that the proposed method achieves: 1) an average peak signal-to-noise ratio variance of only 0.0658 dB, compared to 1.8758 dB for JM 16.0's method, with an average rate control error of 1.95%, and 2) a significant improvement in smoothing the video quality compared with the latest two-pass rate control method.
Modal parameter identification using the log decrement method and band-pass filters
NASA Astrophysics Data System (ADS)
Liao, Yabin; Wells, Valana
2011-10-01
This paper presents a time-domain technique for identifying modal parameters of test specimens based on the log-decrement method. For lightly damped multidegree-of-freedom or continuous systems, the conventional method is usually restricted to identification of fundamental-mode parameters only. Implementation of band-pass filters makes it possible for the proposed technique to extract modal information for higher modes. The method has been applied to a polymethyl methacrylate (PMMA) beam for complex modulus identification in the frequency range 10-1100 Hz. Results compare well with those obtained using the Least Squares method, and with those previously published in the literature. The accuracy of the proposed method was then further verified by experiments performed on a QuietSteel specimen with very low damping. The method is simple and fast. It can be used for a quick estimation of the modal parameters, or as a complementary approach for validation purposes.
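A minimal sketch of the band-pass-filter-plus-log-decrement idea on a synthetic two-mode free decay; the filter design, mode frequencies, and damping values are illustrative assumptions, not the authors' test setup.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 5000.0
t = np.arange(0, 2.0, 1 / fs)
# Synthetic free decay with two lightly damped modes at 40 Hz and 260 Hz.
x = (np.exp(-2.0 * t) * np.sin(2 * np.pi * 40 * t)
     + 0.5 * np.exp(-5.0 * t) * np.sin(2 * np.pi * 260 * t))

# Band-pass around the second mode so the log decrement sees a single-mode decay.
b, a = butter(4, [200, 320], btype="bandpass", fs=fs)
x2 = filtfilt(b, a, x)

peaks, _ = find_peaks(x2)
A = x2[peaks[5:36]]                              # 31 successive positive peaks (skip edge transient)
delta = np.log(A[0] / A[-1]) / (len(A) - 1)      # log decrement per cycle
zeta = delta / np.sqrt(4 * np.pi**2 + delta**2)  # damping ratio, expected ~5/(2*pi*260) ~ 0.003
print(delta, zeta)
```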
Yang, Guowei; You, Shengzui; Bi, Meihua; Fan, Bing; Lu, Yang; Zhou, Xuefang; Li, Jing; Geng, Hujun; Wang, Tianshu
2017-09-10
Free-space optical (FSO) communication utilizing a modulating retro-reflector (MRR) is an innovative way to convey information between the traditional optical transceiver and the semi-passive MRR unit that reflects optical signals. The reflected signals experience turbulence-induced fading in the double-pass channel, which is very different from that in the traditional single-pass FSO channel. In this paper, we consider the corner cube reflector (CCR) as the retro-reflective device in the MRR. A general geometrical model of the CCR is established based on the ray tracing method to describe the ray trajectory inside the CCR. This ray tracing model could treat the general case that the optical beam is obliquely incident on the hypotenuse surface of the CCR with the dihedral angle error and surface nonflatness. Then, we integrate this general CCR model into the wave-optics (WO) simulation to construct the double-pass beam propagation simulation. This double-pass simulation contains the forward propagation from the transceiver to the MRR through the atmosphere, the retro-reflection of the CCR, and the backward propagation from the MRR to the transceiver, which can be realized by a single-pass WO simulation, the ray tracing CCR model, and another single-pass WO simulation, respectively. To verify the proposed CCR model and double-pass WO simulation, the effective reflection area, the incremental phase, and the reflected beam spot on the transceiver plane of the CCR are analyzed, and the numerical results are in agreement with the previously published results. Finally, we use the double-pass WO simulation to investigate the double-pass channel in the MRR FSO systems. The histograms of the turbulence-induced fading in the forward and backward channels are obtained from the simulation data and are fitted by gamma-gamma (ΓΓ) distributions. As the two opposite channels are highly correlated, we model the double-pass channel fading by the product of two correlated ΓΓ random variables (RVs).
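A rough Monte-Carlo sketch of the double-pass fading model described above, assuming each pass follows a gamma-gamma law built from large- and small-scale gamma factors, with correlation between the forward and backward passes introduced by sharing the large-scale factor. The alpha/beta values and the sharing assumption are illustrative, not the paper's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, n = 4.0, 2.0, 100_000             # illustrative large/small-scale turbulence parameters

# Gamma-gamma fading: product of unit-mean large-scale and small-scale gamma factors.
large = rng.gamma(alpha, 1.0 / alpha, n)       # shared large-scale scintillation
small_fwd = rng.gamma(beta, 1.0 / beta, n)     # small-scale factor, forward pass
small_bwd = rng.gamma(beta, 1.0 / beta, n)     # small-scale factor, backward pass

h_fwd = large * small_fwd
h_bwd = large * small_bwd
h_double = h_fwd * h_bwd                       # double-pass channel gain (product of correlated GG RVs)

print("correlation(fwd, bwd):", np.corrcoef(h_fwd, h_bwd)[0, 1])
print("scintillation index (double pass):", h_double.var() / h_double.mean() ** 2)
```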
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, H.L.; Spronsen, G. van; Klaus, E.H.
A simulation model of the dynamics of a by-pass pig and related two-phase flow behavior, along with field trials of the pig in a dry-gas pipeline, have revealed significant gains from using a by-pass pig to modify gas and liquid production rates. The method can widen the possibility of applying two-phase flow pipeline transportation to cases in which separator or slug-catcher capacity is limited by practicality or cost. Pigging two-phase pipelines normally generates large liquid slug volumes in front of the pig. These require large separators or slug catchers. Using a high by-pass pig to disperse the liquid and reduce the maximum liquid production rate before pig arrival has been investigated by Shell Exploration and Production companies. A simulation model of the dynamics of the pig and related two-phase flow behavior in the pipeline was used to predict the performance of by-pass pigs. Field trials in a dry-gas pipeline were carried out to provide friction data and to validate the model. The predicted mobility of the high by-pass pig in the pipeline and risers was verified, and the beneficial effects due to the by-pass concept exceeded the prediction of the simplified model.
Strategies for lowering attrition rates and raising NCLEX-RN pass rates.
Higgins, Bonnie
2005-12-01
This study was designed to determine strategies to raise the NCLEX-RN pass rate and lower the attrition rate in a community college nursing program. Ex-post facto data were collected from 213 former nursing student records. Qualitative data were collected from 10 full-time faculty, 30 new graduates, and 45 directors of associate degree nursing programs in Texas. The findings linked the academic variables of two biology courses and three components of the preadmission test to completion of the nursing program. A relationship was found between one biology course, the science component of the preadmission test, the HESI Exit Examination score, and the nursing skills course to passing the NCLEX-RN. Qualitative data indicated preadmission requirements, campus counselors, remediation, faculty, test-item writing, and teaching method were instrumental in completion of the program and passing the NCLEX-RN.
On the Concept of Varying Influence Radii for a Successive Corrections Objective Analysis
NASA Technical Reports Server (NTRS)
Achtemeier, Gary L.
1991-01-01
There has been a long-standing belief among those who use successive corrections objective analysis that the way to obtain the most accurate objective analysis is first to analyze for the long wavelengths and then to build in the details of the shorter wavelengths by successively decreasing the influence of the more distant observations upon the interpolated values. Using the Barnes method, the filter characteristics were compared for families of response curves that pass through a common point at a reference wavelength. It was found that the filter cutoff is a maximum if the filter parameters that determine the influence of observations are unchanged for both the initial and correction passes. This information was used to define and test the following hypothesis. If accuracy is defined by how well the method retains desired wavelengths and removes undesired wavelengths, then the Barnes method gives the most accurate analyses if the filter parameters on the initial and correction passes are the same. This hypothesis does not follow the usual conceptual approach to successive corrections analysis.
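A compact sketch of a two-pass Barnes successive-corrections analysis on scattered 1-D observations; keeping the Gaussian weight parameter unchanged on the correction pass follows the hypothesis stated above, while the station locations, analysed field, and parameter value are synthetic.

```python
import numpy as np

def barnes_pass(xg, xo, values, kappa):
    """One Barnes pass: Gaussian-weighted average of observation values onto grid xg."""
    w = np.exp(-((xg[:, None] - xo[None, :]) ** 2) / kappa)
    return (w * values[None, :]).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
xo = np.sort(rng.uniform(0, 10, 40))          # scattered observation locations
obs = np.sin(xo) + 0.3 * np.sin(5 * xo)       # observed field (long + short wavelength)
xg = np.linspace(0, 10, 200)                  # analysis grid
kappa = 0.5                                   # same weight parameter on both passes

first = barnes_pass(xg, xo, obs, kappa)                       # initial pass
resid_at_obs = obs - np.interp(xo, xg, first)                 # residuals at the stations
analysis = first + barnes_pass(xg, xo, resid_at_obs, kappa)   # correction pass
```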
Dynamic mask for producing uniform or graded-thickness thin films
Folta, James A [Livermore, CA
2006-06-13
A method for producing single layer or multilayer films with high thickness uniformity or thickness gradients. The method utilizes a moving mask which blocks some of the flux from a sputter target or evaporation source before it deposits on a substrate. The velocity and position of the mask are computer controlled to precisely tailor the film thickness distribution. The method is applicable to any type of vapor deposition system, but is particularly useful for ion beam sputter deposition and evaporation deposition; and enables a high degree of uniformity for ion beam deposition, even for near-normal incidence of deposition species, which may be critical for producing low-defect multilayer coatings, such as those required for masks for extreme ultraviolet lithography (EUVL). The mask can have a variety of shapes, from a simple solid paddle shape to a larger mask with a shaped hole through which the flux passes. The motion of the mask can be linear or rotational, and the mask can be moved to make single or multiple passes in front of the substrate per layer, and can pass completely or partially across the substrate.
Effective method for detecting regions of given colors and the features of the region surfaces
NASA Astrophysics Data System (ADS)
Gong, Yihong; Zhang, HongJiang
1994-03-01
Color can be used as a very important cue for image recognition. In industrial and commercial areas, color is widely used as a trademark or identifying feature in objects, such as packaged goods, advertising signs, etc. In image database systems, one may retrieve an image of interest by specifying prominent colors and their locations in the image (image retrieval by contents). These facts enable us to detect or identify a target object using colors. However, this task depends mainly on how effectively we can identify a color and detect regions of the given color under possibly non-uniform illumination conditions such as shade, highlight, and strong contrast. In this paper, we present an effective method to detect regions matching given colors, along with the features of the region surfaces. We adopt the HVC color coordinates in the method because of their ability to completely separate the luminance and chromatic components of colors. Three basis functions, functionally serving as the low-pass, high-pass, and band-pass filters, respectively, are introduced.
Simplified Method for Groundwater Treatment Using Dilution and Ceramic Filter
NASA Astrophysics Data System (ADS)
Musa, S.; Ariff, N. A.; Kadir, M. N. Abdul; Denan, F.
2016-07-01
Groundwater is a natural resource that is susceptible to pollutants. Increasing municipal, industrial, agricultural, or extreme land use activities have resulted in groundwater contamination, as occurred at the Research Centre for Soft Soil Malaysia (RECESS), Universiti Tun Hussein Onn Malaysia (UTHM). The aim of this study is therefore to treat groundwater by using rainwater and a simple ceramic filter as treatment agents. The treatment uses rainwater dilution, ceramic filtration, and a combined dilution-and-filtration method as alternative treatments that are simpler and more practical than modern or chemical methods. Water that went through the dilution treatment achieved a 57% reduction compared to the initial condition, while water that passed through the filtration process removed as much as 86% of the groundwater parameters, with only chloride failing the standard. The combined dilution-and-filtration method gave favorable results, successfully eliminating 100% of the parameters that did not meet the standards of the Ministry of Health and the Interim National Drinking Water Quality Standard for the groundwater at RECESS, UTHM, especially sulfate and chloride. As a result, the raw water can be used as clean and safe drinking water. This also shows that the method used in this study is very effective in improving the quality of groundwater.
NASA Astrophysics Data System (ADS)
Brewe, Eric; Dou, Remy; Shand, Robert
2018-02-01
Although active learning is supported by strong evidence of efficacy in undergraduate science instruction, institutions of higher education have yet to embrace comprehensive change. Costs of transforming instruction are regularly cited as a key factor in not adopting active-learning instructional practices. Some cite that alternative methods to stadium-style, lecture-based education are not financially viable to an academic department. This paper examines that argument by presenting an ingredients approach to estimating costs of two instructional methods used in introductory university physics courses at a large public U.S. university. We use a metric common in educational economics, cost effectiveness (CE), which is the total cost per student passing the class. We then compare the CE of traditional, passive-learning lecture courses to those of a well-studied, active-learning curriculum (Modeling Instruction) as a way of evaluating the claim that active learning is cost prohibitive. Our findings are that the Modeling Instruction approach has a higher cost per passing student (MI = $1,030 per passing student vs. Trad = $790 per passing student). These results are discussed from perspectives of university administrators, students, and taxpayers. We consider how MI would need to adapt in order to make the benefits of active learning (particularly higher pass rates and gains on multiple measured student outcomes) available in a cost-neutral setting. This approach aims to provide a methodology to better inform decision makers balancing financial, personnel, and curricular considerations.
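The cost-effectiveness metric reduces to simple arithmetic; a hedged worked example with invented numbers (not the study's cost ingredients or enrollments):

```python
# Cost effectiveness = total instructional cost / number of students who pass.
# Hypothetical ingredients for one course offering; the paper's actual figures differ.
total_cost = 120_000.0        # instructor time, space, equipment, support (USD)
enrolled = 150
pass_rate = 0.78

ce = total_cost / (enrolled * pass_rate)
print(f"cost per passing student: ${ce:,.0f}")   # ~ $1,026 per passing student
```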
Evaluating single-pass catch as a tool for identifying spatial pattern in fish distribution
Bateman, Douglas S.; Gresswell, Robert E.; Torgersen, Christian E.
2005-01-01
We evaluate the efficacy of single-pass electrofishing without blocknets as a tool for collecting spatially continuous fish distribution data in headwater streams. We compare spatial patterns in abundance, sampling effort, and length-frequency distributions from single-pass sampling of coastal cutthroat trout (Oncorhynchus clarki clarki) to data obtained from a more precise multiple-pass removal electrofishing method in two mid-sized (500–1000 ha) forested watersheds in western Oregon. Abundance estimates from single- and multiple-pass removal electrofishing were positively correlated in both watersheds, r = 0.99 and 0.86. There were no significant trends in capture probabilities at the watershed scale (P > 0.05). Moreover, among-sample variation in fish abundance was higher than within-sample error in both streams indicating that increased precision of unit-scale abundance estimates would provide less information on patterns of abundance than increasing the fraction of habitat units sampled. In the two watersheds, respectively, single-pass electrofishing captured 78 and 74% of the estimated population of cutthroat trout with 7 and 10% of the effort. At the scale of intermediate-sized watersheds, single-pass electrofishing exhibited a sufficient level of precision to be effective in detecting spatial patterns of cutthroat trout abundance and may be a useful tool for providing the context for investigating fish-habitat relationships at multiple scales.
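As context for the single- versus multiple-pass comparison, the classical two-pass removal estimator can be sketched as follows; the catch numbers are invented, and the formula assumes a closed population with equal capture probability on both passes.

```python
def two_pass_removal(c1: int, c2: int):
    """Two-pass removal estimate of abundance and capture probability.
    Assumes constant capture probability and a closed population; valid only when c1 > c2."""
    if c1 <= c2:
        raise ValueError("removal estimator requires declining catches (c1 > c2)")
    n_hat = c1 ** 2 / (c1 - c2)        # estimated abundance
    p_hat = 1 - c2 / c1                # estimated per-pass capture probability
    return n_hat, p_hat

print(two_pass_removal(c1=42, c2=12))  # hypothetical catches from two electrofishing passes
```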
NASA Astrophysics Data System (ADS)
Hong, Wei; Wang, Shaoping; Liu, Haokuo; Tomovic, Mileta M.; Chao, Zhang
2017-01-01
Inductive debris detection is an effective method for monitoring mechanical wear and could be used to prevent serious accidents. However, debris detection during the early phase of mechanical wear, when small debris (<100 μm) is generated, requires that the sensor have high sensitivity with respect to background noise. In order to detect smaller debris with existing sensors, this paper presents a hybrid method which combines a band-pass filter and a correlation algorithm to improve the sensor signal-to-noise ratio (SNR). The simulation results indicate that the SNR will be improved by at least a factor of 2.67 after signal processing. In other words, this method ensures debris identification when the sensor's SNR is greater than -3 dB. Thus, smaller debris will be detected at the same SNR. Finally, the effectiveness of the proposed method is experimentally validated.
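A minimal sketch of the band-pass-plus-correlation idea for pulling a small signature out of noise; the pulse shape, filter band, and SNR definition are illustrative assumptions rather than the paper's sensor model.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 5000.0
t = np.arange(0, 1.0, 1 / fs)
pulse = np.exp(-((t - 0.5) / 0.002) ** 2) * np.sin(2 * np.pi * 500 * (t - 0.5))  # debris-like signature
rng = np.random.default_rng(2)
noisy = 0.05 * pulse + rng.normal(0, 0.05, t.size)       # signature buried in broadband noise

b, a = butter(4, [300, 700], btype="bandpass", fs=fs)    # band-pass around the signature band
filtered = filtfilt(b, a, noisy)

kernel = pulse[np.abs(t - 0.5) < 0.01]                   # short template of the known signature
matched = np.correlate(filtered, kernel, mode="same")    # correlation (matched-filter) stage

def snr_db(x, in_pulse):
    """Crude SNR: mean power inside the pulse window vs. outside it."""
    return 10 * np.log10(np.mean(x[in_pulse] ** 2) / np.mean(x[~in_pulse] ** 2))

window = np.abs(t - 0.5) < 0.01
print(f"raw: {snr_db(noisy, window):.1f} dB, after BPF+correlation: {snr_db(matched, window):.1f} dB")
```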
Plenoptic image watermarking to preserve copyright
NASA Astrophysics Data System (ADS)
Ansari, A.; Dorado, A.; Saavedra, G.; Martinez Corral, M.
2017-05-01
A common camera loses a huge amount of the information obtainable from a scene, as it does not record the values of individual rays passing through a point; it merely keeps the summation of the intensities of all the rays passing through that point. Plenoptic images can be exploited to provide a 3D representation of the scene, and watermarking such images can help protect their ownership. In this paper we propose a method for watermarking plenoptic images to achieve this aim. The performance of the proposed method is validated by experimental results, and a compromise is reached between imperceptibility and robustness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gropp, W.D.; Keyes, D.E.
1988-03-01
The authors discuss the parallel implementation of preconditioned conjugate gradient (PCG)-based domain decomposition techniques for self-adjoint elliptic partial differential equations in two dimensions on several architectures. The complexity of these methods is described on a variety of message-passing parallel computers as a function of the size of the problem, number of processors and relative communication speeds of the processors. They show that communication startups are very important, and that even the small amount of global communication in these methods can significantly reduce the performance of many message-passing architectures.
System and method for disrupting suspect objects
Gladwell, T. Scott; Garretson, Justin R; Hobart, Clinton G; Monda, Mark J
2013-07-09
A system and method for disrupting at least one component of a suspect object is provided. The system includes a source for passing radiation through the suspect object, a screen for receiving the radiation passing through the suspect object and generating at least one image therefrom, a weapon having a discharge deployable therefrom, and a targeting unit. The targeting unit displays the image(s) of the suspect object and aims the weapon at a disruption point on the displayed image such that the weapon may be positioned to deploy the discharge at the disruption point whereby the suspect object is disabled.
ERIC Educational Resources Information Center
Warfvinge, Per
2008-01-01
The ECTS grade transfer scale is an interface grade scale to help European universities, students and employers to understand the level of student achievement. Hence, the ECTS scale can be seen as an interface, transforming local scales to a common system where A-E denote passing grades. By definition, ECTS should distribute the passing students…
Implementing statistical equating for MRCP(UK) Parts 1 and 2.
McManus, I C; Chis, Liliana; Fox, Ray; Waller, Derek; Tang, Peter
2014-09-26
The MRCP(UK) exam, in 2008 and 2010, changed the standard-setting of its Part 1 and Part 2 examinations from a hybrid Angoff/Hofstee method to statistical equating using Item Response Theory, the reference group being UK graduates. The present paper considers the implementation of the change, the question of whether the pass rate increased amongst non-UK candidates, any possible role of Differential Item Functioning (DIF), and changes in examination predictive validity after the change. Analysis of data of MRCP(UK) Part 1 exam from 2003 to 2013 and Part 2 exam from 2005 to 2013. Inspection suggested that Part 1 pass rates were stable after the introduction of statistical equating, but showed greater annual variation probably due to stronger candidates taking the examination earlier. Pass rates seemed to have increased in non-UK graduates after equating was introduced, but was not associated with any changes in DIF after statistical equating. Statistical modelling of the pass rates for non-UK graduates found that pass rates, in both Part 1 and Part 2, were increasing year on year, with the changes probably beginning before the introduction of equating. The predictive validity of Part 1 for Part 2 was higher with statistical equating than with the previous hybrid Angoff/Hofstee method, confirming the utility of IRT-based statistical equating. Statistical equating was successfully introduced into the MRCP(UK) Part 1 and Part 2 written examinations, resulting in higher predictive validity than the previous Angoff/Hofstee standard setting. Concerns about an artefactual increase in pass rates for non-UK candidates after equating were shown not to be well-founded. Most likely the changes resulted from a genuine increase in candidate ability, albeit for reasons which remain unclear, coupled with a cognitive illusion giving the impression of a step-change immediately after equating began. Statistical equating provides a robust standard-setting method, with a better theoretical foundation than judgemental techniques such as Angoff, and is more straightforward and requires far less examiner time to provide a more valid result. The present study provides a detailed case study of introducing statistical equating, and issues which may need to be considered with its introduction.
Method and apparatus for in-system redundant array repair on integrated circuits
Bright, Arthur A [Croton-on-Hudson, NY; Crumley, Paul G [Yorktown Heights, NY; Dombrowa, Marc B [Bronx, NY; Douskey, Steven M [Rochester, MN; Haring, Rudolf A [Cortlandt Manor, NY; Oakland, Steven F [Colchester, VT; Ouellette, Michael R [Westford, VT; Strissel, Scott A [Byron, MN
2008-07-29
Disclosed is a method of repairing an integrated circuit of the type comprising a multitude of memory arrays and a fuse box holding control data for controlling redundancy logic of the arrays. The method comprises the steps of providing the integrated circuit with a control data selector for passing the control data from the fuse box to the memory arrays; providing a source of alternate control data, external to the integrated circuit; and connecting the source of alternate control data to the control data selector. The method comprises the further step of, at a given time, passing the alternate control data from the source thereof, through the control data selector and to the memory arrays to control the redundancy logic of the memory arrays.
Method and apparatus for in-system redundant array repair on integrated circuits
Bright, Arthur A [Croton-on-Hudson, NY; Crumley, Paul G [Yorktown Heights, NY; Dombrowa, Marc B [Bronx, NY; Douskey, Steven M [Rochester, MN; Haring, Rudolf A [Cortlandt Manor, NY; Oakland, Steven F [Colchester, VT; Ouellette, Michael R [Westford, VT; Strissel, Scott A [Byron, MN
2008-07-08
Disclosed is a method of repairing an integrated circuit of the type comprising a multitude of memory arrays and a fuse box holding control data for controlling redundancy logic of the arrays. The method comprises the steps of providing the integrated circuit with a control data selector for passing the control data from the fuse box to the memory arrays; providing a source of alternate control data, external to the integrated circuit; and connecting the source of alternate control data to the control data selector. The method comprises the further step of, at a given time, passing the alternate control data from the source thereof, through the control data selector and to the memory arrays to control the redundancy logic of the memory arrays.
Method and apparatus for in-system redundant array repair on integrated circuits
Bright, Arthur A.; Crumley, Paul G.; Dombrowa, Marc B.; Douskey, Steven M.; Haring, Rudolf A.; Oakland, Steven F.; Ouellette, Michael R.; Strissel, Scott A.
2007-12-18
Disclosed is a method of repairing an integrated circuit of the type comprising a multitude of memory arrays and a fuse box holding control data for controlling redundancy logic of the arrays. The method comprises the steps of providing the integrated circuit with a control data selector for passing the control data from the fuse box to the memory arrays; providing a source of alternate control data, external to the integrated circuit; and connecting the source of alternate control data to the control data selector. The method comprises the further step of, at a given time, passing the alternate control data from the source thereof, through the control data selector and to the memory arrays to control the redundancy logic of the memory arrays.
Estimating Extracellular Spike Waveforms from CA1 Pyramidal Cells with Multichannel Electrodes
Molden, Sturla; Moldestad, Olve; Storm, Johan F.
2013-01-01
Extracellular (EC) recordings of action potentials from the intact brain are embedded in background voltage fluctuations known as the "local field potential" (LFP). In order to use EC spike recordings for studying biophysical properties of neurons, the spike waveforms must be separated from the LFP. Linear low-pass and high-pass filters are usually insufficient to separate spike waveforms from LFP, because they have overlapping frequency bands. Broad-band recordings of LFP and spikes were obtained with a 16-channel laminar electrode array (silicone probe). We developed an algorithm whereby local LFP signals from the spike-containing channel were modeled using locally weighted polynomial regression analysis of adjoining channels without spikes. The modeled LFP signal was subtracted from the recording to estimate the embedded spike waveforms. We tested the method both on defined spike waveforms added to LFP recordings, and on in vivo-recorded extracellular spikes from hippocampal CA1 pyramidal cells in anaesthetized mice. We show that the algorithm can correctly extract the spike waveforms embedded in the LFP. In contrast, traditional high-pass filters failed to recover correct spike shapes, albeit producing smaller standard errors. We found that high-pass RC or 2-pole Butterworth filters with cut-off frequencies below 12.5 Hz are required to retrieve waveforms comparable to our method. The method was also compared to spike-triggered averages of the broad-band signal, and yielded waveforms with smaller standard errors and less distortion before and after the spike. PMID:24391714
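A toy version of the channel-wise LFP estimation described above: at each time sample, fit a low-order polynomial across the spike-free neighbouring channels and evaluate it at the spike channel. The channel geometry, polynomial order, and synthetic data are illustrative assumptions standing in for the authors' locally weighted scheme.

```python
import numpy as np

rng = np.random.default_rng(3)
n_ch, n_t = 16, 400
depth = np.arange(n_ch)

# Synthetic LFP varying smoothly across channels, plus a spike on channel 8 only.
lfp = np.sin(0.3 * depth)[:, None] * np.sin(2 * np.pi * 8 * np.linspace(0, 0.4, n_t))[None, :]
spike = np.zeros(n_t)
spike[200:220] = -np.hanning(20)
data = lfp + 0.02 * rng.normal(size=(n_ch, n_t))
data[8] += spike

spike_ch, neighbours = 8, [4, 5, 6, 10, 11, 12]   # adjoining channels without spikes
est_lfp = np.empty(n_t)
for k in range(n_t):
    coeffs = np.polyfit(depth[neighbours], data[neighbours, k], deg=2)  # fit across channels
    est_lfp[k] = np.polyval(coeffs, depth[spike_ch])                    # evaluate at spike channel

recovered_spike = data[spike_ch] - est_lfp        # spike waveform with the modeled LFP removed
```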
NASA Technical Reports Server (NTRS)
Wier, Larry T.; Jackson, Allen W.; Jackson, Andrew S.
2009-01-01
The physical activity guidelines (PAG) established by the US Dept. of Health and Human Services in 2008 are consistent with a rating of ≥6 on the 11-point NASA Physical Activity Status Scale (PASS). Wier et al. developed non-exercise models for estimating VO2max from a combination of PASS, age, gender and either waist girth (WG) (R = 0.810, SEE = 4.799 ml/kg/min), %Fat (R = 0.817, SEE = 4.716 ml/kg/min) or BMI (R = 0.802, SEE = 4.900 ml/kg/min). PURPOSE: to develop non-exercise models to estimate VO2max from age, gender, body composition (WG, %Fat, BMI) and PASS dichotomized at meets or does not meet the PAG (PAG-PASS), and to compare the accuracy of the PAG-PASS models with the models using the 11-point PASS. METHODS: 2417 men and 384 women were measured for VO2max by indirect calorimetry (RER > 1.1); age (yr); gender by M = 1, W = 0; WG at the umbilicus; %Fat by skinfolds; BMI by weight (kg) divided by height squared (m²); and PAG-PASS by PASS < 6 = 0 and ≥6 = 1. RESULTS: Three models were developed by multiple regression to estimate VO2max from age, gender, PAG-PASS and either WG (R = 0.790, SEE = 5.019 ml/kg/min), %Fat (R = 0.800, SEE = 4.915 ml/kg/min) or BMI (R = 0.777, SEE = 5.162 ml/kg/min). Cross-validation by the PRESS technique confirmed these statistics. Simple correlations between measured VO2max and estimates from the PAG-PASS models with WG, %Fat and BMI were 0.790, 0.800 and 0.777, minimally different from the correlations obtained with the PASS models (0.810, 0.810, and 0.802). PAG-PASS and PASS model constant errors were also similar: < 1 ml/kg/min for subsamples of age, gender, PASS and for VO2max between 30 and 50 ml/kg/min (70% of the sample), but > 1 ml/kg/min for VO2max < 30 and > 50 ml/kg/min. CONCLUSIONS: Non-exercise models using the combined effects of age, gender, body composition and the dichotomized PAG-PASS provide estimates of VO2max that are accurate for most adults, and the accuracy of these models is similar to previously published models using the 11-point PASS.
Hydrodynamic interaction of two deformable drops in confined shear flow.
Chen, Yongping; Wang, Chengyao
2014-09-01
We investigate hydrodynamic interaction between two neutrally buoyant circular drops in a confined shear flow based on a computational fluid dynamics simulation using the volume-of-fluid method. The rheological behaviors of interactive drops and the flow regimes are explored with a focus on elucidation of underlying physical mechanisms. We find that two types of drop behaviors during interaction occur, including passing-over motion and reversing motion, which are governed by the competition between the drag of passing flow and the entrainment of reversing flow in matrix fluid. With the increasing confinement, the drop behavior transits from the passing-over motion to reversing motion, because the entrainment of the reversing-flow matrix fluid turns to play the dominant role. The drag of the ambient passing flow is increased by enlarging the initial lateral separation due to the departure of the drop from the reversing flow in matrix fluid, resulting in the emergence of passing-over motion. In particular, a corresponding phase diagram is plotted to quantitatively illustrate the dependence of drop morphologies during interaction on confinement and initial lateral separation.
NASA Astrophysics Data System (ADS)
Lyubimov, V. V.; Kurkina, E. V.
2018-05-01
The authors consider the problem of a dynamic system passing through a low-order resonance, describing an uncontrolled atmospheric descent of an asymmetric nanosatellite in the Earth's atmosphere. The authors perform mathematical and numerical modeling of the motion of the nanosatellite with a small mass-aerodynamic asymmetry relative to the center of mass. The aim of the study is to obtain new reliable approximate analytical estimates of perturbations of the angle of attack of a nanosatellite passing through resonance at angles of attack of not more than 0.5π. By using the stationary phase method, the authors were able to investigate a discontinuous perturbation in the angle of attack of a nanosatellite passing through a resonance with two different nanosatellite designs. Comparison of the results of the numerical modeling and new approximate analytical estimates of the perturbation of the angle of attack confirms the reliability of the said estimates.
The Odds of Success: Predicting Registered Health Information Administrator Exam Success
Dolezel, Diane; McLeod, Alexander
2017-01-01
The purpose of this study was to craft a predictive model to examine the relationship between grades in specific academic courses, overall grade point average (GPA), on-campus versus online course delivery, and success in passing the Registered Health Information Administrator (RHIA) exam on the first attempt. Because student success in passing the exam on the first attempt is assessed as part of the accreditation process, this study is important to health information management (HIM) programs. Furthermore, passing the exam greatly expands the graduate's job possibilities because the demand for credentialed graduates far exceeds the supply of credentialed graduates. Binary logistic regression was utilized to explore the relationships between the predictor variables and success in passing the RHIA exam on the first attempt. Results indicate that the student's cumulative GPA, specific HIM course grades, and course delivery method were predictive of success. PMID:28566994
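A hedged sketch of the kind of binary logistic regression used here, with invented predictor columns standing in for GPA, course grades, and delivery mode; neither the dataset nor the fitted coefficients come from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 200
gpa = rng.uniform(2.0, 4.0, n)                       # cumulative GPA (simulated)
coding_grade = rng.uniform(60, 100, n)               # grade in a key HIM course (hypothetical)
online = rng.integers(0, 2, n)                       # 1 = online delivery, 0 = on campus

# Simulated pass/fail outcome loosely driven by GPA and the course grade.
logit = -8 + 1.5 * gpa + 0.05 * coding_grade - 0.3 * online
passed = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([gpa, coding_grade, online])
model = LogisticRegression().fit(X, passed)
print("odds ratios:", np.exp(model.coef_))           # e.g. odds of a first-attempt exam pass
```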
NASA Astrophysics Data System (ADS)
Abedian, A.; Poursina, M.; Golestanian, H.
2007-05-01
Radial forging is an open-die forging process used for reducing the diameter of shafts, tubes, stepped shafts and axles, and for creating internal profiles in tubes, such as the rifling of gun barrels. In this work, a comprehensive study of multi-pass hot radial forging of short hollow and solid products is presented using 2-D axisymmetric finite element simulation. The workpiece is modeled as an elastic-viscoplastic material. A mixture of the Coulomb law and a constant limit shear is used to model the die-workpiece and mandrel-workpiece contacts. Thermal effects are also taken into account. Three-pass radial forging of solid cylinders and tube products is considered. Temperature, stress, strain and metal flow distributions are obtained in each pass through thermo-mechanical simulation. The numerical results are compared with available experimental data and are in good agreement with them.
SU-G-BRB-16: Vulnerabilities in the Gamma Metric
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neal, B; Siebers, J
Purpose: To explore vulnerabilities in the gamma index metric that undermine its wide use as a radiation therapy quality assurance tool. Methods: 2D test field pairs (images) are created specifically to achieve high gamma passing rates, but to also include gross errors by exploiting the distance-to-agreement and percent-passing components of the metric. The first set has no requirement of clinical practicality, but is intended to expose vulnerabilities. The second set exposes clinically realistic vulnerabilities. To circumvent limitations inherent to user-specific tuning of prediction algorithms to match measurements, digital test cases are manually constructed, thereby mimicking high-quality image prediction. Results: With a 3 mm distance-to-agreement metric, changing field size by ±6 mm results in a gamma passing rate over 99%. For a uniform field, a lattice of passing points spaced 5 mm apart results in a passing rate of 100%. Exploiting the percent-passing component, a 10×10 cm² field can have a 95% passing rate when an 8 cm² (2.8×2.8 cm²) highly out-of-tolerance (e.g., zero dose) square is missing from the comparison image. For clinically realistic vulnerabilities, an arc plan for which a 2D image is created can have a >95% passing rate solely due to agreement in the lateral spillage, with the failing 5% in the critical target region. A field with an integrated boost (e.g., whole brain plus small metastases) could neglect the metastases entirely, yet still pass with a 95% threshold. All the failure modes described would be visually apparent on a gamma-map image. Conclusion: The %γ<1 metric has significant vulnerabilities. High passing rates can obscure critical faults in hypothetical and delivered radiation doses. Great caution should be used with gamma as a QA metric; users should inspect the gamma-map. Visual analysis of gamma-maps may be impractical for cine acquisition.
NASA Astrophysics Data System (ADS)
Jena, D. P.; Panigrahi, S. N.
2016-03-01
In the present work, the need to design a sophisticated digital band-pass filter for acoustic-based condition monitoring is eliminated by introducing a passive acoustic filter. So far, no one has attempted to explore the possibility of implementing passive acoustic filters as a pre-conditioner in acoustic-based condition monitoring. In order to enhance acoustic-based condition monitoring, a passive acoustic band-pass filter has been designed and deployed. Towards achieving an efficient band-pass acoustic filter, a generalized design methodology has been proposed to design and optimize the desired acoustic filter using multiple filter components in series. An appropriate objective function has been identified for a genetic algorithm (GA) based optimization technique with multiple design constraints. In addition, the robustness of the proposed method has been demonstrated by designing a band-pass filter using an n-branch Quincke tube, a high-pass filter and multiple Helmholtz resonators. The performance of the designed acoustic band-pass filter has been demonstrated by investigating the piston-bore defect of a motorbike using the engine noise signature. Introducing a passive acoustic filter into acoustic-based condition monitoring significantly enhances machine-learning-based fault identification. This is also a first attempt of its kind.
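As a small illustration of the passive components involved, the resonance frequency of a single Helmholtz resonator (one of the building blocks mentioned above) follows the standard lumped-element formula; the dimensions below are invented and are not the paper's optimized design.

```python
import math

def helmholtz_resonance(c, neck_area, cavity_volume, neck_length, neck_radius):
    """Classical lumped-element Helmholtz resonance frequency with an end-corrected neck."""
    l_eff = neck_length + 1.7 * neck_radius           # common end-correction approximation
    return (c / (2 * math.pi)) * math.sqrt(neck_area / (cavity_volume * l_eff))

# Hypothetical resonator dimensions (SI units): ~220 Hz resonance.
print(helmholtz_resonance(c=343.0, neck_area=3e-4, cavity_volume=5e-4,
                          neck_length=0.02, neck_radius=0.01))
```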
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ishida, T.; Hagihara, R.; Yugo, M.
1994-12-31
The authors have successfully developed and industrialized a new frequency-shift anti-islanding protection method using a twin-peak band-pass filter (BPF) for grid-interconnected photovoltaic (PV) systems. In this method, the power conditioner has a twin-peak BPF in a current feedback loop in place of the normal BPF. The new method works perfectly for various kinds of loads, such as resistive, inductive and capacitive loads, connected to the PV system. Furthermore, because there are no mis-detections, the system enables the most effective generation of electric energy from solar cells. A power conditioner equipped with this protection was officially certified as suitable for grid interconnection.
ERIC Educational Resources Information Center
Holley, Hope D.
2017-01-01
Despite research showing that high-stakes tests do not improve knowledge, Florida requires students to pass an Algebra I End-of-Course exam (EOC) to earn a high school diploma. Test passing scores are determined by a raw-score-to-t-score-to-scale-score analysis. This method ultimately results in a comparative test model in which students' passage is…
Characterization and Simulation of Transient Vibrations Using Band Limited Temporal Moments
Smallwood, David O.
1994-01-01
A method is described to characterize shocks (transient time histories) in terms of the Fourier energy spectrum and the temporal moments of the shock passed through a contiguous set of band-pass filters. The product model is then used to generate a random process as simulations that, in the mean, will have the same energy and moments as the characterization of the transient event.
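A minimal sketch of band-limited temporal moments: filter a transient with a bank of contiguous band-pass filters and compute the energy, time-centroid, and rms-duration moments of each band. The filter-bank edges and the test transient are illustrative assumptions, not the paper's characterization procedure.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 10_000.0
t = np.arange(0, 0.5, 1 / fs)
dt = 1 / fs
shock = np.exp(-30 * t) * np.sin(2 * np.pi * 400 * t)       # synthetic transient

bands = [(50, 200), (200, 800), (800, 3200)]                # contiguous band-pass bank
for lo, hi in bands:
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    y = sosfiltfilt(sos, shock)
    e = np.sum(y**2) * dt                                   # band energy (0th temporal moment)
    tau = np.sum(t * y**2) * dt / e                         # central time (1st moment / energy)
    dur = np.sqrt(np.sum((t - tau) ** 2 * y**2) * dt / e)   # rms duration (2nd central moment)
    print(f"{lo}-{hi} Hz: energy={e:.2e}, centroid={tau*1e3:.1f} ms, rms duration={dur*1e3:.1f} ms")
```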
Fabrication of glass microspheres with conducting surfaces
Elsholz, William E.
1984-01-01
A method for making hollow glass microspheres with conducting surfaces by adding a conducting vapor to a region of the glass fabrication furnace. As droplets or particles of glass forming material pass through multiple zones of different temperature in a glass fabrication furnace, and are transformed into hollow glass microspheres, the microspheres pass through a region of conducting vapor, forming a conducting coating on the surface of the microspheres.
Fabrication of glass microspheres with conducting surfaces
Elsholz, W.E.
1982-09-30
A method for making hollow glass microspheres with conducting surfaces by adding a conducting vapor to a region of the glass fabrication furnace. As droplets or particles of glass forming material pass through multiple zones of different temperature in a glass fabrication furnace, and are transformed into hollow glass microspheres, the microspheres pass through a region of conducting vapor, forming a conducting coating on the surface of the microspheres.
Systems and methods for circuit lifetime evaluation
NASA Technical Reports Server (NTRS)
Heaps, Timothy L. (Inventor); Sheldon, Douglas J. (Inventor); Bowerman, Paul N. (Inventor); Everline, Chester J. (Inventor); Shalom, Eddy (Inventor); Rasmussen, Robert D. (Inventor)
2013-01-01
Systems and methods for estimating the lifetime of an electrical system in accordance with embodiments of the invention are disclosed. One embodiment of the invention includes iteratively performing Worst Case Analysis (WCA) on a system design with respect to different system lifetimes using a computer to determine the lifetime at which the worst case performance of the system indicates the system will pass with zero margin or fail within a predetermined margin for error given the environment experienced by the system during its lifetime. In addition, performing WCA on a system with respect to a specific system lifetime includes identifying subcircuits within the system, performing Extreme Value Analysis (EVA) with respect to each subcircuit to determine whether the subcircuit fails EVA for the specific system lifetime, when the subcircuit passes EVA, determining that the subcircuit does not fail WCA for the specified system lifetime, when a subcircuit fails EVA performing at least one additional WCA process that provides a tighter bound on the WCA than EVA to determine whether the subcircuit fails WCA for the specified system lifetime, determining that the system passes WCA with respect to the specific system lifetime when all subcircuits pass WCA, and determining that the system fails WCA when at least one subcircuit fails WCA.
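A schematic sketch of the iterative lifetime search described above, with placeholder analysis functions standing in for EVA and the tighter WCA bounds; the functions, thresholds, and lifetime grid are illustrative, not the patented implementation.

```python
from typing import Callable, Iterable, List, Optional

def system_passes_wca(lifetime_yr: float,
                      subcircuits: List[str],
                      eva: Callable[[str, float], bool],
                      tighter_wca: Callable[[str, float], bool]) -> bool:
    """A subcircuit that passes EVA passes WCA; one that fails EVA gets the tighter analysis."""
    for sub in subcircuits:
        if eva(sub, lifetime_yr):
            continue
        if not tighter_wca(sub, lifetime_yr):
            return False
    return True

def estimate_lifetime(subcircuits: List[str],
                      eva: Callable[[str, float], bool],
                      tighter_wca: Callable[[str, float], bool],
                      candidates: Iterable[float] = range(1, 31)) -> Optional[float]:
    """Return the longest candidate lifetime (years) at which the whole system still passes WCA."""
    passing = [L for L in candidates if system_passes_wca(L, subcircuits, eva, tighter_wca)]
    return max(passing) if passing else None

# Hypothetical usage with toy degradation thresholds.
subs = ["regulator", "adc", "driver"]
eva = lambda s, L: L < 8                    # toy: extreme-value bound holds below 8 years
wca = lambda s, L: L < 12                   # toy: tighter bound holds below 12 years
print(estimate_lifetime(subs, eva, wca))    # -> 11
```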
Method and apparatus for nitrogen oxide determination
Hohorst, Frederick A.
1990-01-01
Method and apparatus for determining nitrogen oxide content in a high temperature process gas, which involves withdrawing a sample portion of a high temperature gas containing nitrogen oxide from a source to be analyzed. The sample portion is passed through a restrictive flow conduit, which may be a capillary or a restriction orifice. The restrictive flow conduit is heated to a temperature sufficient to maintain the flowing sample portion at an elevated temperature at least as great as the temperature of the high temperature gas source, to thereby provide that deposition of ammonium nitrate within the restrictive flow conduit cannot occur. The sample portion is then drawn into an aspirator device. A heated motive gas is passed to the aspirator device at a temperature at least as great as the temperature of the high temperature gas source. The motive gas is passed through the nozzle of the aspirator device under conditions sufficient to aspirate the heated sample portion through the restrictive flow conduit and produce a mixture of the sample portion in the motive gas at a dilution of the sample portion sufficient to provide that deposition of ammonium nitrate from the mixture cannot occur at reduced temperature. A portion of the cooled dilute mixture is then passed to analytical means capable of detecting nitric oxide.
Statistical analysis of loopy belief propagation in random fields
NASA Astrophysics Data System (ADS)
Yasuda, Muneki; Kataoka, Shun; Tanaka, Kazuyuki
2015-10-01
Loopy belief propagation (LBP), which is equivalent to the Bethe approximation in statistical mechanics, is a message-passing-type inference method that is widely used to analyze systems based on Markov random fields (MRFs). In this paper, we propose a message-passing-type method to analytically evaluate the quenched average of LBP in random fields by using the replica cluster variation method. The proposed analytical method is applicable to general pairwise MRFs with random fields whose distributions differ from each other and can give the quenched averages of the Bethe free energies over random fields, which are consistent with numerical results. The order of its computational cost is equivalent to that of standard LBP. In the latter part of this paper, we describe the application of the proposed method to Bayesian image restoration, in which we observed that our theoretical results are in good agreement with the numerical results for natural images.
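For orientation, a minimal loopy belief propagation loop on a small binary (Ising-like) pairwise MRF; the graph, coupling, and fields are invented, and the sketch omits the replica cluster variation machinery the paper adds on top of plain LBP.

```python
import numpy as np

# Small binary pairwise MRF on a 4-node cycle (a loopy graph), states {-1, +1}.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
J = 0.8                                              # uniform coupling strength
h = np.array([0.2, -0.1, 0.3, 0.0])                  # local fields
pair = np.exp(J * np.outer([-1, 1], [-1, 1]))        # pairwise factor psi(x_i, x_j)
node = [np.exp(hi * np.array([-1.0, 1.0])) for hi in h]

directed = [(i, j) for (a, b) in edges for (i, j) in ((a, b), (b, a))]
msgs = {e: np.ones(2) / 2 for e in directed}         # messages m_{i->j}(x_j)

def neighbours(i):
    return [j for (a, b) in edges for j in ((b,) if a == i else (a,) if b == i else ())]

for _ in range(100):                                 # message-passing sweeps
    new = {}
    for (i, j) in directed:
        incoming = np.prod([msgs[(k, i)] for k in neighbours(i) if k != j], axis=0)
        m = pair.T @ (node[i] * incoming)            # sum over x_i of psi_ij * psi_i * messages
        new[(i, j)] = m / m.sum()
    msgs = new

beliefs = []
for i in range(n):
    b = node[i] * np.prod([msgs[(k, i)] for k in neighbours(i)], axis=0)
    beliefs.append(b / b.sum())
print(np.array(beliefs))                             # approximate marginals over {-1, +1}
```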
Planar solid oxide fuel cell with staged indirect-internal air and fuel preheating and reformation
Geisbrecht, Rodney A; Williams, Mark C
2003-10-21
A solid oxide fuel cell arrangement and method of use that provides internal preheating of both fuel and air in order to maintain the optimum operating temperature for the production of energy. The internal preheat passes are created by the addition of two plates, one on either side of the bipolar plate, such that these plates create additional passes through the fuel cell. This internal preheat fuel cell configuration and method reduce the requirements for external heat exchanger units and air compressors. Air or fuel may be added to the fuel cell as required to maintain the optimum operating temperature through a cathode control valve or an anode control valve, respectively. A control loop comprises a temperature sensing means within the preheat air and fuel passes, a means to compare the measured temperature to a set point temperature and a determination based on the comparison as to whether the control valves should allow additional air or fuel into the preheat or bypass manifolds of the fuel cell.
Method to improve optical parametric oscillator beam quality
Smith, Arlee V.; Alford, William J.; Bowers, Mark S.
2003-11-11
A method for improving optical parametric oscillator (OPO) beam quality, having an optical pump which generates a pump beam at a pump frequency greater than a desired signal frequency; a nonlinear optical medium oriented so that a signal wave at the desired signal frequency and a corresponding idler wave are produced when the pump beam (wave) propagates through the nonlinear optical medium, resulting in beam walk-off of the signal and idler waves; and an optical cavity which directs the signal wave to repeatedly pass through the nonlinear optical medium, said optical cavity comprising an equivalently even number of non-planar mirrors that produce image rotation on each pass through the nonlinear optical medium. Utilizing beam walk-off, where the signal wave and said idler wave have nonparallel Poynting vectors in the nonlinear medium, together with image rotation, a correlation zone of length approximately equal to ρL_crystal is created which, through multiple passes through the nonlinear medium, improves the beam quality of the OPO output.
Tobin, Jr., Kenneth W.; Bingham, Philip R.; Hawari, Ayman I.
2012-11-06
An imaging system employing a coded aperture mask having multiple pinholes is provided. The coded aperture mask is placed at a radiation source to pass the radiation through. The radiation impinges on, and passes through an object, which alters the radiation by absorption and/or scattering. Upon passing through the object, the radiation is detected at a detector plane to form an encoded image, which includes information on the absorption and/or scattering caused by the material and structural attributes of the object. The encoded image is decoded to provide a reconstructed image of the object. Because the coded aperture mask includes multiple pinholes, the radiation intensity is greater than a comparable system employing a single pinhole, thereby enabling a higher resolution. Further, the decoding of the encoded image can be performed to generate multiple images of the object at different distances from the detector plane. Methods and programs for operating the imaging system are also disclosed.
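A toy illustration of coded-aperture encoding and correlation decoding, using a random binary multi-pinhole mask and a cross-correlation decoder; the mask pattern, geometry, and decoder are illustrative stand-ins for the engineered aperture arrays and reconstruction such systems actually use.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(5)
mask = (rng.random((31, 31)) < 0.5).astype(float)     # multi-pinhole aperture (random, toy)

obj = np.zeros((64, 64))                              # simple object: two bright features
obj[20, 20], obj[40, 45] = 1.0, 0.6

# Encoding: each open pinhole projects a shifted copy of the object onto the detector plane.
encoded = fftconvolve(obj, mask, mode="same")
encoded += rng.normal(0, 0.05, encoded.shape)         # detector noise

# Decoding: correlate the encoded image with the mask (flipping the kernel turns
# convolution into correlation), which concentrates the object back into an image.
decoded = fftconvolve(encoded, mask[::-1, ::-1], mode="same")
print(np.unravel_index(np.argmax(decoded), decoded.shape))   # brightest recovered feature ~ (20, 20)
```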
Passing messages between biological networks to refine predicted interactions.
Glass, Kimberly; Huttenhower, Curtis; Quackenbush, John; Yuan, Guo-Cheng
2013-01-01
Regulatory network reconstruction is a fundamental problem in computational biology. There are significant limitations to such reconstruction using individual datasets, and increasingly people attempt to construct networks using multiple, independent datasets obtained from complementary sources, but methods for this integration are lacking. We developed PANDA (Passing Attributes between Networks for Data Assimilation), a message-passing model using multiple sources of information to predict regulatory relationships, and used it to integrate protein-protein interaction, gene expression, and sequence motif data to reconstruct genome-wide, condition-specific regulatory networks in yeast as a model. The resulting networks were not only more accurate than those produced using individual data sets and other existing methods, but they also captured information regarding specific biological mechanisms and pathways that were missed using other methodologies. PANDA is scalable to higher eukaryotes, applicable to specific tissue or cell type data and conceptually generalizable to include a variety of regulatory, interaction, expression, and other genome-scale data. An implementation of the PANDA algorithm is available at www.sourceforge.net/projects/panda-net.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartmann, A.; Frenkel, J.; Hopf, R.
Amyloidosis is a systemic disease frequently involving the myocardium and leading to functional disturbances of the heart. Amyloidosis can mimic other cardiac diseases. A conclusive clinical diagnosis of cardiac involvement can only be made by a combination of different diagnostic methods. In 7 patients with myocardial amyloidosis we used combined first-pass and static scintigraphy with technetium-99m pyrophosphate. There was only insignificant myocardial uptake of the tracer. The first-pass studies, however, revealed reduced systolic function in 4/7 patients and impaired diastolic function in 6/7 patients. Therefore, although cardiac amyloid could not be demonstrated in the static scintigraphy due to amyloid fibril amount and composition, myocardial functional abnormalities were seen in the first-pass study.
Recuperated atmospheric SOFC/gas turbine hybrid cycle
Lundberg, Wayne
2010-05-04
A method of operating an atmospheric-pressure solid oxide fuel cell generator (6) in combination with a gas turbine comprising a compressor (1) and expander (2) where an inlet oxidant (20) is passed through the compressor (1) and exits as a first stream (60) and a second stream (62) the first stream passing through a flow control valve (56) to control flow and then through a heat exchanger (54) followed by mixing with the second stream (62) where the mixed streams are passed through a combustor (8) and expander (2) and the first heat exchanger for temperature control before entry into the solid oxide fuel cell generator (6), which generator (6) is also supplied with fuel (40).
Recuperated atmosphere SOFC/gas turbine hybrid cycle
Lundberg, Wayne
2010-08-24
A method of operating an atmospheric-pressure solid oxide fuel cell generator (6) in combination with a gas turbine comprising a compressor (1) and expander (2) where an inlet oxidant (20) is passed through the compressor (1) and exits as a first stream (60) and a second stream (62) the first stream passing through a flow control valve (56) to control flow and then through a heat exchanger (54) followed by mixing with the second stream (62) where the mixed streams are passed through a combustor (8) and expander (2) and the first heat exchanger for temperature control before entry into the solid oxide fuel cell generator (6), which generator (6) is also supplied with fuel (40).
Gheza, Federico; Raimondi, Paolo; Solaini, Leonardo; Coccolini, Federico; Baiocchi, Gian Luca; Portolani, Nazario; Tiberio, Guido Alberto Massimo
2018-04-11
Outside the US, FLS certification is not required and its teaching methods are not well standardized. Even though the FLS was designed as a "stand-alone" training system, most academic institutions offer support to residents during training. We present the first systematic application of FLS in Italy. Our aim was to evaluate the role of mentoring/coaching on FLS training in terms of the passing rate and global performance, in the search for resource optimization. Sixty residents in general surgery, obstetrics & gynecology, and urology were selected to be enrolled in a randomized controlled trial, practicing FLS with the goal of passing a simulated final exam. The control group practiced exclusively with video material from SAGES, whereas the interventional group was supported by a mentor. Forty-six subjects met the requirements and completed the trial. For the other 14 subjects no results are available for comparison. One subject in each group failed the exam, resulting in a passing rate of 95.7%, with no obvious differences between groups. Subgroup analysis did not reveal any difference between the groups for FLS tasks. We confirm that methods other than video instruction and deliberate FLS practice are not essential to pass the final exam. Based on these results, we suggest the introduction of the FLS system even where a trained tutor is not available. This trial is the first single-institution application of the FLS in Italy and one of the few experiences outside the US. Trial Number: NCT02486575 ( https://www.clinicaltrials.gov ).
Mortazavi, Forough; Mortazavi, Saideh S.; Khosrorad, Razieh
2015-01-01
Background: Procrastination is a common behavior which affects different aspects of life. The procrastination assessment scale-student (PASS) evaluates academic procrastination apropos its frequency and reasons. Objectives: The aims of the present study were to translate, culturally adapt, and validate the Farsi version of the PASS in a sample of Iranian medical students. Patients and Methods: In this cross-sectional study, the PASS was translated into Farsi through the forward-backward method, and its content validity was thereafter assessed by a panel of 10 experts. The Farsi version of the PASS was subsequently distributed among 423 medical students. The internal reliability of the PASS was assessed using Cronbach’s alpha. An exploratory factor analysis (EFA) was conducted on 18 items and then 28 items of the scale to find new models. The construct validity of the scale was assessed using both EFA and confirmatory factor analysis. The predictive validity of the scale was evaluated by calculating the correlation between the academic procrastination scores and the students’ average scores in the previous semester. Results: The corresponding reliability of the first and second parts of the scale was 0.781 and 0.861. An EFA on 18 items of the scale found 4 factors which jointly explained 53.2% of variances: The model was marginally acceptable (root mean square error of approximation [RMSEA] =0.098, standardized root mean square residual [SRMR] =0.076, χ2 /df =4.8, comparative fit index [CFI] =0.83). An EFA on 28 items of the scale found 4 factors which altogether explained 42.62% of variances: The model was acceptable (RMSEA =0.07, SRMR =0.07, χ2/df =2.8, incremental fit index =0.90, CFI =0.90). There was a negative correlation between the procrastination scores and the students’ average scores (r = -0.131, P =0.02). Conclusions: The Farsi version of the PASS is a valid and reliable tool to measure academic procrastination in Iranian undergraduate medical students. PMID:26473078
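For readers who want to reproduce the internal-consistency figures reported above, the following is a minimal sketch of Cronbach's alpha computed from a respondents-by-items score matrix; the matrix here is randomly generated for illustration and is not the study's data.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (n_items / (n_items - 1)) * (1.0 - item_variances.sum() / total_variance)

# Illustrative use with correlated Likert-style responses (1-5) from 423 students, 18 items.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(423, 1))             # a shared "procrastination" tendency
noise = rng.integers(-1, 2, size=(423, 18))
demo = np.clip(base + noise, 1, 5)
print(round(cronbach_alpha(demo), 3))
```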
Adam, Ahmed
2017-01-01
Objective To describe a simple, novel method to achieve ureteric access in the Cohen crossed reimplanted ureter, which will allow retrograde working access via the conventional transurethral method. Materials and Methods Under cystoscopic vision, suprapubic needle puncture was performed. The needle was directed (bevel facing) towards the desired ureteric orifice (UO). A guidewire (with a floppy-tip) was then inserted into the suprapubic needle passing into the bladder, and then easily passed into the crossed-reimplanted UO. The distal end of the guidewire was then removed through the urethra with cystoscopic grasping forceps. The straightened ureter then easily facilitated ureteroscopy access, retrograde pyelogram studies, and JJ stent insertion in a conventional transurethral method. Results The UO and ureter were aligned in a more conventional orthotopic course, to allow for conventional transurethral working access. Conclusion A novel method to access the Cohen crossed reimplanted ureter was described. All previously published methods of accessing the crossed ureter were critically appraised. PMID:29463976
An alternative method of closed silicone intubation of the lacrimal system.
Henderson, P N; McNab, A A
1996-05-01
An alternative method of closed lacrimal intubation is described, the basis of which is to place the end of a piece of silicone tubing over the end of a small-diameter metal introducer, stretch the silicone tubing back along the introducer, and then pass the introducer together with the tubing through the lacrimal system into the nasal cavity. The tubing is visualized in the inferior meatus, from where it is retrieved, and then the introducer is withdrawn. The other end of the tubing is passed in a similar fashion. The technique is easily mastered, inexpensive, and less traumatic than other described techniques.
The narrow pass band filter of tunable 1D phononic crystals with a dielectric elastomer layer
NASA Astrophysics Data System (ADS)
Wu, Liang-Yu; Wu, Mei-Ling; Chen, Lien-Wen
2009-01-01
In this paper, we study the defect bands of a 1D phononic crystal consisting of aluminum (Al) and polymethyl methacrylate (PMMA) layers with a dielectric elastomer (DE) defect layer. The plane wave expansion (PWE) method and supercell calculation are used to calculate the band structure and the defect bands. The transmission spectra are obtained using the finite element method (FEM). Since the thickness of the dielectric elastomer defect layer is controlled by applying an electric voltage, the frequencies of the defect bands can be tuned. A narrow pass band filter can be developed and designed by using the dielectric elastomer.
Quantum cluster variational method and message passing algorithms revisited
NASA Astrophysics Data System (ADS)
Domínguez, E.; Mulet, Roberto
2018-02-01
We present a general framework to study quantum disordered systems in the context of Kikuchi's cluster variational method (CVM). The method relies on the solution of message passing-like equations for single instances or on the iterative solution of complex population dynamic algorithms for an average case scenario. We first show how a standard application of Kikuchi's CVM can be easily translated to message passing equations for specific instances of the disordered system. We then present an "ad hoc" extension of these equations to a population dynamic algorithm representing an average case scenario. At the Bethe level, these equations are equivalent to the dynamic population equations that can be derived from a proper cavity ansatz. However, at the plaquette approximation, the interpretation is more subtle and we discuss it also taking into account previous results in classical disordered models. Moreover, we develop a formalism to properly deal with the average case scenario using a replica-symmetric ansatz within this CVM for quantum disordered systems. Finally, we present and discuss numerical solutions of the different approximations for the quantum transverse Ising model and the quantum random field Ising model in two-dimensional lattices.
Optical bandgap of semiconductor nanostructures: Methods for experimental data analysis
NASA Astrophysics Data System (ADS)
Raciti, R.; Bahariqushchi, R.; Summonte, C.; Aydinli, A.; Terrasi, A.; Mirabella, S.
2017-06-01
Determination of the optical bandgap (Eg) in semiconductor nanostructures is a key issue in understanding the extent of quantum confinement effects (QCE) on electronic properties, and it usually involves some analytical approximation in experimental data reduction and in modeling of the light absorption processes. Here, we compare some of the analytical procedures frequently used to evaluate the optical bandgap from reflectance (R) and transmittance (T) spectra. Ge quantum wells and quantum dots embedded in SiO2 were produced by plasma enhanced chemical vapor deposition, and light absorption was characterized by UV-Vis/NIR spectrophotometry. R&T data were processed to extract the absorption spectra by two approximate methods (the single-pass and double-pass approximations, referred to as single pass analysis (SPA) and double pass analysis (DPA), respectively), followed by Eg evaluation through a linear fit of Tauc or Cody plots. Direct fitting of R&T spectra through a Tauc-Lorentz oscillator model is used as a comparison. Methods and data are also discussed in terms of the light absorption process in the presence of QCE. The reported data show that, despite the approximation, the DPA approach combined with the Tauc plot gives reliable results, with clear advantages in terms of computational effort and understanding of QCE.
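As a rough illustration of the single-pass branch of this workflow, the sketch below derives an absorption coefficient from transmittance alone (alpha = -ln(T)/d, reflection neglected) and extrapolates a Tauc plot to estimate Eg; the film thickness, the transition exponent r = 1/2, and the synthetic absorption edge are assumptions, not values from the paper.

```python
import numpy as np

def tauc_bandgap(E_eV, T, thickness_cm, r=0.5):
    """Estimate an optical bandgap from transmittance via a Tauc plot.

    Single-pass approximation: alpha = -ln(T)/d (reflection losses neglected).
    r = 0.5 corresponds to plotting (alpha*E)^(1/2) vs E (indirect-allowed transition).
    """
    alpha = -np.log(np.clip(T, 1e-12, None)) / thickness_cm
    y = (alpha * E_eV) ** r
    # Fit the (assumed) linear region near the absorption edge and extrapolate y -> 0.
    edge = (y > 0.2 * y.max()) & (y < 0.8 * y.max())
    slope, intercept = np.polyfit(E_eV[edge], y[edge], 1)
    return -intercept / slope  # energy where the fit crosses the axis

# Synthetic example: absorption edge near 1.5 eV in a 50 nm film.
E = np.linspace(1.0, 3.0, 200)
alpha_true = 1e5 * np.clip(E - 1.5, 0, None) ** 2 / E   # Tauc-like indirect edge
T = np.exp(-alpha_true * 50e-7)
print(round(tauc_bandgap(E, T, 50e-7), 2))
```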
System and method for chromatography and electrophoresis using circular optical scanning
Balch, Joseph W.; Brewer, Laurence R.; Davidson, James C.; Kimbrough, Joseph R.
2001-01-01
A system and method is disclosed for chromatography and electrophoresis using circular optical scanning. One or more rectangular microchannel plates or radial microchannel plates has a set of analysis channels for insertion of molecular samples. One or more scanning devices repeatedly pass over the analysis channels in one direction at a predetermined rotational velocity and with a predetermined rotational radius. The rotational radius may be dynamically varied so as to monitor the molecular sample at various positions along an analysis channel. Sample loading robots may also be used to input molecular samples into the analysis channels. Radial microchannel plates are built from a substrate whose analysis channels are disposed at a non-parallel angle with respect to each other. A first step in the method accesses either a rectangular or radial microchannel plate, having a set of analysis channels, and a second step passes a scanning device repeatedly in one direction over the analysis channels. As a third step, the scanning device is passed over the analysis channels at dynamically varying distances from a centerpoint of the scanning device. As a fourth step, molecular samples are loaded into the analysis channels with a robot.
Cavailloles, F; Bazin, J P; Capderou, A; Valette, H; Herbert, J L; Di Paola, R
1987-05-01
A method for automatic processing of cardiac first-pass radionuclide studies is presented. This technique, factor analysis of dynamic structures (FADS), provides an automatic separation of anatomical structures according to their different temporal behaviour, even if they are superimposed. FADS has been applied to 76 studies. A description of factor patterns obtained in various pathological categories is presented. FADS provides easy diagnosis of shunts and tricuspid insufficiency. Quantitative information derived from the factors (cardiac output and mean transit time) was compared to that obtained by the region of interest method. Using FADS, a higher correlation with cardiac catheterization was found for cardiac output calculation. Thus, compared to the ROI method, FADS presents obvious advantages: a good separation of overlapping cardiac chambers is obtained, and this operator-independent method provides more objective and reproducible results. A number of parameters of cardio-pulmonary function can be assessed by first-pass radionuclide angiocardiography (RNA) [1,2]. Usually, they are calculated using time-activity curves (TAC) from regions of interest (ROI) drawn on the cardiac chambers and the lungs. This method has two main drawbacks: (1) the lack of inter- and intra-observer reproducibility; (2) the problem of crosstalk, which affects the evaluation of cardio-pulmonary performance. The crosstalk on planar imaging is due to anatomical superimposition of the cardiac chambers and lungs. The activity measured in any ROI is the sum of the activity in several organs, and 'decontamination' of the TAC cannot easily be performed using the ROI method [3]. Factor analysis of dynamic structures (FADS) [4,5] can solve the two problems mentioned above. It provides an automatic separation of anatomical structures according to their different temporal behaviour, even if they are superimposed. The resulting factors are estimates of the time evolution of the activity in each structure (underlying physiological components), and the associated factor images are estimates of the spatial distribution of each factor. The aim of this study was to assess the reliability of FADS in first-pass RNA and compare the results to those obtained by the ROI method, which is generally considered the routine procedure.
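FADS itself uses a constrained factor model; as a loose stand-in (not the authors' algorithm), the sketch below separates superimposed time-activity curves with non-negative matrix factorization applied to a pixels-by-frames matrix of synthetic counts.

```python
import numpy as np
from sklearn.decomposition import NMF

# Dynamic study as a (n_pixels, n_frames) matrix of non-negative counts.
rng = np.random.default_rng(1)
n_pixels, n_frames, n_structures = 1024, 60, 3
true_curves = np.abs(rng.normal(size=(n_structures, n_frames)))   # temporal behaviour
true_images = np.abs(rng.normal(size=(n_pixels, n_structures)))   # spatial distribution
counts = true_images @ true_curves + rng.poisson(1.0, (n_pixels, n_frames))

# Factor the sequence: W ~ factor images, H ~ factor (time-activity) curves.
model = NMF(n_components=n_structures, init="nndsvda", max_iter=500)
factor_images = model.fit_transform(counts)   # (n_pixels, n_structures)
factor_curves = model.components_             # (n_structures, n_frames)
print(factor_images.shape, factor_curves.shape)
```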
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ungun, B; Stanford University School of Medicine, Stanford, CA; Fu, A
2016-06-15
Purpose: To develop a procedure for including dose constraints in convex programming-based approaches to treatment planning, and to support dynamic modification of such constraints during planning. Methods: We present a mathematical approach that allows mean dose, maximum dose, minimum dose and dose volume (i.e., percentile) constraints to be appended to any convex formulation of an inverse planning problem. The first three constraint types are convex and readily incorporated. Dose volume constraints are not convex, however, so we introduce a convex restriction that is related to CVaR-based approaches previously proposed in the literature. To compensate for the conservatism of this restriction, we propose a new two-pass algorithm that solves the restricted problem on a first pass and uses this solution to form exact constraints on a second pass. In another variant, we introduce slack variables for each dose constraint to prevent the problem from becoming infeasible when the user specifies an incompatible set of constraints. We implement the proposed methods in Python using the convex programming package cvxpy in conjunction with the open source convex solvers SCS and ECOS. Results: We show, for several cases taken from the clinic, that our proposed method meets specified constraints (often with margin) when they are feasible. Constraints are met exactly when we use the two-pass method, and infeasible constraints are replaced with the nearest feasible constraint when slacks are used. Finally, we introduce ConRad, a Python-embedded free software package for convex radiation therapy planning. ConRad implements the methods described above and offers a simple interface for specifying prescriptions and dose constraints. Conclusion: This work demonstrates the feasibility of using modifiable dose constraints in a convex formulation, making it practical to guide the treatment planning process with interactively specified dose constraints. This work was supported by the Stanford BioX Graduate Fellowship and NIH Grant 5R01CA176553.
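Since the abstract names cvxpy and the ECOS/SCS solvers, a minimal sketch of the constraint types it describes (minimum, maximum, and mean dose, plus a slack variable on one constraint) is given below; the dose-influence matrices, prescription numbers, and weights are synthetic placeholders, and the dose-volume restriction and two-pass refinement are not reproduced here.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n_beamlets, n_target, n_oar = 50, 200, 150
A_target = rng.random((n_target, n_beamlets))   # synthetic dose-influence matrices
A_oar = 0.1 * rng.random((n_oar, n_beamlets))

x = cp.Variable(n_beamlets, nonneg=True)        # beamlet weights
d_target = A_target @ x
d_oar = A_oar @ x
slack = cp.Variable(nonneg=True)                # softens the OAR max-dose constraint

objective = cp.Minimize(cp.sum_squares(d_target - 60.0) + 1e3 * slack)
constraints = [
    cp.min(d_target) >= 50.0,                   # minimum target dose (placeholder value)
    cp.max(d_oar) <= 20.0 + slack,              # maximum OAR dose, with slack
    cp.sum(d_oar) / n_oar <= 12.0,              # mean OAR dose
]
problem = cp.Problem(objective, constraints)
problem.solve()                                 # ECOS or SCS could be selected explicitly
print(problem.status, round(float(slack.value), 3))
```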
A Comparative Study of Standard-Setting Methods.
ERIC Educational Resources Information Center
Livingston, Samuel A.; Zieky, Michael J.
1989-01-01
The borderline group standard-setting method (BGSM), Nedelsky method (NM), and Angoff method (AM) were compared, using reading scores for 1,948 and mathematics scores for 2,191 sixth through ninth graders. The NM and AM were inconsistent with the BGSM. Passing scores were higher where students were more able. (SLD)
Fuel cell with ionization membrane
NASA Technical Reports Server (NTRS)
Hartley, Frank T. (Inventor)
2007-01-01
A fuel cell is disclosed comprising an ionization membrane having at least one area through which gas is passed, and which ionizes the gas passing therethrough, and a cathode for receiving the ions generated by the ionization membrane. The ionization membrane may include one or more openings in the membrane with electrodes that are located closer than a mean free path of molecules within the gas to be ionized. Methods of manufacture are also provided.
2011-05-02
researchers define only two distinct methods. In a conserved spread network, the quantity of information that enters a network remains constant at...any given time. In such a network, information can only be in one place at any given time. A classic example is that of a pitcher of water poured into...series of pipes and joints; once the water passes through one pipe, that pipe empties while others in the network are filled with water as it passes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saito, Hirotaka; McKenna, Sean Andrew; Coburn, Timothy C.
2004-07-01
Geostatistical and non-geostatistical noise filtering methodologies, factorial kriging and a low-pass filter, and a region growing method are applied to analytic signal magnetometer images at two UXO contaminated sites to delineate UXO target areas. Overall delineation performance is improved by removing background noise. Factorial kriging slightly outperforms the low-pass filter but there is no distinct difference between them in terms of finding anomalies of interest.
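A hedged sketch of the non-geostatistical branch is shown below: a Gaussian low-pass filter applied to a synthetic analytic-signal image, followed by a simple threshold to flag anomalous pixels. Factorial kriging requires a geostatistics package and is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
image = rng.normal(0.0, 1.0, size=(256, 256))        # background "noise"
image[100:110, 120:130] += 8.0                       # synthetic UXO-like anomaly

smoothed = gaussian_filter(image, sigma=3)           # low-pass filter removes speckle
threshold = smoothed.mean() + 3 * smoothed.std()     # simple anomaly threshold
anomaly_mask = smoothed > threshold
print(int(anomaly_mask.sum()), "pixels flagged")
```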
NEUTRON FLUX INTENSITY DETECTION
Russell, J.T.
1964-04-21
A method of measuring the instantaneous intensity of neutron flux in the core of a nuclear reactor is described. A target gas capable of being transmuted by neutron bombardment to a product having a resonance absorption line at a particular microwave frequency is passed through the core of the reactor. Frequency-modulated microwave energy is passed through the target gas and the attenuation of the energy due to the formation of the transmuted product is measured. (AEC)
Apparatus and method to compensate for refraction of radiation
Allen, Gary R.; Moskowitz, Philip E.
1990-01-01
An apparatus to compensate for refraction of radiation passing through a curved wall of an article is provided. The apparatus of a preferred embodiment is particularly advantageous for use in arc tube discharge diagnostics. The apparatus of the preferred embodiment includes means for pre-refracting radiation on a predetermined path by an amount equal and inverse to refraction which occurs when radiation passes through a first wall of the arc tube such that, when the radiation passes through the first wall of the arc tube and into the cavity thereof, the radiation passes through the cavity approximately on the predetermined path; means for releasably holding the article such that the radiation passes through the cavity thereof; and means for post-refracting radiation emerging from a point of the arc tube opposite its point of entry by an amount equal and inverse to refraction which occurs when radiation emerges from the arc tube. In one embodiment the means for pre-refracting radiation includes a first half tube comprising a longitudinally bisected tube obtained from a tube which is approximately identical to the arc tube's cylindrical portion and a first cylindrical lens, the first half tube being mounted with its concave side facing the radiation source and the first cylindrical lens being mounted between the first half tube and the arc tube and the means for post-refracting radiation includes a second half tube comprising a longitudinally bisected tube obtained from a tube which is approximately identical to the arc tube's cylindrical portion and a second cylindrical lens, the second half tube being mounted with its convex side facing the radiation source and the second cylindrical lens being mounted between the arc tube and the second half tube. Methods to compensate for refraction of radiation passing into and out of an arc tube is also provided.
Apparatus and method to compensate for refraction of radiation
Allen, G.R.; Moskowitz, P.E.
1990-03-27
An apparatus to compensate for refraction of radiation passing through a curved wall of an article is provided. The apparatus of a preferred embodiment is particularly advantageous for use in arc tube discharge diagnostics. The apparatus of the preferred embodiment includes means for pre-refracting radiation on a predetermined path by an amount equal and inverse to refraction which occurs when radiation passes through a first wall of the arc tube such that, when the radiation passes through the first wall of the arc tube and into the cavity thereof, the radiation passes through the cavity approximately on the predetermined path; means for releasably holding the article such that the radiation passes through the cavity thereof; and means for post-refracting radiation emerging from a point of the arc tube opposite its point of entry by an amount equal and inverse to refraction which occurs when radiation emerges from the arc tube. In one embodiment the means for pre-refracting radiation includes a first half tube comprising a longitudinally bisected tube obtained from a tube which is approximately identical to the arc tube's cylindrical portion and a first cylindrical lens, the first half tube being mounted with its concave side facing the radiation source and the first cylindrical lens being mounted between the first half tube and the arc tube and the means for post-refracting radiation includes a second half tube comprising a longitudinally bisected tube obtained from a tube which is approximately identical to the arc tube's cylindrical portion and a second cylindrical lens, the second half tube being mounted with its convex side facing the radiation source and the second cylindrical lens being mounted between the arc tube and the second half tube. Methods to compensate for refraction of radiation passing into and out of an arc tube is also provided. 4 figs.
Wang, Haoran; Wang, Mingxiu; Cheng, Qiang
2018-03-08
Detection of complex splice sites (SSs) and polyadenylation sites (PASs) of eukaryotic genes is essential for the elucidation of gene regulatory mechanisms. Transcriptome-wide studies using high-throughput sequencing (HTS) have revealed prevalent alternative splicing (AS) and alternative polyadenylation (APA) in plants. However, small-scale and high-depth HTS aimed at detecting genes or gene families are very few and limited. We explored a convenient and flexible method for profiling SSs and PASs, which combines rapid amplification of 3'-cDNA ends (3'-RACE) and HTS. Fourteen NAC (NAM, ATAF1/2, CUC2) transcription factor genes of Populus trichocarpa were analyzed by 3'-RACE-seq. Based on experimental reproducibility, boundary sequence analysis and reverse transcription PCR (RT-PCR) verification, only canonical SSs were considered to be authentic. Based on stringent criteria, candidate PASs without any internal priming features were chosen as authentic PASs and assumed to be PAS-rich markers. Thirty-four novel canonical SSs, six intronic/internal exons and thirty 3'-UTR PAS-rich markers were revealed by 3'-RACE-seq. Using 3'-RACE and real-time PCR, we confirmed that three APA transcripts ending in/around PAS-rich markers were differentially regulated in response to plant hormones. Our results indicate that 3'-RACE-seq is a robust and cost-effective method to discover SSs and label active regions subjected to APA for genes or gene families. The method is suitable for small-scale AS and APA research in the initial stage.
Methods of separating short half-life radionuclides from a mixture of radionuclides
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bray, L.A.; Ryan, J.L.
1998-09-15
The present invention is a method of obtaining a radionuclide product selected from the group consisting of 223Ra and 225Ac, from a radionuclide "cow" of 227Ac or 229Th respectively. The method comprises the steps of (a) permitting ingrowth of at least one radionuclide daughter from said radionuclide "cow" forming an ingrown mixture; (b) insuring that the ingrown mixture is a nitric acid ingrown mixture; (c) passing the nitric acid ingrown mixture through a first nitrate form ion exchange column which permits separating the "cow" from at least one radionuclide daughter; (d) insuring that the at least one radionuclide daughter contains the radionuclide product; (e) passing the at least one radionuclide daughter through a second ion exchange column and separating the at least one radionuclide daughter from the radionuclide product and (f) recycling the at least one radionuclide daughter by adding it to the "cow". In one embodiment the radionuclide "cow" is the 227Ac, the at least one daughter radionuclide is a 227Th and the product radionuclide is the 223Ra and the first nitrate form ion exchange column passes the 227Ac and retains the 227Th. In another embodiment the radionuclide "cow" is the 229Th, the at least one daughter radionuclide is a 225Ra and said product radionuclide is the 225Ac and the 225Ac and nitrate form ion exchange column retains the 229Th and passes the 225Ra/Ac. 8 figs.
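The ingrowth step (a) can be illustrated with the two-member Bateman solution; the sketch below estimates how much 227Th activity grows into a pure 227Ac cow over a chosen ingrowth time. The half-lives are approximate literature values and decay below 227Th is ignored, so this is only an order-of-magnitude illustration, not part of the patented chemistry.

```python
import numpy as np

# Approximate half-lives (days); treat these as rough literature numbers.
T_AC227 = 21.77 * 365.25   # 227Ac parent ("cow")
T_TH227 = 18.7             # 227Th intermediate daughter
lam1, lam2 = np.log(2) / T_AC227, np.log(2) / T_TH227

def daughter_activity_fraction(t_days):
    """227Th activity relative to the initial 227Ac activity after t days of ingrowth
    (two-member Bateman solution, ignoring the chain below 227Th)."""
    n2 = lam1 / (lam2 - lam1) * (np.exp(-lam1 * t_days) - np.exp(-lam2 * t_days))
    return lam2 * n2 / lam1   # ratio of daughter activity to initial parent activity

for t in (7, 30, 90):
    print(f"{t:3d} d ingrowth: 227Th activity = {daughter_activity_fraction(t):.2f} x A(227Ac, t=0)")
```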
Methods of separating short half-life radionuclides from a mixture of radionuclides
Bray, Lane A.; Ryan, Jack L.
1998-01-01
The present invention is a method of obtaining a radionuclide product selected from the group consisting of .sup.223 Ra and .sup.225 Ac, from a radionuclide "cow" of .sup.227 Ac or .sup.229 Th respectively. The method comprises the steps of a) permitting ingrowth of at least one radionuclide daughter from said radionuclide "cow" forming an ingrown mixture; b) insuring that the ingrown mixture is a nitric acid ingrown mixture; c) passing the nitric acid ingrown mixture through a first nitrate form ion exchange column which permits separating the "cow" from at least one radionuclide daughter; d) insuring that the at least one radionuclide daughter contains the radionuclide product; e) passing the at least one radionuclide daughter through a second ion exchange column and separating the at least one radionuclide daughter from the radionuclide product and f) recycling the at least one radionuclide daughter by adding it to the "cow". In one embodiment the radionuclide "cow" is the .sup.227 Ac, the at least one daughter radionuclide is a .sup.227 Th and the product radionuclide is the .sup.223 Ra and the first nitrate form ion exchange column passes the .sup.227 Ac and retains the .sup.227 Th. In another embodiment the radionuclide "cow"is the .sup.229 Th, the at least one daughter radionuclide is a .sup.225 Ra and said product radionuclide is the .sup.225 Ac and the .sup.225 Ac and nitrate form ion exchange column retains the .sup.229 Th and passes the .sup.225 Ra/Ac.
Puyraimond-Zemmour, Déborah; Etcheto, Adrien; Fautrel, Bruno; Balanescu, Andra; de Wit, Maarten; Heiberg, Turid; Otsa, Kati; Kvien, Tore K; Dougados, Maxime; Gossec, Laure
2017-10-01
To explore the link between a patient acceptable symptom state (PASS) and patient-perceived impact in rheumatoid arthritis (RA) and psoriatic arthritis (PsA). This was a cross-sectional study of unselected patients with definite RA or PsA. Pain, functional capacity, fatigue, coping, and sleep disturbance were assessed using a numeric rating scale (0-10) and compared between patients in PASS or not (Cohen's effect sizes). The domains of health associated with PASS status were assessed by multivariate forward logistic regression, and PASS thresholds were determined using the 75th percentile method and receiver operating characteristic analyses. Among 977 patients (531 with RA, 446 with PsA), the mean ± SD age was 53.4 ± 13.2 years, mean ± SD disease duration was 11.2 ± 10.0 years, and 637 (65.8%) were women. In all, 595 patients (60.9%) were in PASS; they had lower symptom levels, and all domains of health except sleep disturbance discriminated clearly between patients in PASS or not (effect sizes 0.73-1.45 in RA and 0.82-1.52 in PsA). In multivariate analysis, less pain and better coping were predictive of being in PASS. Odds ratios were: RA pain 0.80 (95% confidence interval [95% CI] 0.67-0.96), PsA pain 0.63 (95% CI 0.52-0.75), RA coping 0.84 (95% CI 0.74-0.96), and PsA coping 0.83 (95% CI 0.71-0.97). The cutoffs of symptom intensity (range 0-10), corresponding to PASS for the 5 domains of health and the 2 diseases were similar, i.e., approximately 4-5. In RA and PsA, PASS was associated with the 5 domains of health analyzed, and in particular with less pain and better coping. © 2016, American College of Rheumatology.
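A minimal sketch of the 75th percentile method mentioned above is given below: the PASS cutoff for a symptom scale is taken as the 75th percentile of scores among patients who report being in an acceptable state; the scores and PASS responses here are simulated.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic 0-10 pain scores and self-reported PASS status (True = acceptable state).
pain = np.clip(rng.normal(4.5, 2.5, size=500), 0, 10)
in_pass = rng.random(500) < 1 / (1 + np.exp(pain - 5))   # lower pain -> more likely in PASS

# 75th percentile method: the PASS cutoff is the 75th percentile of scores
# among patients who report being in an acceptable symptom state.
pass_threshold = np.percentile(pain[in_pass], 75)
print(f"PASS cutoff for pain: {pass_threshold:.1f} / 10")
```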
Equal Channel Angular Pressing (ECAP) and Its Application to Grain Refinement of Al-Zn-Mg-Cu Alloy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tekeli, Sueleyman; Gueral, Ahmet
The microstructure of a metal can be considerably changed by severe plastic deformation techniques such as high pressure torsion, extrusion and equal-channel angular pressing (ECAP). Among these methods, ECAP is particularly attractive because it has the potential for introducing significant grain refinement and a homogeneous microstructure into bulk materials. Typically, it reduces the grain size to the submicrometer level or even the nanometer range and thus produces materials that are capable of exhibiting unusual mechanical properties. In the present study, a test unit for equal channel angular pressing was constructed and this system was used for an Al-Zn-Mg-Cu alloy. After the optimization tests, it was seen that the most effective lubricant for the dies was MoS2, the pressing pressure was around 25-35 ton and the pressing speed was 2 mm/s. Using these parameters, the Al-Zn-Mg-Cu alloy was successfully ECAPed up to 14 passes at 200 °C using route C. After the ECAP tests, the specimens were characterized by transmission electron microscopy (TEM), hardness and macrostructural investigations. It was seen that the plastic deformation in the ECAPed specimens occurred from the edge to the centre like a whirlpool. In addition, the deformation intensity increased with increasing pass number. The grain size of the specimens also decreased effectively with increasing pass number: while the grain size of the unECAPed specimen was 10 μm, this value decreased to 300 nm after 14 passes. At the beginning there was a tendency toward banding of the grains along the deformation direction, but homogeneous and equiaxed grains were formed with increasing pass number. This grain refinement was a result of an interaction between shear strain and thermal recovery during ECAP processing. Hardness measurements showed that the hardness values increased up to 4 passes, decreased markedly at the 6th pass, increased again at the 8th pass and after this pass decreased again due to dynamic recrystallization.
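For context on how strain accumulates with pass number, the sketch below evaluates the commonly used Iwahashi relation for equivalent strain per ECAP pass; the die angles (Phi = 90°, Psi = 20°) are placeholders rather than the geometry used in this study.

```python
import numpy as np

def ecap_equivalent_strain(n_passes, phi_deg=90.0, psi_deg=20.0):
    """Accumulated equivalent strain after N ECAP passes (Iwahashi et al. relation)."""
    phi, psi = np.radians(phi_deg), np.radians(psi_deg)
    half = (phi + psi) / 2.0
    per_pass = (2.0 / np.tan(half) + psi / np.sin(half)) / np.sqrt(3.0)
    return n_passes * per_pass

for n in (1, 4, 8, 14):
    print(f"{n:2d} passes -> equivalent strain ~ {ecap_equivalent_strain(n):.1f}")
```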
Klein, Britt; Meyer, Denny; Austin, David William; Abbott, Jo-Anne M
2015-01-01
Background Internet-based assessment has the potential to assist with the diagnosis of mental health disorders and overcome the barriers associated with traditional services (eg, cost, stigma, distance). Further to existing online screening programs available, there is an opportunity to deliver more comprehensive and accurate diagnostic tools to supplement the assessment and treatment of mental health disorders. Objective The aim was to evaluate the diagnostic criterion validity and test-retest reliability of the electronic Psychological Assessment System (e-PASS), an online, self-report, multidisorder, clinical assessment and referral system. Methods Participants were 616 adults residing in Australia, recruited online, and representing prospective e-PASS users. Following e-PASS completion, 158 participants underwent a telephone-administered structured clinical interview and 39 participants repeated the e-PASS within 25 days of initial completion. Results With structured clinical interview results serving as the gold standard, diagnostic agreement with the e-PASS varied considerably from fair (eg, generalized anxiety disorder: κ=.37) to strong (eg, panic disorder: κ=.62). Although the e-PASS’ sensitivity also varied (0.43-0.86) the specificity was generally high (0.68-1.00). The e-PASS sensitivity generally improved when reducing the e-PASS threshold to a subclinical result. Test-retest reliability ranged from moderate (eg, specific phobia: κ=.54) to substantial (eg, bulimia nervosa: κ=.87). Conclusions The e-PASS produces reliable diagnostic results and performs generally well in excluding mental disorders, although at the expense of sensitivity. For screening purposes, the e-PASS subclinical result generally appears better than a clinical result as a diagnostic indicator. Further development and evaluation is needed to support the use of online diagnostic assessment programs for mental disorders. Trial Registration Australian and New Zealand Clinical Trials Registry ACTRN121611000704998; http://www.anzctr.org.au/trial_view.aspx?ID=336143 (Archived by WebCite at http://www.webcitation.org/618r3wvOG). PMID:26392066
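The agreement statistics quoted above can be reproduced from a 2x2 table of tool results versus interview diagnoses; the sketch below computes sensitivity, specificity, and Cohen's kappa from invented counts.

```python
def diagnostic_agreement(tp, fp, fn, tn):
    """Sensitivity, specificity and Cohen's kappa from a 2x2 diagnostic table."""
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    p_observed = (tp + tn) / n
    p_expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (p_observed - p_expected) / (1 - p_expected)
    return sensitivity, specificity, kappa

# Invented counts: online tool vs. structured clinical interview for one disorder.
sens, spec, kappa = diagnostic_agreement(tp=18, fp=9, fn=6, tn=125)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} kappa={kappa:.2f}")
```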
Superconducting magnetic shielding apparatus and method
Clem, John R.; Clem, John R.
1983-01-01
Disclosed is a method and apparatus for providing magnetic shielding around a working volume. The apparatus includes a hollow elongated superconducting shell or cylinder having an elongated low magnetic pinning central portion, and two high magnetic pinning end regions. Transition portions of varying magnetic pinning properties are interposed between the central and end portions. The apparatus further includes a solenoid substantially coextensive with and overlying the superconducting cylinder, so as to be magnetically coupled therewith. The method includes the steps of passing a longitudinally directed current through the superconducting cylinder so as to depin magnetic reservoirs trapped in the cylinder. Next, a circumferentially directed current is passed through the cylinder, while a longitudinally directed current is maintained. Depinned magnetic reservoirs are moved to the end portions of the cylinder, where they are trapped.
Superconducting magnetic shielding apparatus and method
Clem, J.R.; Clem, J.R.
1983-10-11
Disclosed are a method and apparatus for providing magnetic shielding around a working volume. The apparatus includes a hollow elongated superconducting shell or cylinder having an elongated low magnetic pinning central portion, and two high magnetic pinning end regions. Transition portions of varying magnetic pinning properties are interposed between the central and end portions. The apparatus further includes a solenoid substantially coextensive with and overlying the superconducting cylinder, so as to be magnetically coupled therewith. The method includes the steps of passing a longitudinally directed current through the superconducting cylinder so as to depin magnetic reservoirs trapped in the cylinder. Next, a circumferentially directed current is passed through the cylinder, while a longitudinally directed current is maintained. Depinned magnetic reservoirs are moved to the end portions of the cylinder, where they are trapped. 5 figs.
Superconducting magnetic shielding apparatus and method
Clem, J.R.
1982-07-09
Disclosed is a method and apparatus for providing magnetic shielding around a working volume. The apparatus includes a hollow elongated superconducting shell or cylinder having an elongated low magnetic pinning central portion, and two high magnetic pinning end regions. Transition portions of varying magnetic pinning properties are interposed between the central and end portions. The apparatus further includes a solenoid substantially coextensive with and overlying the superconducting cylinder, so as to be magnetically coupled therewith. The method includes the steps of passing a longitudinally directed current through the superconducting cylinder so as to depin magnetic reservoirs trapped in the cylinder. Next, a circumferentially directed current is passed through the cylinder, while a longitudinally directed current is maintained. Depinned magnetic reservoirs are moved to the end portions of the cylinder, where they are trapped.
Steiner, Silvan
2018-01-01
The importance of various information sources in decision-making in interactive team sports is debated. While some highlight the role of the perceptual information provided by the current game context, others point to the role of knowledge-based information that athletes have regarding their team environment. Recently, an integrative perspective considering the simultaneous involvement of both of these information sources in decision-making in interactive team sports has been presented. In a theoretical example concerning passing decisions, the simultaneous involvement of perceptual and knowledge-based information has been illustrated. However, no precast method of determining the contribution of these two information sources empirically has been provided. The aim of this article is to bridge this gap and present a statistical approach to estimating the effects of perceptual information and associative knowledge on passing decisions. To this end, a sample dataset of scenario-based passing decisions is analyzed. This article shows how the effects of perceivable team positionings and athletes' knowledge about their fellow team members on passing decisions can be estimated. Ways of transferring this approach to real-world situations and implications for future research using more representative designs are presented.
Users manual for the Chameleon parallel programming tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gropp, W.; Smith, B.
1993-06-01
Message passing is a common method for writing programs for distributed-memory parallel computers. Unfortunately, the lack of a standard for message passing has hampered the construction of portable and efficient parallel programs. In an attempt to remedy this problem, a number of groups have developed their own message-passing systems, each with its own strengths and weaknesses. Chameleon is a second-generation system of this type. Rather than replacing these existing systems, Chameleon is meant to supplement them by providing a uniform way to access many of these systems. Chameleon's goals are to (a) be very lightweight (low overhead), (b) be highly portable, and (c) help standardize program startup and the use of emerging message-passing operations such as collective operations on subsets of processors. Chameleon also provides a way to port programs written using PICL or Intel NX message passing to other systems, including collections of workstations. Chameleon is tracking the Message-Passing Interface (MPI) draft standard and will provide both an MPI implementation and an MPI transport layer. Chameleon provides support for heterogeneous computing by using p4 and PVM. Chameleon's support for homogeneous computing includes the portable libraries p4, PICL, and PVM and vendor-specific implementations for Intel NX, IBM EUI (SP-1), and Thinking Machines CMMD (CM-5). Support for Ncube and PVM 3.x is also under development.
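Chameleon predates today's standard APIs, but the point-to-point pattern it wraps looks much like the following mpi4py sketch (not Chameleon's own interface); the script assumes an MPI runtime and at least two ranks.

```python
# A minimal point-to-point message-passing sketch using mpi4py (not Chameleon itself).
# Run with, e.g.:  mpiexec -n 2 python ping.py   (script name is illustrative)
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    payload = {"step": 1, "data": [0.5, 1.5, 2.5]}
    comm.send(payload, dest=1, tag=11)          # blocking send to rank 1
    reply = comm.recv(source=1, tag=22)
    print("rank 0 got:", reply)
elif rank == 1:
    msg = comm.recv(source=0, tag=11)           # blocking receive from rank 0
    comm.send(sum(msg["data"]), dest=0, tag=22)
```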
Purge gas protected transportable pressurized fuel cell modules and their operation in a power plant
Zafred, P.R.; Dederer, J.T.; Gillett, J.E.; Basel, R.A.; Antenucci, A.B.
1996-11-12
A fuel cell generator apparatus and method of its operation involves: passing pressurized oxidant gas and pressurized fuel gas into modules containing fuel cells, where the modules are each enclosed by a module housing surrounded by an axially elongated pressure vessel, and where there is a purge gas volume between the module housing and pressure vessel; passing pressurized purge gas through the purge gas volume to dilute any unreacted fuel gas from the modules; and passing exhaust gas and circulated purge gas and any unreacted fuel gas out of the pressure vessel; where the fuel cell generator apparatus is transportable when the pressure vessel is horizontally disposed, providing a low center of gravity. 11 figs.
Two-dimensional convolute integers for analytical instrumentation
NASA Technical Reports Server (NTRS)
Edwards, T. R.
1982-01-01
As new analytical instruments and techniques emerge with increased dimensionality, a corresponding need is seen for data processing logic which can appropriately address the data. Two-dimensional measurements reveal enhanced unknown mixture analysis capability as a result of the greater spectral information content over two one-dimensional methods taken separately. It is noted that two-dimensional convolute integers are merely an extension of the work by Savitzky and Golay (1964). It is shown that these low-pass, high-pass and band-pass digital filters are truly two-dimensional and that they can be applied in a manner identical to their one-dimensional counterpart, that is, as a weighted nearest-neighbor moving average with zero phase shift, using convolute integer (universal number) weighting coefficients.
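A hedged illustration of a two-dimensional low-pass "convolute integer" filter is given below, built as a separable product of 1D Savitzky-Golay smoothing coefficients and applied as a weighted nearest-neighbour moving average; the kernel size and test image are arbitrary and the coefficients are not the specific tables derived in the paper.

```python
import numpy as np
from scipy.signal import savgol_coeffs, convolve2d

# Build a separable 2D smoothing kernel from 1D Savitzky-Golay (least-squares) coefficients.
c = savgol_coeffs(window_length=5, polyorder=2)   # 1D smoothing weights
kernel_2d = np.outer(c, c)                        # 5x5 weighted nearest-neighbour kernel

rng = np.random.default_rng(5)
x, y = np.meshgrid(np.linspace(-2, 2, 64), np.linspace(-2, 2, 64))
clean = np.exp(-(x**2 + y**2))                    # ideal 2D spectral peak
spectrum_2d = clean + 0.1 * rng.normal(size=x.shape)

smoothed = convolve2d(spectrum_2d, kernel_2d, mode="same", boundary="symm")
print(f"noise std before: {np.std(spectrum_2d - clean):.3f}, after: {np.std(smoothed - clean):.3f}")
```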
40 CFR 53.34 - Test procedure for methods for PM10 and Class I methods for PM2.5.
Code of Federal Regulations, 2011 CFR
2011-07-01
... linear regression parameters (slope, intercept, and correlation coefficient) describing the relationship... correlation coefficient. (2) To pass the test for comparability, the slope, intercept, and correlation...
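Because the excerpt above truncates the actual numeric limits, the sketch below uses hypothetical placeholder limits; it only shows the mechanics of the comparability check, regressing candidate-sampler measurements against the reference method and inspecting slope, intercept, and correlation coefficient.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(6)
reference = rng.uniform(5, 40, size=30)                    # reference-method PM concentrations
candidate = 1.02 * reference - 0.3 + rng.normal(0, 1, 30)  # candidate-method measurements

fit = linregress(reference, candidate)

# Hypothetical acceptance limits (placeholders, NOT the values from 40 CFR 53.34).
slope_ok = 0.9 <= fit.slope <= 1.1
intercept_ok = abs(fit.intercept) <= 2.0
correlation_ok = fit.rvalue >= 0.97

print(f"slope={fit.slope:.3f} intercept={fit.intercept:.2f} r={fit.rvalue:.3f} "
      f"pass={slope_ok and intercept_ok and correlation_ok}")
```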
METHOD OF MAKING SPHERICAL ACTINIDE CARBIDE
White, G.D.; O'Rourke, D.C.
1962-12-25
This patent describes a method of making uniform, spherical, nonpyrophoric UC. UO2 and carbon are mixed in stoichiometric proportions and passed through a plasma flame of inert gas at 10,000 to 13,000 deg C. (AEC)
Method of texturing a superconductive oxide precursor
DeMoranville, Kenneth L.; Li, Qi; Antaya, Peter D.; Christopherson, Craig J.; Riley, Jr., Gilbert N.; Seuntjens, Jeffrey M.
1999-01-01
A method of forming a textured superconductor wire includes constraining an elongated superconductor precursor between two constraining elongated members placed in contact therewith on opposite sides of the superconductor precursor, and passing the superconductor precursor with the two constraining members through flat rolls to form the textured superconductor wire. The method includes selecting desired cross-sectional shape and size constraining members to control the width of the formed superconductor wire. A textured superconductor wire formed by the method of the invention has regular-shaped, curved sides and is free of flashing. A rolling assembly for single-pass rolling of the elongated precursor superconductor includes two rolls, two constraining members, and a fixture for feeding the precursor superconductor and the constraining members between the rolls. In alternate embodiments of the invention, the rolls can have machined regions which will contact only the elongated constraining members and affect the lateral deformation and movement of those members during the rolling process.
Quantitative measurement of pass-by noise radiated by vehicles running at high speeds
NASA Astrophysics Data System (ADS)
Yang, Diange; Wang, Ziteng; Li, Bing; Luo, Yugong; Lian, Xiaomin
2011-03-01
It has been a challenge in the past to accurately locate and quantify the pass-by noise sources radiated by running vehicles. A system composed of a microphone array is developed in the present work for this purpose. An acoustic-holography method for moving sound sources is designed to handle the Doppler effect effectively in the time domain. The effective sound pressure distribution is reconstructed on the surface of a running vehicle. The method achieves high calculation efficiency and is able to quantitatively measure the sound pressure at the source and identify the location of the main sound source. The method is also validated by simulation experiments and by measurement tests with known moving speakers. Finally, the engine noise, tire noise, exhaust noise and wind noise of a vehicle running at different speeds are successfully identified by this method.
SU-F-T-301: Planar Dose Pass Rate Inflation Due to the MapCHECK Measurement Uncertainty Function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, D; Spaans, J; Kumaraswamy, L
Purpose: To quantify the effect of the Measurement Uncertainty function on planar dosimetry pass rates, as analyzed with Sun Nuclear Corporation analytic software (“MapCHECK” or “SNC Patient”). This optional function is toggled on by default upon software installation, and automatically increases the user-defined dose percent difference (%Diff) tolerance for each planar dose comparison. Methods: Dose planes from 109 IMRT fields and 40 VMAT arcs were measured with the MapCHECK 2 diode array, and compared to calculated planes from a commercial treatment planning system. Pass rates were calculated within the SNC analytic software using varying calculation parameters, including Measurement Uncertainty on and off. By varying the %Diff criterion for each dose comparison performed with Measurement Uncertainty turned off, an effective %Diff criterion was defined for each field/arc corresponding to the pass rate achieved with MapCHECK Uncertainty turned on. Results: For 3%/3mm analysis, the Measurement Uncertainty function increases the user-defined %Diff by 0.8–1.1% average, depending on plan type and calculation technique, for an average pass rate increase of 1.0–3.5% (maximum +8.7%). For 2%, 2 mm analysis, the Measurement Uncertainty function increases the user-defined %Diff by 0.7–1.2% average, for an average pass rate increase of 3.5–8.1% (maximum +14.2%). The largest increases in pass rate are generally seen with poorly-matched planar dose comparisons; the MapCHECK Uncertainty effect is markedly smaller as pass rates approach 100%. Conclusion: The Measurement Uncertainty function may substantially inflate planar dose comparison pass rates for typical IMRT and VMAT planes. The types of uncertainties incorporated into the function (and their associated quantitative estimates) as described in the software user’s manual may not accurately estimate realistic measurement uncertainty for the user’s measurement conditions. Pass rates listed in published reports or otherwise compared to the results of other users or vendors should clearly indicate whether the Measurement Uncertainty function is used.
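The inflation mechanism can be illustrated with a plain point-by-point percent-difference comparison (not the vendor's analysis or its actual uncertainty model): widening the tolerance, as the Measurement Uncertainty function effectively does, raises the pass rate. The data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
calculated = rng.uniform(50, 200, size=1000)                 # planned dose points (cGy)
measured = calculated * (1 + rng.normal(0, 0.02, 1000))      # measurements with ~2% noise

def percent_diff_pass_rate(meas, calc, tol_percent):
    """Fraction of points whose |%Diff| (normalized to the global maximum dose) is within tolerance."""
    pct_diff = 100 * np.abs(meas - calc) / calc.max()
    return 100 * np.mean(pct_diff <= tol_percent)

for tol in (2.0, 3.0, 4.0):   # e.g. a 3% criterion vs an uncertainty-widened ~4%
    print(f"tolerance {tol:.1f}% -> pass rate {percent_diff_pass_rate(measured, calculated, tol):.1f}%")
```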
Nitrogen Dioxide Total Column Over Terra Nova Bay Station - Antarctica - During 2001
NASA Astrophysics Data System (ADS)
Bortoli, D.; Ravegnani, F.; Giovanelli, G.; Petritoli, A.; Kostadinov, I.
GASCOD (Gas Analyzer Spectrometer Correlating Optical Differences), installed at the Italian Antarctic Station of Terra Nova Bay (TNB) - 74.69S, 164.12E - since 1995, acquired a full dataset of zenith scattered light measurements for the year 2001. The application of the DOAS methodology to the collected data gave as final results the slant column values for nitrogen dioxide. The seasonal variation shows a maximum in the summer and is in good agreement with the results obtained by other authors. The data analysis is performed using different parameters such as the potential vorticity (PV) at 500 K and the atmospheric temperatures at the same level. After verification of the linear dependency between the NO2 slant column values and the temperature of the NO2 cross section utilized in the DOAS algorithm, the actual stratospheric temperatures (from ECMWF) over TNB are applied to the results. The resulting changes in the nitrogen dioxide slant column values highlight the good match between the NO2 AM/PM ratio and the potential vorticity at 500 K. The NO2 slant column values follow the variations of the stratospheric temperature mainly during the spring season, when the lowest temperatures are observed and the ozone-hole phenomena mainly occur. ACKNOWLEDGMENTS: The author Daniele Bortoli was financially supported by the "Subprograma Ciência e Tecnologia do Terceiro Quadro Comunitário de Apoio". The National Program for Antarctic Research (PNRA) supported this research.
Chatrath, Jatin; Aziz, Mohsin; Helaoui, Mohamed
2018-01-01
Reconfigurable and multi-standard RF front-ends for wireless communication and sensor networks have gained importance as building blocks for the Internet of Things. Simpler and highly-efficient transmitter architectures, which can transmit better quality signals with reduced impairments, are an important step in this direction. In this regard, mixer-less transmitter architecture, namely, the three-way amplitude modulator-based transmitter, avoids the use of imperfect mixers and frequency up-converters, and their resulting distortions, leading to an improved signal quality. In this work, an augmented memory polynomial-based model for the behavioral modeling of such mixer-less transmitter architecture is proposed. Extensive simulations and measurements have been carried out in order to validate the accuracy of the proposed modeling strategy. The performance of the proposed model is evaluated using normalized mean square error (NMSE) for long-term evolution (LTE) signals. NMSE for a LTE signal of 1.4 MHz bandwidth with 100,000 samples for digital combining and analog combining are recorded as −36.41 dB and −36.9 dB, respectively. Similarly, for a 5 MHz signal the proposed models achieves −31.93 dB and −32.08 dB NMSE using digital and analog combining, respectively. For further validation of the proposed model, amplitude-to-amplitude (AM-AM), amplitude-to-phase (AM-PM), and the spectral response of the modeled and measured data are plotted, reasonably meeting the desired modeling criteria. PMID:29510501
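A hedged sketch of a plain memory-polynomial behavioral model (the "augmented" terms of the paper are not reproduced) fitted by least squares, with NMSE reported in dB, is shown below on a synthetic baseband signal.

```python
import numpy as np

def mp_basis(x, K=5, M=3):
    """Memory-polynomial regressors x(n-m)*|x(n-m)|^(k-1) for k=1..K, m=0..M."""
    n = len(x)
    cols = []
    for m in range(M + 1):
        xm = np.concatenate([np.zeros(m, dtype=complex), x[: n - m]])
        for k in range(1, K + 1):
            cols.append(xm * np.abs(xm) ** (k - 1))
    return np.column_stack(cols)

def nmse_db(y_ref, y_model):
    return 10 * np.log10(np.sum(np.abs(y_ref - y_model) ** 2) / np.sum(np.abs(y_ref) ** 2))

rng = np.random.default_rng(8)
x = (rng.normal(size=4096) + 1j * rng.normal(size=4096)) / np.sqrt(2)   # baseband input
y = x + 0.1 * x * np.abs(x) ** 2 + 0.02 * np.roll(x, 1)                 # synthetic nonlinear output

Phi = mp_basis(x)
coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # least-squares model identification
print(f"NMSE = {nmse_db(y, Phi @ coeffs):.1f} dB")
```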
Dual energy approach for cone beam artifacts correction
NASA Astrophysics Data System (ADS)
Han, Chulhee; Choi, Shinkook; Lee, Changwoo; Baek, Jongduk
2017-03-01
Cone beam computed tomography systems generate 3D volumetric images, which provide more morphological information than radiography and tomosynthesis systems. However, images reconstructed by the FDK algorithm contain cone beam artifacts when the cone angle is large. To reduce these artifacts, a two-pass algorithm has been proposed. The two-pass algorithm assumes that the cone beam artifacts are mainly caused by high-density materials and proposes an effective method to estimate the error images (i.e., cone beam artifact images) produced by the high-density materials. While this approach is simple and effective for a small cone angle (i.e., 5-7 degrees), the correction performance degrades as the cone angle increases. In this work, we propose a new method to reduce the cone beam artifacts using a dual-energy technique. The basic idea of the proposed method is to estimate the error images generated by the high-density materials more reliably. To do this, projection data of the high-density materials are extracted from dual-energy CT projection data using a material decomposition technique, and then reconstructed by iterative reconstruction with total-variation regularization. The reconstructed high-density materials are used to estimate the error images from the original FDK images. The performance of the proposed method is compared with the two-pass algorithm using root mean square errors. The results show that the proposed method reduces the cone beam artifacts more effectively, especially at large cone angles.
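The material-decomposition step can be sketched as a per-ray 2x2 linear solve when effective attenuation coefficients are assumed for the two basis materials; the coefficients, path lengths, and noise below are illustrative only.

```python
import numpy as np

# Assumed effective linear attenuation coefficients (1/cm) at the two spectra:
#            soft tissue   bone-like
mu_low  = np.array([0.22, 0.60])
mu_high = np.array([0.18, 0.30])
M = np.vstack([mu_low, mu_high])          # 2x2 mixing matrix (illustrative values)

rng = np.random.default_rng(9)
t_soft = rng.uniform(5, 20, size=1000)    # true path lengths (cm) through each material
t_bone = rng.uniform(0, 3, size=1000)
thickness = np.vstack([t_soft, t_bone])

projections = M @ thickness               # line integrals measured at low/high energy
projections = projections + rng.normal(0, 0.01, projections.shape)

# Per-ray decomposition: recover material thicknesses, keep only the bone-like part.
decomposed = np.linalg.solve(M, projections)
print(f"bone-path RMS error: {np.sqrt(np.mean((decomposed[1] - t_bone) ** 2)):.3f} cm")
```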
A Wide Band Absorbing Material Design Using Band-Pass Frequency Selective Surface
NASA Astrophysics Data System (ADS)
Xu, Yonggang; Xu, Qiang; Liu, Ting; Zheng, Dianliang; Zhou, Li
2018-03-01
Based on the favorable high-frequency characteristics of an Fe-based absorbing coating, a method for designing a broadband absorbing structure using a frequency selective surface (FSS) is proposed. First, the frequency response of FSS structures of different sizes was simulated from their transmission and reflection characteristics. Second, a genetic algorithm was used to optimize high-frequency broadband absorbing materials with single and double magnetic layers. Finally, the absorbing characteristics of the iron layer were analyzed with the band-pass FSS structure embedded; the results showed that the band-pass FSS helps widen the absorption band. When the FSS was placed as the bottom layer, good low-frequency absorption was achieved without weakening the high-frequency absorbing performance, because the band-pass FSS promotes low-frequency absorption while providing a shielding effect at high frequency. The results of this paper provide guidance for designing and manufacturing broadband absorbing materials.
NASA Astrophysics Data System (ADS)
Zhao, De; Wang, Wei; Li, Zhibin; Shan, Xiaonian; Sun, Xin
Bicycle facilities are quite common in China, but there are not enough quantitative methods to evaluate the Level of Service (LOS) of bicycle roadways. The number of passing events, which captures the interactions between bicyclists, has been shown to be a suitable indicator for evaluating bicycle LOS under the particular traffic and roadway conditions in China. The primary objective of this study is to propose a model that accounts for the delay effects of passing events and riders' overtaking motivation. Field data were collected on South Zhongshan Road and Huaihai Road in Nanjing, China, with 639 bicyclists investigated. A new mathematical model was then built to evaluate these effects through probability and regression analyses. It was found that the delay effect of passing events and riders' overtaking motivation are significant influencing factors that cannot be omitted. A correlation test shows that the model predictions fit the field data more closely than those of the previous model.
2.1 meter (82 inch) Slip Ring By-Pass Project
NASA Astrophysics Data System (ADS)
Bryan, Corby B.
2006-12-01
I will describe a project to bypass the old method of getting control communications above the rotation point of the McDonald Observatory 2.1 meter dome. The old method used slip rings that were implemented in the late 1930s. The new system uses wireless serial commands, which allow the control lines to be taken off the slip rings, leaving only power and ground. I will describe how the concept was devised so the slip rings could be by-passed, which microcontroller system was chosen and used, how the wireless units were set up, and finally how the system was tested and put in place with only limited tasks to control (i.e., the opening and closing of the shutters). We describe the advantages of making this upgrade and how it could benefit any telescope interested in upgrading its communication systems. This project was designed and tested in ten weeks during the McDonald Observatory REU and was supported under NSF AST-0243745. The system was designed so that it could be installed while running side by side with the current method of getting control above the rotation point. The system is still in place being tested on the 2.1 meter telescope and will soon be fully implemented by the University of Texas McDonald Observatory OS staff.
Carpenter, Clay E.; Morrison, Stanley J.
2001-07-03
This invention is directed to a process for treating the flow of anaerobic groundwater through an aquifer with a primary treatment medium, preferably iron, and then passing the treated groundwater through a second porous medium through which an oxygenated gas is passed in order to oxygenate the dissolved primary treatment material and convert it into an insoluble material, thereby removing the dissolved primary treatment material from the groundwater.
NASA Astrophysics Data System (ADS)
Wayand, Nicholas E.; Stimberis, John; Zagrodnik, Joseph P.; Mass, Clifford F.; Lundquist, Jessica D.
2016-09-01
Low-level cold air from eastern Washington often flows westward through mountain passes in the Washington Cascades, creating localized inversions and locally reducing climatological temperatures. The persistence of this inversion during a frontal passage can result in complex patterns of snow and rain that are difficult to predict. Yet these predictions are critical to support highway avalanche control, ski resort operations, and modeling of headwater snowpack storage. In this study we used observations of precipitation phase from a disdrometer and snow depth sensors across Snoqualmie Pass, WA, to evaluate surface-air-temperature-based and mesoscale-model-based predictions of precipitation phase during the anomalously warm 2014-2015 winter. Correlations of phase between surface-based methods and observations were greatly improved (r2 from 0.45 to 0.66) and frozen precipitation biases reduced (+36% to -6% of accumulated snow water equivalent) by using air temperature from a nearby higher-elevation station, which was less impacted by low-level inversions. Alternatively, we found a hybrid method that combines surface-based predictions with output from the Weather Research and Forecasting mesoscale model to have improved skill (r2 = 0.61) over both parent models (r2 = 0.42 and 0.55). These results suggest that prediction of precipitation phase in mountain passes can be improved by incorporating observations or models from above the surface layer.
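For illustration, a small Python sketch of the kind of surface-air-temperature-based phase prediction and r² scoring discussed above; the 1 °C rain/snow threshold and the sample data are assumptions for the sketch, not values from the study.

import numpy as np

def predict_phase(air_temp_c, threshold_c=1.0):
    """Classify precipitation phase from air temperature (1 = snow, 0 = rain).
    threshold_c is an assumed rain/snow threshold; the study evaluates several
    surface-based predictors rather than this specific value."""
    return (np.asarray(air_temp_c) <= threshold_c).astype(float)

def r_squared(predicted, observed):
    """Squared Pearson correlation between predicted and observed phase."""
    r = np.corrcoef(predicted, observed)[0, 1]
    return r ** 2

# Hypothetical hourly data: temperature at a nearby higher-elevation station and
# the disdrometer-observed phase at the pass (1 = snow, 0 = rain).
temps = np.array([-2.0, 0.5, 1.2, 3.0, -0.4, 2.1])
observed = np.array([1, 1, 0, 0, 1, 0])
print(r_squared(predict_phase(temps), observed))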
Two-pass imputation algorithm for missing value estimation in gene expression time series.
Tsiporkova, Elena; Boeva, Veselka
2007-10-01
Gene expression microarray experiments frequently generate datasets with multiple values missing. However, most of the analysis, mining, and classification methods for gene expression data require a complete matrix of gene array values. Therefore, the accurate estimation of missing values in such datasets has been recognized as an important issue, and several imputation algorithms have already been proposed to the biological community. Most of these approaches, however, are not particularly suitable for time series expression profiles. In view of this, we propose a novel imputation algorithm, which is specially suited for the estimation of missing values in gene expression time series data. The algorithm utilizes Dynamic Time Warping (DTW) distance in order to measure the similarity between time expression profiles, and subsequently selects for each gene expression profile with missing values a dedicated set of candidate profiles for estimation. Three different DTW-based imputation (DTWimpute) algorithms have been considered: position-wise, neighborhood-wise, and two-pass imputation. These have initially been prototyped in Perl, and their accuracy has been evaluated on yeast expression time series data using several different parameter settings. The experiments have shown that the two-pass algorithm consistently outperforms, in particular for datasets with a higher level of missing entries, the neighborhood-wise and the position-wise algorithms. The performance of the two-pass DTWimpute algorithm has further been benchmarked against the weighted K-Nearest Neighbors algorithm, which is widely used in the biological community; the former algorithm has appeared superior to the latter one. Motivated by these findings, indicating clearly the added value of the DTW techniques for missing value estimation in time series data, we have built an optimized C++ implementation of the two-pass DTWimpute algorithm. The software also provides for a choice between three different initial rough imputation methods.
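A simplified Python sketch of the two-pass idea described above: a rough first-pass imputation, followed by a second pass that re-estimates each missing entry from the profiles closest in DTW distance. This is a reconstruction for illustration, not the published DTWimpute implementation, and it omits the position-wise and neighborhood-wise variants.

import numpy as np

def dtw_distance(a, b):
    """Classic O(n*m) dynamic-time-warping distance between two profiles."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def two_pass_impute(data, k=5):
    """Two-pass DTW-based imputation sketch for a genes x timepoints matrix with
    NaNs. Pass 1: rough row-mean imputation. Pass 2: re-estimate each missing
    entry from the k complete profiles closest in DTW distance."""
    data = np.asarray(data, dtype=float)
    rough = np.where(np.isnan(data), np.nanmean(data, axis=1, keepdims=True), data)
    result = rough.copy()
    for g in range(data.shape[0]):
        missing = np.isnan(data[g])
        if not missing.any():
            continue
        others = [i for i in range(data.shape[0]) if i != g and not np.isnan(data[i]).any()]
        dists = np.array([dtw_distance(rough[g], data[i]) for i in others])
        nearest = [others[i] for i in np.argsort(dists)[:k]]
        weights = 1.0 / (1e-9 + np.sort(dists)[:k])   # closer profiles weigh more
        est = np.average(data[nearest], axis=0, weights=weights)
        result[g, missing] = est[missing]
    return result

profiles = np.array([[0.1, 0.4, np.nan, 0.9],
                     [0.2, 0.5, 0.7, 1.0],
                     [0.0, 0.3, 0.6, 0.8],
                     [0.3, 0.6, 0.8, 1.1]])
print(two_pass_impute(profiles, k=2))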
Normanno, Nicola; Pinto, Carmine; Taddei, Gianluigi; Gambacorta, Marcello; Castiglione, Francesca; Barberis, Massimo; Clemente, Claudio; Marchetti, Antonio
2013-06-01
The Italian Association of Medical Oncology (AIOM) and the Italian Society of Pathology and Cytology organized an external quality assessment (EQA) scheme for EGFR mutation testing in non-small-cell lung cancer. Ten specimens, including three small biopsies with known epidermal growth factor receptor (EGFR) mutation status, were validated in three referral laboratories and provided to 47 participating centers. The participants were requested to perform mutational analysis, using their usual method, and to submit results within a 4-week time frame. According to a predefined scoring system, two points were assigned to a correct genotype and zero points to false-negative or false-positive results. The threshold to pass the EQA was set at higher than 18 of 20 points. Two rounds were preplanned. All participating centers submitted the results within the time frame. Polymerase chain reaction (PCR)/sequencing was the main methodology used (n = 37 laboratories), although a few centers did use pyrosequencing (n = 8) or real-time PCR (n = 2). A significant number of analytical errors were observed (n = 20), with a high frequency of false-positive results (n = 16). The lower scores were obtained for the small biopsies. Fourteen of 47 centers (30%) that did not pass the first round, having a score less than or equal to 18 points, used PCR/sequencing, whereas 10 of 10 laboratories using pyrosequencing or real-time PCR passed the first round. Eight laboratories passed the second round. Overall, 41 of 47 centers (87%) passed the EQA. The results of the EQA for EGFR testing in non-small-cell lung cancer suggest that good quality EGFR mutational analysis is performed in Italian laboratories, although differences between testing methods were observed, especially for small biopsies.
NASA Astrophysics Data System (ADS)
Lipinska, Marta; Chrominski, Witold; Olejnik, Lech; Golinski, Jacek; Rosochowski, Andrzej; Lewandowska, Malgorzata
2017-10-01
In this study, an Al-Mg-Si alloy was processed via incremental equal channel angular pressing (I-ECAP) in order to obtain homogeneous, ultrafine-grained plates with low anisotropy of mechanical properties. This was the first attempt to process an Al-Mg-Si alloy using this technique. Samples in the form of 3 mm-thick square plates were subjected to I-ECAP with a 90 deg rotation around the axis normal to the plate surface between passes. Samples were investigated first in their initial state, then after a single pass of I-ECAP, and finally after four such passes. Analyses of the microstructure and mechanical properties demonstrated that the I-ECAP method can be successfully applied to Al-Mg-Si alloys. The average grain size decreased from the initial 15-19 µm to below 1 µm after four I-ECAP passes. The fraction of high-angle grain boundaries in the sample subjected to four I-ECAP passes lay within 53 to 57 pct, depending on the examined plane. The mechanism of grain refinement in the Al-Mg-Si alloy was found to be distinctly different from that in pure aluminum, with grain rotation being more prominent than grain subdivision, which was attributed to the lower stacking fault energy and reduced mobility of dislocations in the alloy. The ultimate tensile strength increased more than twofold, whereas the yield strength increased more than threefold. Additionally, the plates processed by I-ECAP exhibited low anisotropy of mechanical properties (in plane and across the thickness) in comparison to other SPD processing methods, which makes them attractive for further processing and applications.
Michallek, Florian; Dewey, Marc
2017-04-01
To introduce a novel hypothesis and method to characterise the pathomechanisms underlying myocardial ischemia in chronic ischemic heart disease by local fractal analysis (FA) of the ischemic myocardial transition region in perfusion imaging. Vascular mechanisms to compensate ischemia are regulated at various vascular scales, and their superimposed perfusion pattern is hypothesised to be self-similar. Dedicated FA software ("FraktalWandler") has been developed. The fractal dimensions during first pass (FDfirst-pass) and recirculation (FDrecirculation) are hypothesised to indicate the predominating pathomechanism and ischemic severity, respectively. Twenty-six patients with evidence of myocardial ischemia in 108 ischemic myocardial segments on magnetic resonance imaging (MRI) were analysed. The 40th and 60th percentiles of FDfirst-pass were used for pathomechanical classification, assigning lesions with FDfirst-pass ≤ 2.335 to predominating coronary microvascular dysfunction (CMD) and ≥ 2.387 to predominating coronary artery disease (CAD). The optimal classification point in ROC analysis was FDfirst-pass = 2.358. FDrecirculation correlated moderately with per cent diameter stenosis in invasive coronary angiography in lesions classified as CAD (r = 0.472, p = 0.001) but not CMD (r = 0.082, p = 0.600). The ischemic transition region may provide information on the pathomechanical composition and severity of myocardial ischemia. FA of this region is feasible and may improve diagnosis compared to traditional noninvasive myocardial perfusion analysis. • A novel hypothesis and method is introduced to pathophysiologically characterise myocardial ischemia. • The ischemic transition region appears to be a meaningful diagnostic target in perfusion imaging. • Fractal analysis may characterise the pathomechanical composition and severity of myocardial ischemia.
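The abstract does not specify the internals of the FraktalWandler software, so the sketch below uses a generic box-counting estimator of the fractal dimension of a binary 2-D region, as one standard way such a dimension could be computed; it is an assumption for illustration only, not the study's method.

import numpy as np

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate a fractal dimension of a binary 2-D region by box counting."""
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in box_sizes:
        # Number of s-by-s boxes containing at least one foreground pixel.
        h = mask.shape[0] // s * s
        w = mask.shape[1] // s * s
        trimmed = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(trimmed.any(axis=(1, 3)).sum())
    # Slope of log(count) against log(1/size) gives the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Toy example: a filled disc (a stand-in for a thresholded perfusion region)
# has a box-counting dimension close to 2.
yy, xx = np.mgrid[:128, :128]
disc = (yy - 64) ** 2 + (xx - 64) ** 2 < 50 ** 2
print(box_counting_dimension(disc))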
Lanzarotta, Adam; Lorenz, Lisa; Batson, JaCinta S; Flurer, Cheryl
2017-11-30
A simple, fast, sensitive and effective pass/fail field-friendly method has been developed for detecting sildenafil in suspect Viagra and unapproved tablets using handheld Raman spectrometers and silver colloids. The method involves dissolving a portion of a tablet in water followed by filtration and addition of silver colloids, resulting in a solution that can be measured directly through a glass vial. Over one hundred counterfeit Viagra and unapproved tablets were examined on three different devices during the method development phase of the study. While the pass/fail approach was found to be 92.6% effective on average, the efficacy increased to 97.4% on average when coupled with the software's "Discover Mode" feature that allows the user to compare a suspect spectrum to that of a stored sildenafil spectrum. The lowest concentration of sildenafil in a water/colloid solution that yielded a "Pass" was found to be 7.6μg/mL or 7.6 parts per million (ppm). For the analysis of suspect tablets, this value was found to be as low as 10μg/mL and as high as 625μg/mL. This variability was likely related to the tablet formulation, e.g., concentration of sildenafil, presence and concentration of water-soluble and/or water-insoluble ingredients. However, since most counterfeit Viagra and unapproved tablets contain >50mg sildenafil per tablet, such low concentrations will not be encountered often. Limited in-lab and in-field validation studies were conducted in which analysts/field agents followed the procedure outlined in this study for small sample sets. These individuals were provided with written instructions, a ∼20min demonstration regarding how to perform the procedure and use the instrument, and a kit with field-friendly supplies (purified bottled water from a local grocery store, disposable plastic pipettes, eye-dropper with a silver colloid solution, etc.). The method proved to be 98.3% and 91.7% effective for the in-lab and in-field validation studies, respectively, which demonstrated the ruggedness, simplicity and practicality of the method. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Li, Liyang; Wang, Jun; Feng, Mingde; Ma, Hua; Wang, Jiafu; Du, Hongliang; Qu, Shaobo
In this paper, we demonstrate a method of designing an all-dielectric metamaterial frequency selective surface (FSS) with ceramic resonators in a spatial arrangement. Compared with the traditional approach, spatial arrangement provides a flexible way to handle the permutation and combination of different ceramic resonators. With this method, the resonance response can be adjusted easily to achieve pass/stop band effects. As an example, a stop-band spatial-arrangement all-dielectric metamaterial FSS is designed. Its working band is 11.65-12.23 GHz. By adjusting the permittivity and geometrical parameters of the ceramic resonators, we can easily modulate the resonances, the band-pass or band-stop characteristic, and the working band.
Procedures, considerations for welding X-80 line pipe established
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hillenbrand, H.G.; Niederhoff, K.A.; Hauck, G.
1997-09-15
The possibility of manufacturing and laying high-strength Grade X-80 (GRS 550) linepipe has been proven in large projects that have already been implemented. Two welding methods for pipeline construction are well established: manual deposition of root and hot passes with cellulosic electrodes and of filler and cap passes with basic vertical-down electrodes (combined-electrode welding), and mechanized gas-metal arc welding (GMAW). This is also true for the welding consumables, which have been well-tuned to match the pipe material in strength. The pipe material is suitable for unrestricted use in onshore and offshore applications. The paper discusses higher grades, composition, weldability, welding methods, and economics.
CNN for breaking text-based CAPTCHA with noise
NASA Astrophysics Data System (ADS)
Liu, Kaixuan; Zhang, Rong; Qing, Ke
2017-07-01
A CAPTCHA ("Completely Automated Public Turing test to tell Computers and Human Apart") system is a program that most humans can pass but current computer programs could hardly pass. As the most common type of CAPTCHAs , text-based CAPTCHA has been widely used in different websites to defense network bots. In order to breaking textbased CAPTCHA, in this paper, two trained CNN models are connected for the segmentation and classification of CAPTCHA images. Then base on these two models, we apply sliding window segmentation and voting classification methods realize an end-to-end CAPTCHA breaking system with high success rate. The experiment results show that our method is robust and effective in breaking text-based CAPTCHA with noise.
NASA Astrophysics Data System (ADS)
Shang, Yanliang; Shi, Wenjun; Han, Tongyin; Qin, Zhichao; Du, Shouji
2017-10-01
The shield method has many advantages in the construction of urban subways and has become the preferred method for constructing urban subway tunnels. Taking the bridge-adjacent section of Shijiazhuang metro line 3 (the interval between the administrative center station and the garden park station) as the engineering background, a model of the double shield crossing the bridge pile foundation was set up. The deformation and internal forces of the pile foundation during shield construction were analyzed. The pile stress caused by shield construction increases, but the maximum stress is less than the design strength; the maximum surface settlement caused by the construction was 10.2 mm, and the results meet the construction requirements.
A method for reducing sampling jitter in digital control systems
NASA Technical Reports Server (NTRS)
Anderson, T. O.; Hurd, W. J.
1969-01-01
A digital phase-locked-loop system is designed in which the proportional control term is smoothed with a low-pass filter. This method does not significantly affect the loop dynamics when the smoothing filter bandwidth is wide compared with the loop bandwidth.
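A minimal Python sketch of the idea: the proportional path of a digital PLL loop filter is smoothed by a first-order low-pass before being summed with the integral path. The gains and smoothing coefficient are illustrative assumptions; per the abstract, the smoothing bandwidth should remain wide relative to the loop bandwidth.

import numpy as np

def filtered_pll(phase_error, kp=0.1, ki=0.01, alpha=0.2):
    """Loop-filter output for a digital PLL in which the proportional term is
    smoothed by a first-order low-pass (coefficient alpha) to reduce sampling
    jitter, while the integral path is left untouched."""
    smoothed_p = 0.0
    integrator = 0.0
    control = []
    for e in phase_error:
        smoothed_p += alpha * (kp * e - smoothed_p)   # low-pass the proportional path
        integrator += ki * e                          # integral path unchanged
        control.append(smoothed_p + integrator)
    return np.array(control)

# Example: a step phase error with sampling jitter modeled as additive noise.
rng = np.random.default_rng(1)
error = 1.0 + 0.2 * rng.standard_normal(200)
print(filtered_pll(error)[:5])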
Advancing Explosives Detection Capabilities: Vapor Detection
Atkinson, David
2018-05-11
A new, PNNL-developed method provides direct, real-time detection of trace amounts of explosives such as RDX, PETN and C-4. The method selectively ionizes a sample before passing the sample through a mass spectrometer to detect explosive vapors. The method could be used at airports to improve aviation security.
Advancing Explosives Detection Capabilities: Vapor Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atkinson, David
2012-10-15
A new, PNNL-developed method provides direct, real-time detection of trace amounts of explosives such as RDX, PETN and C-4. The method selectively ionizes a sample before passing the sample through a mass spectrometer to detect explosive vapors. The method could be used at airports to improve aviation security.
An interactive website for analytical method comparison and bias estimation.
Bahar, Burak; Tuncel, Ayse F; Holmes, Earle W; Holmes, Daniel T
2017-12-01
Regulatory standards mandate laboratories to perform studies to ensure accuracy and reliability of their test results. Method comparison and bias estimation are important components of these studies. We developed an interactive website for evaluating the relative performance of two analytical methods using R programming language tools. The website can be accessed at https://bahar.shinyapps.io/method_compare/. The site has an easy-to-use interface that allows both copy-pasting and manual entry of data. It also allows selection of a regression model and creation of regression and difference plots. Available regression models include Ordinary Least Squares, Weighted-Ordinary Least Squares, Deming, Weighted-Deming, Passing-Bablok and Passing-Bablok for large datasets. The server processes the data and generates downloadable reports in PDF or HTML format. Our website provides clinical laboratories a practical way to assess the relative performance of two analytical methods. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
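The website itself is built with R tools; as a language-neutral illustration, here is a minimal Python sketch of the Passing-Bablok point estimates (slope as the shifted median of pairwise slopes, intercept as the median of y - b·x). Confidence intervals, the linearity test, and the other regression models offered by the site are omitted, and the example data are made up.

import numpy as np

def passing_bablok(x, y):
    """Passing-Bablok regression slope and intercept (point estimates only)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slopes = []
    n = len(x)
    for i in range(n - 1):
        for j in range(i + 1, n):
            dx, dy = x[j] - x[i], y[j] - y[i]
            if dx == 0:          # vertical pairs are skipped in this sketch
                continue
            s = dy / dx
            if s != -1.0:        # slopes of exactly -1 are discarded
                slopes.append(s)
    slopes = np.sort(slopes)
    k = int(np.sum(slopes < -1.0))          # offset correcting for negative slopes
    m = len(slopes)
    if m % 2:
        b = slopes[(m + 1) // 2 + k - 1]
    else:
        b = 0.5 * (slopes[m // 2 + k - 1] + slopes[m // 2 + k])
    a = np.median(y - b * x)
    return b, a

method_a = [1.0, 2.1, 3.0, 3.9, 5.2, 6.1]
method_b = [1.1, 2.0, 3.2, 4.1, 5.0, 6.3]
print(passing_bablok(method_a, method_b))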
Vetter, Jeffrey S.
2005-02-01
The method and system described herein present a technique for performance analysis that helps users understand the communication behavior of their message passing applications. The method and system described herein may automatically classify individual communication operations and reveal the cause of communication inefficiencies in the application. This classification allows the developer to quickly focus on the culprits of truly inefficient behavior, rather than manually foraging through massive amounts of performance data. Specifically, the method and system described herein trace the message operations of Message Passing Interface (MPI) applications and then classify each individual communication event using a supervised learning technique: decision tree classification. The decision tree may be trained using microbenchmarks that demonstrate both efficient and inefficient communication. Since the method and system described herein adapt to the target system's configuration through these microbenchmarks, they simultaneously automate the performance analysis process and improve classification accuracy. The method and system described herein may improve the accuracy of performance analysis and dramatically reduce the amount of data that users must encounter.
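A hedged sketch of the classification step in Python with scikit-learn: a decision tree trained on microbenchmark-labelled communication events and then applied to events traced from a target application. The feature set, labels and numbers here are hypothetical; the abstract does not specify them.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-event features extracted from MPI traces, e.g.
# [message size (bytes), time spent blocked (us), rank distance].
# Labels come from microbenchmarks that deliberately exhibit efficient ("ok")
# or inefficient ("late_sender", "late_receiver") communication.
train_features = np.array([
    [1024,    5, 1],
    [1024,  900, 1],
    [65536,  40, 2],
    [65536, 2500, 2],
    [256,     2, 4],
    [256,  1200, 4],
])
train_labels = ["ok", "late_sender", "ok", "late_receiver", "ok", "late_sender"]

# Train on the microbenchmark data, then classify events recorded
# from the target application.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(train_features, train_labels)

application_events = np.array([[2048, 1500, 1], [4096, 10, 3]])
print(tree.predict(application_events))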
Method for observing phase objects without halos and directional shadows
NASA Astrophysics Data System (ADS)
Suzuki, Yoshimasa; Kajitani, Kazuo; Ohde, Hisashi
2015-03-01
A new microscopy method for observing phase objects without halos and directional shadows is proposed. The key optical element is an annular aperture at the front focal plane of a condenser, with a larger diameter than those used in standard phase contrast microscopy. The light flux passing through the annular aperture is changed by the specimen's surface profile and then passes through an objective and contributes to image formation. This paper presents the essential conditions for realizing the method. Images of colonies formed by induced pluripotent stem (iPS) cells taken with this method are compared with those from the conventional phase contrast method and the bright-field method under small illumination NA, to identify differences among these techniques. The outlines of the iPS cells are clearly visible with this method, whereas they are not clearly visible due to halos when using the phase contrast method or due to weak contrast when using the bright-field method. Other images obtained using this method are also presented to demonstrate its capabilities: a mouse ovum and a superimposition of several different images of mouse iPS cells.
Phase Response Design of Recursive All-Pass Digital Filters Using a Modified PSO Algorithm
2015-01-01
This paper develops a new design scheme for the phase response of an all-pass recursive digital filter. A variant of the particle swarm optimization (PSO) algorithm is utilized for solving this kind of filter design problem. It is here called the modified PSO (MPSO) algorithm, in which an additional adjusting factor is introduced into the velocity-updating formula in order to improve the searching ability. In the proposed method, all of the designed filter coefficients are first collected into a parameter vector, and this vector is regarded as a particle of the algorithm. The MPSO with the modified velocity formula forces all particles to move toward the optimal or near-optimal solution by minimizing a defined objective function of the optimization problem. To show the effectiveness of the proposed method, two different kinds of linear phase response design examples are illustrated, and the general PSO algorithm is compared as well. The obtained results show that the MPSO is superior to the general PSO for the phase response design of digital recursive all-pass filters. PMID:26366168
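A sketch of an MPSO-style update in Python. The abstract does not give the exact form of the additional adjusting factor, so the extra velocity term below (a pull toward the mean of the personal bests, weight c3) is an assumption used only to show where such a factor enters the velocity formula; the toy objective fits the phase of a first-order all-pass section to a linear target.

import numpy as np

rng = np.random.default_rng(0)

def mpso_minimize(objective, dim, n_particles=30, iterations=200,
                  w=0.7, c1=1.5, c2=1.5, c3=0.5):
    """PSO with one extra velocity term (weight c3) standing in for the
    'additional adjusting factor'; its exact published form is not known here."""
    x = rng.uniform(-1, 1, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iterations):
        r1, r2, r3 = rng.random((3, n_particles, dim))
        mean_best = pbest.mean(axis=0)
        v = (w * v
             + c1 * r1 * (pbest - x)
             + c2 * r2 * (gbest - x)
             + c3 * r3 * (mean_best - x))      # assumed extra adjusting term
        x = x + v
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

# Toy use: choose the coefficient of a first-order all-pass so its phase
# response tracks a linear target at a few frequencies.
freqs = np.linspace(0.1, np.pi - 0.1, 32)
target_phase = -1.3 * freqs

def phase_error(coeffs):
    a = coeffs[0]
    # H(e^{jw}) = (a + e^{-jw}) / (1 + a e^{-jw}) for a first-order all-pass.
    h = (a + np.exp(-1j * freqs)) / (1 + a * np.exp(-1j * freqs))
    return np.sum((np.unwrap(np.angle(h)) - target_phase) ** 2)

print(mpso_minimize(phase_error, dim=1))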
Backus, S.; Kapteyn, H.C.; Murnane, M.M.
1997-07-01
Laser amplifiers and methods for amplifying a laser beam are disclosed. A representative embodiment of the amplifier comprises first and second curved mirrors, a gain medium, a third mirror, and a mask. The gain medium is situated between the first and second curved mirrors at the focal point of each curved mirror. The first curved mirror directs and focuses a laser beam to pass through the gain medium to the second curved mirror which reflects and recollimates the laser beam. The gain medium amplifies and shapes the laser beam as the laser beam passes therethrough. The third mirror reflects the laser beam, reflected from the second curved mirror, so that the laser beam bypasses the gain medium and returns to the first curved mirror, thereby completing a cycle of a ring traversed by the laser beam. The mask defines at least one beam-clipping aperture through which the laser beam passes during a cycle. The gain medium is pumped, preferably using a suitable pumping laser. The laser amplifier can be used to increase the energy of continuous-wave or, especially, pulsed laser beams including pulses of femtosecond duration and relatively high pulse rate. 7 figs.
Backus, Sterling; Kapteyn, Henry C.; Murnane, Margaret M.
1997-01-01
Laser amplifiers and methods for amplifying a laser beam are disclosed. A representative embodiment of the amplifier comprises first and second curved mirrors, a gain medium, a third mirror, and a mask. The gain medium is situated between the first and second curved mirrors at the focal point of each curved mirror. The first curved mirror directs and focuses a laser beam to pass through the gain medium to the second curved mirror which reflects and recollimates the laser beam. The gain medium amplifies and shapes the laser beam as the laser beam passes therethrough. The third mirror reflects the laser beam, reflected from the second curved mirror, so that the laser beam bypasses the gain medium and returns to the first curved mirror, thereby completing a cycle of a ring traversed by the laser beam. The mask defines at least one beam-clipping aperture through which the laser beam passes during a cycle. The gain medium is pumped, preferably using a suitable pumping laser. The laser amplifier can be used to increase the energy of continuous-wave or, especially, pulsed laser beams including pulses of femtosecond duration and relatively high pulse rate.
Saleh Ardestani, Abbas; Sarabi Asiabar, Ali; Ebadifard Azar, Farbod; Abtahi, Seyyed Ali
2016-01-01
Background: Effective leadership that arises from managerial training courses is highly constructive in managing hospitals more effectively. This study aims at investigating the relationship between leadership effectiveness and the provision of management training courses for hospital managers. Methods: This was a cross-sectional study carried out on top and middle managers of 16 hospitals of Iran University of Medical Sciences. As a sample, 96 participants were selected through the census method. Data were collected using a leadership effectiveness and style questionnaire, whose validity and reliability were certified in previous studies. Pearson correlation coefficients and linear regressions were used for data analysis. Results: The leadership effectiveness score was estimated to be 4.36, showing a suitable status of managers' leadership effectiveness compared to the set criteria. No significant difference was found in leadership effectiveness and styles between managers who had passed the training courses and those who had not (p>0.05). Conclusion: Passing managerial training courses may have no significant effect on managers' leadership effectiveness, but there may be some other variables which should be meticulously studied.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, X
Purpose: To explore a real-time dose verification method in volumetric modulated arc radiotherapy (VMAT) with a 2D ion chamber array. Methods: The 2D ion chamber array was fixed on the panel of the electronic portal imaging device (EPID). The source-detector distance (SDD) was 140 cm. 8 mm of RW3 solid water was added to the detector panel to achieve maximum readings. Patient plans for esophageal, prostate and liver cancers were selected and delivered on the cylindrical Cheese phantom 5 times in order to validate the reproducibility of doses. Real-time patient transit dose measurements were performed at each fraction. Dose distributions were evaluated using gamma index criteria of 3 mm DTA and 3% dose difference, referenced to the first-time result. Results: The gamma index pass rates in the Cheese phantom were about 98%; the gamma index pass rates for esophageal, liver and prostate cancer patients were about 92%, 94%, and 92%, respectively; gamma pass rates for all single fractions were more than 90%. Conclusion: The 2D array is capable of monitoring real-time transit doses during VMAT delivery. It is helpful for improving treatment accuracy.
Three-pass protocol scheme for bitmap image security by using vernam cipher algorithm
NASA Astrophysics Data System (ADS)
Rachmawati, D.; Budiman, M. A.; Aulya, L.
2018-02-01
Confidentiality, integrity, and efficiency are crucial aspects of data security. Among digital data, image data are particularly prone to abuse, such as duplication, modification, etc. There are several data security techniques, one of which is cryptography. The security of the Vernam cipher cryptography algorithm depends heavily on the key exchange process: if the key is leaked, the security of this algorithm collapses. Therefore, a method that minimizes key leakage during the exchange of messages is required. The method used here is known as the Three-Pass Protocol. This protocol enables the message delivery process without a key exchange, so messages can reach the receiver safely without fear of key leakage. The system is built using the Java programming language. The materials used for system testing are images of size 200×200, 300×300, 500×500, 800×800 and 1000×1000 pixels. The experimental results showed that the Vernam cipher algorithm in the Three-Pass Protocol scheme could restore the original image.
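A short Python sketch of the message flow: each party XORs with its own Vernam-style key, and the three passes let the receiver recover the image bytes without any key ever being transmitted. This only illustrates the protocol mechanics described above (on made-up byte data), not a security argument or the authors' Java implementation.

import numpy as np

def xor_bytes(data, key):
    """Vernam-style encryption/decryption: bitwise XOR with an equal-length key."""
    return np.bitwise_xor(data, key)

rng = np.random.default_rng(42)

# Stand-in for bitmap pixel data (e.g. a flattened grayscale image).
plaintext = rng.integers(0, 256, size=16, dtype=np.uint8)

# Each party keeps its own key; no key is ever sent over the channel.
key_a = rng.integers(0, 256, size=plaintext.size, dtype=np.uint8)
key_b = rng.integers(0, 256, size=plaintext.size, dtype=np.uint8)

pass1 = xor_bytes(plaintext, key_a)   # Pass 1: sender encrypts with key A
pass2 = xor_bytes(pass1, key_b)       # Pass 2: receiver super-encrypts with key B
pass3 = xor_bytes(pass2, key_a)       # Pass 3: sender removes key A
recovered = xor_bytes(pass3, key_b)   # Receiver removes key B, recovering the image

assert np.array_equal(recovered, plaintext)
print("image bytes recovered without exchanging keys")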
Passing Messages between Biological Networks to Refine Predicted Interactions
Glass, Kimberly; Huttenhower, Curtis; Quackenbush, John; Yuan, Guo-Cheng
2013-01-01
Regulatory network reconstruction is a fundamental problem in computational biology. There are significant limitations to such reconstruction using individual datasets, and increasingly people attempt to construct networks using multiple, independent datasets obtained from complementary sources, but methods for this integration are lacking. We developed PANDA (Passing Attributes between Networks for Data Assimilation), a message-passing model using multiple sources of information to predict regulatory relationships, and used it to integrate protein-protein interaction, gene expression, and sequence motif data to reconstruct genome-wide, condition-specific regulatory networks in yeast as a model. The resulting networks were not only more accurate than those produced using individual data sets and other existing methods, but they also captured information regarding specific biological mechanisms and pathways that were missed using other methodologies. PANDA is scalable to higher eukaryotes, applicable to specific tissue or cell type data and conceptually generalizable to include a variety of regulatory, interaction, expression, and other genome-scale data. An implementation of the PANDA algorithm is available at www.sourceforge.net/projects/panda-net. PMID:23741402
Transpiration Cooling Experiment
NASA Technical Reports Server (NTRS)
Song, Kyo D.; Ries, Heidi R.; Scotti, Stephen J.; Choi, Sang H.
1997-01-01
The transpiration cooling method was considered for a scram-jet engine to thermally accommodate the situation in which a very high heat flux (200 Btu/sq ft·sec) from the hydrogen fuel combustion process is imposed on the engine walls. In a scram-jet engine, a small portion of the hydrogen fuel passes through the porous walls of the engine combustor to cool the engine walls, while the rest passes along the combustion chamber walls and is preheated. Such a regenerative system promises simultaneous cooling of the engine combustor and preheating of the cryogenic fuel. In the experiment, an optical heating method was used to provide a heat flux of 200 Btu/sq ft·sec to the cylindrical surface of a porous stainless steel specimen which carried helium gas. The cooling efficiencies achieved by transpiration were studied for specimens with various porosities. The experiments on various test specimens under high heat flux revealed a phenomenon that chokes the flow of the medium as it passes through a porous structure. This research includes the analysis of the system and a scaling conversion study that interprets the results from helium to the case when a hydrogen medium is used.
Selection of optimal welding condition for GTA pulse welding in root-pass of V-groove butt joint
NASA Astrophysics Data System (ADS)
Yun, Seok-Chul; Kim, Jae-Woong
2010-12-01
In the manufacture of high-quality welds or pipelines, a full-penetration weld has to be made along the weld joint. Therefore, root-pass welding is very important, and its conditions have to be selected carefully. In this study, an experimental method for the selection of optimal welding conditions is proposed for gas tungsten arc (GTA) pulse welding of the root pass, which is performed along a V-grooved butt-weld joint. This method uses response surface analysis, in which the width and height of the back bead are chosen as quality variables of the weld. The overall desirability function, which combines the desirability functions for the two quality variables, is used as the objective function to obtain the optimal welding conditions. In our experiments, the target values of back-bead width and height are 4 mm and zero, respectively, for a V-grooved butt-weld joint of a 7-mm-thick steel plate. The optimal welding conditions yielded a back-bead profile (width and height) of 4.012 mm and 0.02 mm. A series of welding tests showed that a uniform, full-penetration weld bead can be obtained by adopting the optimal welding conditions determined by the proposed method.
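A small Python sketch of the desirability-function step: target-is-best desirability for back-bead width (target 4 mm) and height (target 0), combined into an overall desirability and maximized over a response surface. The response-surface model, parameter ranges and tolerances below are placeholders for illustration, not the fitted models or welding parameters from the study.

import numpy as np

def desirability_target(value, target, tolerance):
    """Target-is-best desirability: 1 at the target, falling linearly to 0 at
    +/- tolerance (the tolerances here are assumptions)."""
    return np.clip(1.0 - np.abs(value - target) / tolerance, 0.0, 1.0)

def back_bead_model(current, speed):
    """Placeholder response surface: in the study these would be polynomials
    fitted to the experiments, predicting back-bead width and height from the
    pulse-welding parameters."""
    width = 2.0 + 0.02 * current - 0.01 * speed
    height = -0.5 + 0.004 * current + 0.002 * speed
    return width, height

best = None
for current in np.linspace(100, 200, 101):        # A (assumed range)
    for speed in np.linspace(10, 30, 81):          # cm/min (assumed range)
        width, height = back_bead_model(current, speed)
        d_width = desirability_target(width, target=4.0, tolerance=1.0)
        d_height = desirability_target(height, target=0.0, tolerance=0.5)
        overall = np.sqrt(d_width * d_height)      # geometric mean of desirabilities
        if best is None or overall > best[0]:
            best = (overall, current, speed, width, height)

print("overall desirability %.3f at current %.1f A, speed %.1f cm/min" % best[:3])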
Oud, Lavi; Watkins, Phillip
2015-01-01
Background: Infections are a well-known complication of pregnancy. However, pregnancy-associated severe sepsis (PASS) has not been as well-characterized, with limited population-level data reported to date. We performed a population-based study of the evolving patterns of the epidemiology, clinical characteristics, resource utilization, and outcomes of PASS in Texas over the past decade. Methods: The Texas Inpatient Public Use Data File was used to identify pregnancy-associated hospitalizations and PASS hospitalizations for the years 2001 - 2010. The Texas Center for Health Statistics reports of live births, abortions and fetal deaths, and a previously reported population-based, age-specific linkage study on miscarriage were used to derive the annual total estimated pregnancies (TEPs). The incidence, demographics, clinical characteristics, resource utilization and outcomes of PASS were examined. Logistic regression modeling was used to explore the predictors of PASS and its associated mortality. Results: There were 4,060,201 pregnancy-associated hospitalizations and 1,007 PASS hospitalizations during the study period. The incidence of PASS increased by 236% over the past decade, rising from 11 to 26 hospitalizations per 100,000 TEPs. The key changes between 2001 - 2002 and 2009 - 2010 within PASS hospitalizations included: admission to ICU 78% vs. 90% (P = 0.002); development of ≥ 3 organ failures 9% vs. 35% (P < 0.0001); and inflation-adjusted median hospital charges (2010 dollars) $64,034 vs. $89,895 (P = 0.0141). Hospital mortality (11%) remained unchanged during the study period. Chronic liver disease (adjusted odds ratio (aOR) 41.4) and congestive heart failure (CHF) (aOR 20.5) were associated with the highest risk of PASS, in addition to black race, poverty, drug abuse, and lack of health insurance. The highest risk of death was among women with HIV infection (aOR 45.5), need for mechanical ventilation (aOR 4.5), drug abuse (aOR 3.0), and lack of health insurance (aOR 2.9). Conclusions: The incidence, severity, and fiscal burden of PASS rose substantially over the past decade. Case fatality was lower than that for severe sepsis in the general population. Chronic liver disease and CHF pose an especially high risk of PASS. Pregnant women with a history of drug abuse and those lacking health insurance are at high risk of both developing and dying with PASS, requiring extra vigilance for early diagnosis and targeted intervention. PMID:25883702
Systems and methods for generation of hydrogen peroxide vapor
Love, Adam H; Eckels, Joel Del; Vu, Alexander K; Alcaraz, Armando; Reynolds, John G
2014-12-02
A system according to one embodiment includes a moisture trap for drying air; at least one of a first container and a second container; and a mechanism for at least one of: bubbling dried air from the moisture trap through a hydrogen peroxide solution in the first container for producing a hydrogen peroxide vapor, and passing dried air from the moisture trap into a headspace above a hydrogen peroxide solution in the second container for producing a hydrogen peroxide vapor. A method according to one embodiment includes at least one of bubbling dried air through a hydrogen peroxide solution in a container for producing a first hydrogen peroxide vapor, and passing dried air from the moisture trap into a headspace above the hydrogen peroxide solution in a container for producing a second hydrogen peroxide vapor. Additional systems and methods are also presented.
A method of calculating the ultimate strength of continuous beams
NASA Technical Reports Server (NTRS)
Newlin, J A; Trayer, George W
1931-01-01
The purpose of this study was to investigate the strength of continuous beams after the elastic limit has been passed. As a result, a method of calculation, which is applicable to maximum load conditions, has been developed. The method is simpler than the methods now in use and it applies properly to conditions where the present methods fail to apply.
Purge gas protected transportable pressurized fuel cell modules and their operation in a power plant
Zafred, Paolo R.; Dederer, Jeffrey T.; Gillett, James E.; Basel, Richard A.; Antenucci, Annette B.
1996-01-01
A fuel cell generator apparatus and method of its operation involves: passing pressurized oxidant gas, (O), and pressurized fuel gas, (F), into fuel cell modules, (10 and 12), containing fuel cells, where the modules are each enclosed by a module housing (18), surrounded by an axially elongated pressure vessel (64), where there is a purge gas volume, (62), between the module housing and pressure vessel; passing pressurized purge gas, (P), through the purge gas volume, (62), to dilute any unreacted fuel gas from the modules; and passing exhaust gas, (82), and circulated purge gas and any unreacted fuel gas out of the pressure vessel; where the fuel cell generator apparatus is transportable when the pressure vessel (64) is horizontally disposed, providing a low center of gravity.
Aluminum U-groove weld enhancement based on experimental stress analyses
NASA Technical Reports Server (NTRS)
Verderaime, V.; Vaughan, R.
1995-01-01
Though butt-welds are among the most preferred joining methods in aerostructures because of their sealing and assembly integrity and general elastic performance, their inelastic mechanics are generally the least understood. This study investigated experimental strain distributions across a thick aluminum U-grooved weld and identified two weld-process considerations for improving multipass weld strength. The extreme thermal expansion and contraction gradient of the fusion heat input across the tab thickness between the grooves produces severe peaking, which induces a bending moment under uniaxial loading. The filler strain hardening decreased with increasing filler-pass sequence. These combined effects reduce the weld strength, and a depeaking index model was developed to select filler-pass thicknesses, pass numbers, and sequences to improve the welding process results over the current normal weld schedule.
The flight test of Pi-SAR(L) for the repeat-pass interferometric SAR
NASA Astrophysics Data System (ADS)
Nohmi, Hitoshi; Shimada, Masanobu; Miyawaki, Masanori
2006-09-01
This paper describes a repeat-pass interferometric SAR experiment using Pi-SAR(L). Air-borne repeat-pass interferometric SAR is expected to be an effective method for detecting landslides or predicting volcanic eruptions. To obtain a high-quality interferometric image, it is necessary to make two flights on the same flight pass. In addition, since the antenna of the Pi-SAR(L) is fixed to the aircraft, it is necessary to fly at the same drift angle to keep the observation direction the same. We built a flight control system using the autopilot installed in the airplane. This navigation system measures position and altitude precisely using differential GPS, and the PC navigator outputs the deviation from the desired course to the autopilot. Since the air density is lower and the speed is higher than in the landing situation, the gain of the control system has to be adjusted during the repeat-pass flight. The observation direction could be controlled to some extent by adjusting the drift angle using flight speed control. The repeat-pass flights were conducted in Japan over three days in late November. The flights were stable, and the deviation was within a few meters in both the horizontal and vertical directions, even in gusty conditions. The SAR data were processed in the time domain based on the range-Doppler algorithm to achieve complete motion compensation. The interferometric image processed after precise phase compensation is shown.
Factors Associated with First-Pass Success in Pediatric Intubation in the Emergency Department.
Goto, Tadahiro; Gibo, Koichiro; Hagiwara, Yusuke; Okubo, Masashi; Brown, David F M; Brown, Calvin A; Hasegawa, Kohei
2016-03-01
The objective of this study was to investigate the factors associated with first-pass success in pediatric intubation in the emergency department (ED). We analyzed the data from two multicenter prospective studies of ED intubation in 17 EDs between April 2010 and September 2014. The studies prospectively measured each patient's age, sex, principal indication for intubation, methods (e.g., rapid sequence intubation [RSI]), devices, and the intubator's level of training and specialty. To evaluate independent predictors of first-pass success, we fit a logistic regression model with generalized estimating equations. In a sensitivity analysis, we repeated the analysis in children <10 years. A total of 293 children aged ≤18 years who underwent ED intubation were eligible for the analysis. The overall first-pass success rate was 60% (95%CI [54%-66%]). In the multivariable model, age ≥10 years (adjusted odds ratio [aOR], 2.45; 95% CI [1.23-4.87]), use of RSI (aOR, 2.17; 95% CI [1.31-3.57]), and intubation attempt by an emergency physician (aOR, 3.21; 95% CI [1.78-5.83]) were significantly associated with a higher chance of first-pass success. Likewise, in the sensitivity analysis, the use of RSI (aOR, 3.05; 95% CI [1.63-5.70]) and intubation attempt by an emergency physician (aOR, 4.08; 95% CI [1.92-8.63]) were significantly associated with a higher chance of first-pass success. Based on two large multicenter prospective studies of ED airway management, we found that older age, use of RSI, and intubation by emergency physicians were independent predictors of a higher chance of first-pass success in children. Our findings should facilitate investigations to develop optimal airway management strategies in critically ill children in the ED.
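As an illustration of the analysis described above, a Python sketch using statsmodels: a binomial GEE with an exchangeable correlation structure over emergency departments, with exponentiated coefficients read as adjusted odds ratios. The data frame and variable names are synthetic stand-ins, not the study data.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for the registry: one row per intubation attempt,
# clustered by emergency department. Variable names are hypothetical.
rng = np.random.default_rng(7)
n = 300
df = pd.DataFrame({
    "ed_site": rng.integers(0, 17, n),                  # clustering unit
    "age_ge10": rng.integers(0, 2, n),
    "rsi": rng.integers(0, 2, n),
    "emergency_physician": rng.integers(0, 2, n),
})
logit = -0.5 + 0.9 * df.age_ge10 + 0.8 * df.rsi + 1.2 * df.emergency_physician
df["first_pass_success"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Logistic model with generalized estimating equations and an exchangeable
# within-site correlation structure, mirroring the analysis described above.
model = sm.GEE.from_formula(
    "first_pass_success ~ age_ge10 + rsi + emergency_physician",
    groups="ed_site", data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(np.exp(result.params))   # adjusted odds ratios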
Vilaseca, Meritxell; Arjona, Montserrat; Pujol, Jaume; Peris, Elvira; Martínez, Vanessa
2013-01-01
AIM: To evaluate the accuracy of spherical equivalent (SE) estimates of a double-pass system and to compare it with retinoscopy, subjective refraction and a table-mounted autorefractor. METHODS: Non-cycloplegic refraction was performed on 125 eyes of 65 healthy adults (age 23.5±3.0 years) from October 2010 to January 2011 using retinoscopy, subjective refraction, autorefraction (Auto kerato-refractometer TOPCON KR-8100, Japan) and a double-pass system (Optical Quality Analysis System, OQAS, Visiometrics S.L., Spain). Nine consecutive measurements with the double-pass system were performed on a subgroup of 22 eyes to assess repeatability. To evaluate the trueness of the OQAS instrument, the SE laboratory bias between the double-pass system and the other techniques was calculated. RESULTS: The mean SE coefficient of repeatability obtained was 0.22D. Significant correlations were found between the OQAS SE and the SE obtained with retinoscopy (r=0.956, P<0.001), subjective refraction (r=0.955, P<0.001) and autorefraction (r=0.957, P<0.001). The differences in SE between the double-pass system and the other techniques were statistically significant (P<0.001) but lacked clinical relevance, except for retinoscopy. Retinoscopy gave more hyperopic values than the double-pass system (-0.51±0.50D), as did subjective refraction (-0.23±0.50D), whereas more myopic values were obtained with autorefraction (0.24±0.49D). CONCLUSION: The double-pass system provides accurate and reliable estimates of the SE that can be used for clinical studies. This technique can determine the correct focus position for assessing ocular optical quality. However, it has a relatively small measuring range in comparison with autorefractors (-8.00 to +5.00D), and requires prior information on the refractive state of the patient. PMID:24195036
WE-D-BRA-06: IMRT QA with ArcCHECK: The MD Anderson Experience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aristophanous, M; Suh, Y; Chi, P
Purpose: The objective of this project was to report our initial IMRT QA results and experience with the Sun Nuclear ArcCHECK. Methods: Three thousand one hundred and sixteen cases were treated with IMRT or VMAT at our institution between October 2013 and September 2014. All IMRT/VMAT treatment plans underwent quality assurance (QA) using ArcCHECK prior to therapy. For clinical evaluation, a Gamma analysis is performed following QA delivery using the SNC Patient software (Sun Nuclear Corp) at the 3%/3mm level. QA Gamma pass rates were analyzed by treatment site, technique, and type of MLCs. Our current clinical threshold for passing a QA (Tclin) is set at a Gamma pass rate greater than 90%. We recorded the percent of failures for each category, as well as the Gamma pass rate threshold that would result in 95% of QAs passing (T95). Results: Using Tclin, a failure rate of 5.9% over all QAs was observed. The highest failure rate was observed for gynecological (22%) and the lowest for CNS (0.9%) treatments. T95 was 91% over all QAs and ranged from 73% (gynecological) to 96.5% (CNS) for individual treatment sites. T95 was lower for IMRT and non-HD (high definition) MLCs, at 88.5% and 94.5%, respectively, compared to 92.4% and 97.1% for VMAT and HD MLC treatments, respectively. There was a statistically significant difference between the passing rates for IMRT vs. VMAT and for HD MLCs vs. non-HD MLCs (p-values << 0.01). Gynecological, IMRT, and HD MLC treatments typically include more plans with larger field sizes. Conclusion: On average, Tclin with ArcCHECK was consistent with T95, as well as with the 90% action level reported in TG-119. However, significant variations between the examined categories suggest a link between field size and QA passing rates and may warrant field-size-specific passing rate thresholds.
Visual and somatic sensory feedback of brain activity for intuitive surgical robot manipulation.
Miura, Satoshi; Matsumoto, Yuya; Kobayashi, Yo; Kawamura, Kazuya; Nakashima, Yasutaka; Fujie, Masakatsu G
2015-01-01
This paper presents a method to evaluate the hand-eye coordination of a master-slave surgical robot by measuring the activation of the intraparietal sulcus in the user's brain activity while controlling a virtual manipulator. The objective is to examine the changes in activity of the intraparietal sulcus when the user's visual or somatic feedback is passed through or intercepted. The hypothesis is that the intraparietal sulcus activates significantly when both visual and somatic feedback are passed, but deactivates when either is intercepted. The brain activity of three subjects was measured by functional near-infrared spectroscopic-topography brain imaging while they used a hand controller to move the virtual arm of a surgical simulator. The experiment was performed several times under three conditions: (i) the user controlled the virtual arm naturally, with both visual and somatic feedback passed; (ii) the user moved with closed eyes, with only somatic feedback passed; (iii) the user only gazed at the screen, with only visual feedback passed. Brain activity was significantly higher when controlling the virtual arm naturally (p<0.05) than when moving with closed eyes or only gazing, among all participants. In conclusion, the brain can activate according to the agreement of visual and somatic sensory feedback.
Apparatus and method for critical current measurements
Martin, Joe A.; Dye, Robert C.
1992-01-01
An apparatus for the measurement of the critical current of a superconductive sample, e.g., a clad superconductive sample, the apparatus including a conductive coil, a means for maintaining the coil in proximity to a superconductive sample, an electrical connection means for passing a low amplitude alternating current through the coil, a cooling means for maintaining the superconductive sample at a preselected temperature, a means for passing a current through the superconductive sample, and, a means for monitoring reactance of the coil, is disclosed, together with a process of measuring the critical current of a superconductive material, e.g., a clad superconductive material, by placing a superconductive material into the vicinity of the conductive coil of such an apparatus, cooling the superconductive material to a preselected temperature, passing a low amplitude alternating current through the coil, the alternating current capable of generating a magnetic field sufficient to penetrate, e.g., any cladding, and to induce eddy currents in the superconductive material, passing a steadily increasing current through the superconductive material, the current characterized as having a different frequency than the alternating current, and, monitoring the reactance of the coil with a phase sensitive detector as the current passed through the superconductive material is steadily increased whereby critical current of the superconductive material can be observed as the point whereat a component of impedance deviates.
Simulation-Based Training for Colonoscopy
Preisler, Louise; Svendsen, Morten Bo Søndergaard; Nerup, Nikolaj; Svendsen, Lars Bo; Konge, Lars
2015-01-01
The aim of this study was to create simulation-based tests with credible pass/fail standards for 2 different fidelities of colonoscopy models. Only competent practitioners should perform colonoscopy. Reliable and valid simulation-based tests could be used to establish basic competency in colonoscopy before practicing on patients. Twenty-five physicians (10 consultants with endoscopic experience and 15 fellows with very little endoscopic experience) were tested on 2 different simulator models: a virtual-reality simulator and a physical model. Tests were repeated twice on each simulator model. Metrics with discriminatory ability were identified for both modalities and reliability was determined. The contrasting-groups method was used to create pass/fail standards and the consequences of these were explored. The consultants performed significantly faster and scored significantly higher than the fellows on both models (P < 0.001). Reliability analysis showed Cronbach α = 0.80 and 0.87 for the virtual-reality and the physical model, respectively. The established pass/fail standards failed one of the consultants (virtual-reality simulator) and allowed one fellow to pass (physical model). The 2 tested simulation-based modalities provided reliable and valid assessments of competence in colonoscopy, and credible pass/fail standards were established for both tests. We propose to use these standards in simulation-based training programs before proceeding to supervised training on patients. PMID:25634177
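A common way to implement the contrasting-groups standard setting mentioned above is to fit the score distributions of the two groups and place the cutoff where the fitted curves cross. The Python sketch below does this with normal fits and hypothetical scores; it is not the exact procedure or data used in the study.

import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

# Hypothetical simulator scores for the two contrasting groups.
novice_scores = np.array([42, 48, 51, 55, 57, 60, 62, 63, 65, 66])
expert_scores = np.array([68, 71, 74, 75, 78, 80, 82, 85, 88, 90])

def density_difference(score):
    """Difference between the fitted normal densities of the two groups."""
    return (norm.pdf(score, novice_scores.mean(), novice_scores.std(ddof=1))
            - norm.pdf(score, expert_scores.mean(), expert_scores.std(ddof=1)))

# The pass/fail standard is taken where the two density curves cross,
# somewhere between the two group means.
cutoff = brentq(density_difference, novice_scores.mean(), expert_scores.mean())
print("pass/fail cutoff:", round(cutoff, 1))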
SU-F-T-236: Comparison of Two IMRT/VMAT QA Systems Using Gamma Index Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dogan, N; Denissova, S
2016-06-15
Purpose: The goal of this study is to assess differences in Gamma index pass rates when using two commercial QA systems and to provide optimum Gamma index parameters for pre-treatment patient-specific QA. Methods: Twenty-two VMAT cases, consisting of prostate, lung, head and neck, spine, brain and pancreas, were included in this study. The verification plans were calculated using the AcurosXB (V11) algorithm for different dose grids (1.5 mm, 2.5 mm, 3 mm). The measurements were performed on a TrueBeam (Varian) accelerator using both the EPID (S1000) portal imager and ArcCheck (SunNuclearCorp) devices. Gamma index criteria of 3%/3mm, 2%/3mm, and 2%/2mm and threshold (TH) doses of 5% to 50% were used in the analysis. Results: The differences in Gamma pass rates between the two devices are not statistically significant for 3%/3mm, yielding pass rates higher than 95%. Increasing the lower dose TH reduced pass rates for both devices; ArcCheck's more pronounced effect can be attributed to a larger contribution from the spread of the low-dose region. As expected, tightening the criteria to 2%/2mm (TH: 10%) decreased Gamma pass rates below 95%. EPID pass rates (92%) were higher than ArcCheck pass rates (86%), probably due to better spatial resolution. Portal Dosimetry results showed lower Gamma pass rates for composite plans compared to individual field pass rates. This may be due to the expansion of the analyzed region, which includes pixels not included in the separate-field analysis. Decreasing the dose grid size from 2.5 mm to 1.5 mm did not show statistically significant (p<0.05) differences in Gamma pass rates for either QA device. Conclusion: Overall, measurements from both systems agree well with the calculated dose when using a gamma index criterion of 3%/3mm for a variety of VMAT cases. Variability between the two systems increases with different dose grids, THs and tighter gamma criteria, and must be carefully assessed prior to clinical use.
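For reference, a brute-force Python sketch of a global 2-D gamma analysis of the 3%/3 mm kind referred to above. It is simplified (no interpolation of the evaluated dose grid, dose criterion relative to the reference maximum) and is not the SNC Patient or Portal Dosimetry implementation; the example dose planes are synthetic.

import numpy as np

def gamma_pass_rate(reference, measured, spacing_mm=1.0,
                    dose_crit=0.03, dist_crit_mm=3.0, threshold=0.10):
    """Global gamma pass rate (%) for two dose planes on the same grid."""
    ref = np.asarray(reference, float)
    mea = np.asarray(measured, float)
    dose_tol = dose_crit * ref.max()
    ys, xs = np.mgrid[:ref.shape[0], :ref.shape[1]]
    passed, total = 0, 0
    for i in range(ref.shape[0]):
        for j in range(ref.shape[1]):
            if ref[i, j] < threshold * ref.max():
                continue                      # below the low-dose threshold
            dist2 = ((ys - i) ** 2 + (xs - j) ** 2) * spacing_mm ** 2
            dose2 = (mea - ref[i, j]) ** 2
            gamma2 = dist2 / dist_crit_mm ** 2 + dose2 / dose_tol ** 2
            passed += gamma2.min() <= 1.0     # minimum over all search points
            total += 1
    return 100.0 * passed / total

ref = np.fromfunction(lambda y, x: np.exp(-((x - 20) ** 2 + (y - 20) ** 2) / 200.0), (40, 40))
mea = 1.02 * ref                               # 2% global dose difference
print("gamma pass rate: %.1f%%" % gamma_pass_rate(ref, mea))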
Gustavsson, Catharina; von Koch, Lena
2017-01-01
Background and objective In previous short-term and 2-year follow-ups, a pain and stress self-management group intervention (PASS) had better effect on pain-related disability, self-efficacy, catastrophizing, and perceived pain control than individually administered physiotherapy (IAPT) for patients with persistent tension-type neck pain. Studies that have evaluated long-term effects of self-management approaches toward persistent neck pain are sparse. The objective of this study was to compare pain-related disability, self-efficacy for activities of daily living (ADL), catastrophizing, pain, pain control, use of analgesics, and health care utilization in people with persistent tension-type neck pain 9 years after they received the PASS or IAPT. Materials and methods Of 156 people (PASS, n = 77; IAPT, n = 79) originally included in a randomized controlled trial, 129 people (PASS, n = 63; IAPT, n = 66) were eligible and were approached for the 9-year follow-up. They were sent a self-assessment questionnaire, comprising the Neck Disability Index, the Self-Efficacy Scale, the Coping Strategies Questionnaire, and questions regarding pain, analgesics, and health care utilization. Mixed linear models for repeated measures analysis or generalized estimating equations were used to evaluate the differences between groups and within groups over time (baseline, previous follow-ups, and 9-year follow-up) and the interaction effect of “time by group”. Results Ninety-four participants (73%) responded (PASS, n = 48; IAPT, n = 46). At 9 years, PASS participants reported less pain-related disability, pain at worst, and analgesics usage, and a trend toward better self-efficacy compared to IAPT participants. There was a difference between groups in terms of change over time for disability, self-efficacy for ADL, catastrophizing, perceived pain control, and health care visits in favor of PASS. Analyses of simple main effects at 9 years showed that the PASS group had less disability (p = 0.006) and a trend toward better self-efficacy (p = 0.059) than the IAPT group. Conclusion The favorable effects on pain-related disability of PASS were sustained 9 years after the intervention. PMID:28115865
Integration of Biomass Harvesting and Site Preparation
Bryce J. Stokes; William F. Watson
1986-01-01
This study was conducted to assess the costs of various site preparation methods combined with various levels of harvesting. Site impacts, soil compaction and disturbance were examined. Three harvesting methods were evaluated in a pine pulpwood plantation and pine sawtimber stands. The harvesting methods tested were (1) conventional - harvesting all roundwood, (2) two-pass - first...
Method and apparatus for PM filter regeneration
Opris, Cornelius N [Peoria, IL; Verkiel, Maarten [Metamora, IL
2006-01-03
A method and apparatus for initiating regeneration of a particulate matter (PM) filter in an exhaust system in an internal combustion engine. The method and apparatus includes determining a change in pressure of exhaust gases passing through the PM filter, and responsively varying an opening of an intake valve in fluid communication with a combustion chamber.
Filter desulfation system and method
Lowe, Michael D.; Robel, Wade J.; Verkiel, Maarten; Driscoll, James J.
2010-08-10
A method of removing sulfur from a filter system of an engine includes continuously passing an exhaust flow through a desulfation leg of the filter system during desulfation. The method also includes sensing at least one characteristic of the exhaust flow and modifying a flow rate of the exhaust flow during desulfation in response to the sensing.
Method for monitoring stack gases for uranium activity
Beverly, C.R.; Ernstberger, E.G.
1985-07-03
A method for monitoring the stack gases of a purge cascade of gaseous diffusion plant for uranium activity. A sample stream is taken from the stack gases and contacted with a volume of moisture-laden air for converting trace levels of uranium hexafluoride, if any, in the stack gases into particulate uranyl fluoride. A continuous strip of filter paper from a supply roll is passed through this sampling stream to intercept and gather any uranyl fluoride in the sampling stream. This filter paper is then passed by an alpha scintillation counting device where any radioactivity on the filter paper is sensed so as to provide a continuous monitoring of the gas stream for activity indicative of the uranium content in the stack gases. 1 fig.
Method for monitoring stack gases for uranium activity
Beverly, Claude R.; Ernstberger, Harold G.
1988-01-01
A method for monitoring the stack gases of a purge cascade of a gaseous diffusion plant for uranium activity. A sample stream is taken from the stack gases and contacted with a volume of moisture-laden air for converting trace levels of uranium hexafluoride, if any, in the stack gases into particulate uranyl fluoride. A continuous strip of filter paper from a supply roll is passed through this sampling stream to intercept and gather any uranyl fluoride in the sampling stream. This filter paper is then passed by an alpha scintillation counting device where any radioactivity on the filter paper is sensed so as to provide a continuous monitoring of the gas stream for activity indicative of the uranium content in the stack gases.
Maruyama, Takashi; Miyamoto, Akira
2017-12-25
In treating non-stenting zones (NSZs), such as the common femoral artery (CFA) and popliteal artery (PA), the best method to treat severely calcified NSZ lesions remains controversial. Here we describe a new method for the treatment of severely calcified PA and CFA lesions using the Crosser® system (CS). After the first wire passed the lesion, the CS was passed through the other wire to create new cracks and lumens (NCAL) in both cases. After creating NCAL around the lumen of the first wire, a large scoring balloon was inflated to crush the severe calcification like a "GLASS CUT" with a glass knife.
A simple filter circuit for denoising biomechanical impact signals.
Subramaniam, Suba R; Georgakis, Apostolos
2009-01-01
We present a simple scheme for denoising non-stationary biomechanical signals with the aim of accurately estimating their second derivative (acceleration). The method is based on filtering in fractional Fourier domains using well-known low-pass filters in a way that amounts to a time-varying cut-off threshold. The resulting algorithm is linear and its design is facilitated by the relationship between the fractional Fourier transform and joint time-frequency representations. The implemented filter circuit employs only three low-order filters while its efficiency is further supported by the low computational complexity of the fractional Fourier transform. The results demonstrate that the proposed method can denoise the signals effectively and is more robust against noise as compared to conventional low-pass filters.
Method and apparatus for generating coherent near 14 and near 16 micron radiation
Krupke, William F.
1977-01-01
A method and apparatus for producing coherent radiation in CO2 vibrational-rotational transitions at wavelengths near 14 and 16 microns. This is accomplished by passing a mixture of N2 and Ar through a glow discharge producing a high vibrational temperature in the N2, passing the excited N2 through a nozzle bank creating a supersonic flow thereof, injecting the CO2 into the supersonic flow creating a population inversion in the CO2, and directing a saturating pulse of radiation near 10.6 or 9.6 microns into the excited CO2, producing coherent radiation at 14 or 16 microns, respectively.
Melchiors, Jacob; Petersen, K; Todsen, T; Bohr, A; Konge, Lars; von Buchwald, Christian
2018-06-01
The attainment of specific, identifiable competencies is the primary measure of progress in the modern medical education system. The system therefore requires a feasible method for accurately assessing competence. Evidence of validity needs to be gathered before an assessment tool can be implemented in the training and assessment of physicians, and according to contemporary validity theory this evidence must be gathered from specific sources in a structured and rigorous manner. Flexible pharyngo-laryngoscopy (FPL) is central to the otorhinolaryngologist. We aim to evaluate the flexible pharyngo-laryngoscopy assessment tool (FLEXPAT) created in a previous study and to establish a pass-fail level for proficiency. Eighteen physicians with different levels of experience (novices, intermediates, and experienced) were recruited to the study. Each performed an FPL on two patients. These procedures were video recorded, blinded, and assessed by two specialists. The score was expressed as the percentage of the possible maximum score. Cronbach's α was used to analyze internal consistency of the data, and a generalizability analysis was performed. The scores of the three groups were explored, and a pass-fail level was determined using the contrasting-groups standard-setting method. Internal consistency was strong, with a Cronbach's α of 0.86. We found a generalizability coefficient of 0.72, sufficient for moderate-stakes assessment. We found a significant difference between the novice and experienced groups (p < 0.001) and a strong correlation between experience and score (Pearson's r = 0.75). The pass/fail level was established at 72% of the maximum score. Applying this pass-fail level in the test population resulted in half of the intermediate group receiving a failing score. We gathered validity evidence for the FLEXPAT according to the contemporary framework described by Messick. Our results support a claim of validity and are comparable to other studies exploring clinical assessment tools. The high rate of physicians underperforming in the intermediate group demonstrates the need for continued educational intervention. Based on our work, we recommend the use of the FLEXPAT in clinical assessment of FPL and the application of a pass-fail level of 72% for proficiency.
NASA Astrophysics Data System (ADS)
Lee, Juhwa; Hwang, Jeongho; Bae, Dongho
2018-03-01
In this paper, welding residual stress analysis and fatigue strength assessment were performed at elevated temperature for a multi-pass dissimilar-material weld between Alloy 617 and P92 steel, which are used in thermal power plants. Multi-pass welding between Alloy 617 and P92 steel was performed under an optimized welding condition determined from repeated pre-test welding. In particular, to improve dissimilar-material weldability, a buttering welding technique was applied on the P92 steel side before multi-pass welding. The welding residual stress distribution at the dissimilar-material weld joint was analyzed numerically using the finite element method and compared with experimental results obtained by the hole-drilling method. Additionally, the fatigue strength of the dissimilar-material weld joint was assessed at room temperature (R.T.), 300, 500, and 700 °C. In the finite element analysis, the numerical peak values (longitudinal 410 MPa, transverse 345 MPa) were higher than the experimental values (longitudinal 298 MPa, transverse 245 MPa). These large quantitative differences between the numerical and experimental results are attributed to assumptions about thermal conductivity, specific heat, the effects of enforced convection of the molten pool, dilution, and the volume change during phase transformation caused by the actual shield gas. The fatigue limit at R.T., 300 °C, 500 °C, and 700 °C was assessed to be 368, 276, 173, and 137 MPa, respectively.
NASA Astrophysics Data System (ADS)
Qin, Lin; Fan, Shanhui; Zhou, Chuanqing
2017-04-01
To implement optical coherence tomography (OCT) angiography on a low-scanning-speed OCT system, we developed a joint phase and amplitude method to generate 3-D angiograms by analysing the frequency distribution of signals from non-moving and moving scatterers and dynamically separating the tissue and blood-flow signals with a high-pass filter. This approach first compensates for the sample motion between adjacent A-lines. Then, from the corrected phase information, a histogram method is used to dynamically determine the bulk non-moving tissue phase, which is regarded as the cut-off frequency of a high-pass filter, and the moving and non-moving scatterers are separated using that high-pass filter. The reconstructed image can visualize the flowing components of moving scatterers and enables volumetric flow mapping combined with the corrected phase information. Furthermore, retinal and choroidal blood vessels can be obtained simultaneously by separating each B-scan into retinal and choroidal parts using a simple segmentation algorithm along the RPE. After compensation of the axial displacements between neighbouring images, the three-dimensional vasculature of ocular vessels has been visualized. Experiments were performed to demonstrate the effectiveness of the proposed method for 3-D vasculature imaging of the human retina and choroid. The results revealed depth-resolved vasculatures in the retina and choroid, suggesting that our approach can be used for noninvasive, three-dimensional angiography with a low-speed clinical OCT system and has great potential for clinical application.
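A minimal sketch of the slow-time high-pass separation idea, assuming repeated B-scans stacked along the first axis and a user-supplied cut-off; the paper derives the cut-off dynamically from the bulk-phase histogram, which is not reproduced here.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def octa_flow_map(repeated_bscans, frame_rate_hz, cutoff_hz):
    """Separate moving (flow) from static scatterers with a slow-time high-pass filter.

    repeated_bscans : (n_repeats, n_z, n_x) OCT signal acquired repeatedly at the same
                      location (n_repeats assumed comfortably larger than ~10)
    frame_rate_hz   : repetition rate of the B-scans
    cutoff_hz       : high-pass cut-off (chosen dynamically in the published method)
    """
    b, a = butter(2, cutoff_hz / (frame_rate_hz / 2.0), btype="highpass")
    hp = filtfilt(b, a, repeated_bscans, axis=0)   # filter along slow time, per pixel
    return np.mean(np.abs(hp) ** 2, axis=0)        # flow energy per pixel (angiogram)
```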
Separation of electrocardiographic from electromyographic signals using dynamic filtration.
Christov, Ivaylo; Raikova, Rositsa; Angelova, Silvija
2018-07-01
Trunk muscle electromyographic (EMG) signals are often contaminated by the electrical activity of the heart. During low or moderate muscle force, these electrocardiographic (ECG) signals disturb the estimation of muscle activity. Butterworth high-pass filters with a cut-off frequency of up to 60 Hz are often used to suppress the ECG signal. Such filters disturb the EMG signal in both the frequency and time domains. A new method based on the dynamic application of a Savitzky-Golay filter is proposed. EMG signals of three left trunk muscles and a pure ECG signal were recorded during different motor tasks. The efficiency of the method was tested and verified both with the experimental EMG signals and with modeled signals obtained by summing the pure ECG signal with EMG signals at different signal-to-noise ratios. The results were compared with those obtained by application of a high-pass, 4th-order Butterworth filter with a cut-off frequency of 30 Hz. The suggested method separates the EMG signal from the ECG signal without EMG signal distortion across its entire frequency range, regardless of amplitudes. The Butterworth filter suppresses the signals in the 0-30 Hz range, thus preventing low-frequency analysis of the EMG signal. An additional disadvantage is that it passes high-frequency ECG signal components, which becomes apparent when the ECG amplitude equals or exceeds the EMG amplitude. The new method was also successfully verified with abnormal ECG signals. Copyright © 2018. Published by Elsevier Ltd.
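A minimal sketch of ECG suppression by Savitzky-Golay estimation and subtraction. The published method applies the filter dynamically; this illustration applies it globally, and the window and polynomial order are assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter

def remove_ecg_component(emg_with_ecg, fs_hz, window_ms=120, polyorder=3):
    """Illustrative ECG suppression: subtract a Savitzky-Golay estimate of the
    smooth ECG-shaped component from the contaminated EMG signal."""
    window = int(fs_hz * window_ms / 1000.0)
    if window % 2 == 0:
        window += 1                      # savgol_filter requires an odd window length
    ecg_estimate = savgol_filter(emg_with_ecg, window_length=window,
                                 polyorder=polyorder)
    return np.asarray(emg_with_ecg) - ecg_estimate
```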
Zakian, A; Tehrani-Sharif, M; Mokhber-Dezfouli, M R; Nouri, M; Constable, P D
2017-04-01
To evaluate and validate a hand-held electrochemical meter (Precision Xtra®) as a screening test for subclinical ketosis and hypoglycaemia in lactating dairy cattle. Method comparison study using a convenience sample. Blood samples were collected into plain tubes from the coccygeal vessels of 181 Holstein cows at 2-4 weeks of lactation during summer in Iran. Blood β-hydroxybutyrate concentration (BHB) and glucose concentration were immediately measured by the electrochemical meter after applying 20 μL of blood to the reagent strip. Passing-Bablok regression and Bland-Altman plots were used to determine the accuracy of the meter against laboratory reference methods (BHB dehydrogenase and glucose oxidase). Serum BHB ranged from 0.1 to 7.3 mmol/L and serum glucose ranged from 0.9 to 5.1 mmol/L. Passing-Bablok regression analysis indicated that the electrochemical meter and reference methods were linearly related for BHB and glucose, with a slope estimate that was not significantly different from 1.00. Clinically minor, but statistically significant, differences were present for the intercept value for Passing-Bablok regression analysis for BHB and glucose, and bias estimates in the Bland-Altman plots for BHB and glucose. The electrochemical meter provided a clinically useful method to detect subclinical ketosis and hypoglycaemia in lactating dairy cows. Compared with other method validation studies using the meter, we attributed the improved performance of the electrochemical meter to application of a fixed volume of blood (20 μL) to the reagent strip, use of the meter in hot ambient conditions and use of glucose oxidase as the reference method for glucose analysis. © 2017 Australian Veterinary Association.
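The two agreement analyses named above can be sketched as follows. The Passing-Bablok version is simplified (it omits the tie and negative-slope corrections of the full procedure), and the data arrays are placeholders.

```python
import numpy as np
from itertools import combinations

def bland_altman(meter, reference):
    """Bias and 95% limits of agreement between meter and reference methods."""
    diff = np.asarray(meter, float) - np.asarray(reference, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

def passing_bablok_simplified(x, y):
    """Simplified Passing-Bablok fit: median of pairwise slopes.

    The full procedure adds corrections for ties and for slopes <= -1,
    which are omitted here for brevity.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2) if x[j] != x[i]]
    slope = np.median(slopes)
    intercept = np.median(y - slope * x)
    return slope, intercept
```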
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dieterich, S; Trestrail, E; Holt, R
2015-06-15
Purpose: To assess whether the TrueBeam HD120 collimator is delivering small IMRT fields accurately and consistently throughout the course of treatment using the SunNuclear PerFraction software. Methods: 7-field IMRT plans for 8 canine patients who passed IMRT QA using SunNuclear MapCheck DQA were selected for this study. The animals were set up using CBCT image guidance. The EPID fluence maps were captured for each treatment field and each treatment fraction, with the first-fraction EPID data serving as the baseline for comparison. The Sun Nuclear PerFraction software was used to compare the EPID data for subsequent fractions using a Gamma (3%/3mm) pass rate of 90%. To simulate requirements for SRS, the data were reanalyzed using a Gamma (3%/1mm) pass rate of 90%. Low-dose, low-gradient, and high-gradient thresholds were used to focus the analysis on clinically relevant parts of the dose distribution. Results: Not all fractions could be analyzed, because during some of the treatment courses the DICOM tags in the EPID images intermittently changed from CU to US (unspecified), which would indicate a temporary loss of EPID calibration. This technical issue is still being investigated. For the remaining fractions, the vast majority (7/8 of patients, 95% of fractions, and 96.6% of fields) passed the less stringent Gamma criteria. The more stringent Gamma criteria caused a drop in pass rate (90% of fractions, 84% of fields). For the patient with the lowest pass rate, wet-towel bolus was used. Another patient with low pass rates experienced masseter muscle wasting. Conclusion: EPID dosimetry using the PerFraction software demonstrated that the majority of fields passed a Gamma (3%/3mm) analysis for IMRT treatments delivered with a TrueBeam HD120 MLC. Pass rates dropped for a DTA of 1mm to model SRS tolerances. PerFraction pass rates can flag missing bolus or internal shields. Sanjeev Saini is an employee of Sun Nuclear Corporation. For this study, a pre-release version of PerFRACTION 1.1 software from Sun Nuclear Corporation was used.
Shuford, Veronica P.; DiVall, Margarita V.; Daugherty, Kimberly K.; Rudolph, Michael J.
2017-01-01
Objective. To examine the extent of financial and faculty resources dedicated to preparing students for NAPLEX and PCOA examinations, and how these investments compare with NAPLEX pass rates. Methods. A 23-item survey was administered to assessment professionals in U.S. colleges and schools of pharmacy (C/SOPs). Institutions were compared by type, age, and student cohort size. Institutional differences were explored according to the costs and types of NAPLEX and PCOA preparation provided, if any, and mean NAPLEX pass rates. Results. Of 134 C/SOPs that received the survey invitation, 91 responded. Nearly 80% of these respondents reported providing some form of NAPLEX preparation. Significantly higher 2015 mean NAPLEX pass rates were found in public institutions, schools that do not provide NAPLEX prep, and schools spending less than $10,000 annually on NAPLEX prep. Only 18 schools reported providing PCOA preparation. Conclusion. Investment in NAPLEX and PCOA preparation resources vary widely across C/SOPs but may increase in the next few years, due to dropping NAPLEX pass rates and depending upon how PCOA data are used. PMID:29109557
Lebovitz, Lisa; Shuford, Veronica P; DiVall, Margarita V; Daugherty, Kimberly K; Rudolph, Michael J
2017-09-01
Objective. To examine the extent of financial and faculty resources dedicated to preparing students for NAPLEX and PCOA examinations, and how these investments compare with NAPLEX pass rates. Methods. A 23-item survey was administered to assessment professionals in U.S. colleges and schools of pharmacy (C/SOPs). Institutions were compared by type, age, and student cohort size. Institutional differences were explored according to the costs and types of NAPLEX and PCOA preparation provided, if any, and mean NAPLEX pass rates. Results. Of 134 C/SOPs that received the survey invitation, 91 responded. Nearly 80% of these respondents reported providing some form of NAPLEX preparation. Significantly higher 2015 mean NAPLEX pass rates were found in public institutions, schools that do not provide NAPLEX prep, and schools spending less than $10,000 annually on NAPLEX prep. Only 18 schools reported providing PCOA preparation. Conclusion. Investment in NAPLEX and PCOA preparation resources vary widely across C/SOPs but may increase in the next few years, due to dropping NAPLEX pass rates and depending upon how PCOA data are used.
Microstructural modification of pure Mg for improving mechanical and biocorrosion properties.
Ahmadkhaniha, D; Järvenpää, A; Jaskari, M; Sohi, M Heydarzadeh; Zarei-Hanzaki, A; Fedel, M; Deflorian, F; Karjalainen, L P
2016-08-01
In this study, the effect of microstructural modification on the mechanical properties and biocorrosion resistance of pure Mg was investigated for tailoring a load-bearing orthopedic biodegradable implant material. This was performed by utilizing friction stir processing (FSP) in one to three passes to refine the grain size. The microstructure was examined with an optical microscope and a scanning electron microscope equipped with an electron backscatter diffraction unit. The X-ray diffraction method was used to identify the texture. Mechanical properties were measured by microhardness and tensile testing. Electrochemical impedance spectroscopy was applied to evaluate corrosion behavior. The results indicate that even applying a single pass of FSP refined the grain size significantly. Increasing the number of FSP passes further refined the structure, increased the mechanical strength, and intensified the dominating basal texture. The best combination of mechanical properties and corrosion resistance was achieved after three FSP passes. In this case, the yield strength was about six times higher than that of the as-cast Mg, and the corrosion resistance was also improved compared to that in the as-cast condition. Copyright © 2016 Elsevier Ltd. All rights reserved.
Satellite image fusion based on principal component analysis and high-pass filtering.
Metwalli, Mohamed R; Nasr, Ayman H; Allah, Osama S Farag; El-Rabaie, S; Abd El-Samie, Fathi E
2010-06-01
This paper presents an integrated method for the fusion of satellite images. Several commercial earth observation satellites carry dual-resolution sensors, which provide high spatial resolution or simply high-resolution (HR) panchromatic (pan) images and low-resolution (LR) multi-spectral (MS) images. Image fusion methods are therefore required to integrate a high-spectral-resolution MS image with a high-spatial-resolution pan image to produce a pan-sharpened image with high spectral and spatial resolutions. Some image fusion methods such as the intensity, hue, and saturation (IHS) method, the principal component analysis (PCA) method, and the Brovey transform (BT) method provide HR MS images, but with low spectral quality. Another family of image fusion methods, such as the high-pass-filtering (HPF) method, operates on the basis of the injection of high frequency components from the HR pan image into the MS image. This family of methods provides less spectral distortion. In this paper, we propose the integration of the PCA method and the HPF method to provide a pan-sharpened MS image with superior spatial resolution and less spectral distortion. The experimental results show that the proposed fusion method retains the spectral characteristics of the MS image and, at the same time, improves the spatial resolution of the pan-sharpened image.
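A minimal sketch of the HPF injection step described above, assuming the MS bands and the pan image are co-registered and the resolution ratio is an integer. The box-filter size and band-wise gain are common choices, not necessarily the authors' exact settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def hpf_pansharpen(ms, pan, ratio, kernel=None):
    """High-pass-filtering (HPF) fusion: inject pan detail into each MS band.

    ms    : (bands, h, w) low-resolution multispectral image
    pan   : (H, W) high-resolution panchromatic image, H = ratio*h, W = ratio*w
    ratio : integer spatial resolution ratio between pan and MS
    """
    kernel = kernel or (2 * ratio + 1)                 # box-filter size, a common choice
    detail = pan.astype(float) - uniform_filter(pan.astype(float), size=kernel)
    ms_up = np.stack([zoom(band, ratio, order=1) for band in ms])   # upsample MS bands
    # weight the injected detail by each band's spread relative to the pan detail
    gains = ms_up.std(axis=(1, 2), keepdims=True) / detail.std()
    return ms_up + gains * detail
```

The PCA variant replaces the first principal component of the MS bands with the histogram-matched pan image before inverting the transform; the paper's contribution is combining the two families.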
Yousuf, Naveed; Violato, Claudio; Zuberi, Rukhsana W
2015-01-01
CONSTRUCT: Authentic standard setting methods will demonstrate high convergent validity evidence of their outcomes, that is, cutoff scores and pass/fail decisions, with most other methods when compared with each other. The objective structured clinical examination (OSCE) was established for valid, reliable, and objective assessment of clinical skills in health professions education. Various standard setting methods have been proposed to identify objective, reliable, and valid cutoff scores on OSCEs. These methods may identify different cutoff scores for the same examinations. Identification of valid and reliable cutoff scores for OSCEs remains an important issue and a challenge. Thirty OSCE stations administered at least twice in the years 2010-2012 to 393 medical students in Years 2 and 3 at Aga Khan University are included. Psychometric properties of the scores are determined. Cutoff scores and pass/fail decisions of the Wijnen, Cohen, Mean-1.5SD, Mean-1SD, Angoff, borderline group, and borderline regression (BL-R) methods are compared with each other and with three variants of cluster analysis using repeated measures analysis of variance and Cohen's kappa. The mean psychometric indices on the 30 OSCE stations are reliability coefficient = 0.76 (SD = 0.12); standard error of measurement = 5.66 (SD = 1.38); coefficient of determination = 0.47 (SD = 0.19); and intergrade discrimination = 7.19 (SD = 1.89). The BL-R and Wijnen methods show the highest convergent validity evidence among the methods on the defined criteria. Angoff and Mean-1.5SD demonstrated the least convergent validity evidence. The three cluster variants showed substantial convergent validity with the borderline methods. Although there was a high level of convergent validity for the Wijnen method, it lacks the theoretical strength to be used for competency-based assessments. The BL-R method is found to show the highest convergent validity evidence for OSCEs with the other standard setting methods used in the present study. We also found that cluster analysis using the mean method can be used for quality assurance of borderline methods. These findings should be further confirmed by studies in other settings.
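As an illustration of how agreement between two standard-setting methods can be quantified, the sketch below computes Cohen's kappa for binary pass/fail decisions; the example decisions are hypothetical placeholders.

```python
import numpy as np

def cohens_kappa(decisions_a, decisions_b):
    """Agreement between the pass/fail (1/0) decisions of two standard-setting methods."""
    a = np.asarray(decisions_a)
    b = np.asarray(decisions_b)
    p_observed = np.mean(a == b)
    # chance agreement from the marginal pass rates of each method
    p_chance = np.mean(a) * np.mean(b) + np.mean(1 - a) * np.mean(1 - b)
    return (p_observed - p_chance) / (1.0 - p_chance)

# Hypothetical example: decisions of two methods on ten examinees
method_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
method_2 = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
print(cohens_kappa(method_1, method_2))  # ~0.52
```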
DeBrew, Jacqueline Kayler; Lewallen, Lynne Porter
2014-04-01
Making the decision to pass or to fail a nursing student is difficult for nurse educators, yet one that all educators face at some point in time. To make this decision, nurse educators draw from their past experiences and personal reflections on the situation. Using the qualitative method of critical incident technique, the authors asked educators to describe a time when they had to make a decision about whether to pass or fail a student in the clinical setting. The findings describe student and faculty factors important in clinical evaluation decisions, demonstrate the benefits of reflective practice to nurse educators, and support the utility of critical incident technique not only as research methodology, but also as a technique for reflective practice. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Gormly, Sherwin J. (Inventor); Flynn, Michael T. (Inventor)
2010-01-01
Method and system for processing a liquid ("contaminant liquid") containing water and urine and/or other contaminants in a two-step process. Urine, or a contaminated liquid similar to and/or containing urine and thus having a relatively high salt and urea content, is passed through an activated carbon filter to remove most of the organic molecules and provide a resulting liquid. The resulting liquid is passed through a semipermeable membrane from a membrane first side to a membrane second side, where a fortified drink having a lower water concentration (higher osmotic potential) than the resulting liquid is positioned. The osmotic pressure differential causes the water, but not most of the remaining inorganic (salt) contaminants, to pass through the membrane into the fortified drink. Optionally, the resulting liquid is allowed to precipitate additional organic molecules before passage through the membrane.
Trace impurities analysis determined by neutron activation in the PbI2 crystal semiconductor
NASA Astrophysics Data System (ADS)
Hamada, M. M.; Oliveira, I. B.; Armelin, M. J.; Mesquita, C. H.
2003-06-01
In this work, a methodology for impurity analysis of PbI2 was studied to investigate the effectiveness of the purification. Commercial salts were purified by multi-pass zone refining and grown by the Bridgman method. To evaluate the purification efficiency, samples from the bottom, middle, and upper sections of the zone-refined (ZR) ingot were analyzed after 200, 300, and 500 purification passes by measuring the impurity concentrations with the neutron activation analysis (NAA) technique. There was a significant reduction of the impurities with the number of purification passes. The reduction efficiency was different for each element, namely: Au>Mn>Co˜Ag>K˜Br. The impurity concentrations of the crystals grown after 200, 300, and 500 passes and of the PbI2 starting material were analyzed by NAA and plasma optical emission spectroscopy.
Thermodynamics Study of Removal of Heavy Metal by TiN-Nanotubes
NASA Astrophysics Data System (ADS)
Mahdavian, Leila
2015-12-01
The ability of TiN-nanotubes to remove lead (Pb(II)) and arsenic (As(III)) ions from aqueous solutions is investigated. The thermodynamic properties of Pb(II) and As(III) ions passing through TiN-nanotubes (TiN-NTs) are calculated with the DFT method using the B3LYP/6-31G** basis set in the Gaussian program package. The results showed that Pb(II) and As(III) ions passing through the nanotube have a low potential near its middle and are trapped there. The thermodynamic properties showed that the passage is spontaneous and favorable because ΔGele (MJ/mol) is negative. The goal of this study is the detection of surface species of TiN-NTs for metal-ion removal by using computer calculations. The structural and thermodynamic properties of ion adsorption on TiN-NTs were studied at room temperature.
Investigation of distributor vane jets to decrease the unsteady load on hydro turbine runner blades
NASA Astrophysics Data System (ADS)
Lewis, B. J.; Cimbala, J. M.; Wouden, A. M.
2012-11-01
As the runner blades of a Francis hydroturbine pass through the wakes created by the wicket gates, they experience a significant change in absolute velocity, flow angle, and pressure. The concept of adding jets to the trailing edge of the wicket gates is proposed as a method for reducing the dynamic load on the hydroturbine runner blades. Computational experiments show a decrease in the velocity variation experienced by the runner blade with the addition of the jets. The decrease in velocity variation resulted in a 43% decrease in global torque variation at the runner passing frequency. However, an increased variation was observed at the wicket gate passing frequency. Also, a 5.7% increase in average global torque was observed with the addition of blowing from the trailing edge of the wicket gates.
Adeniyi, Olasupo Stephen; Ogli, Sunday Adakole; Ojabo, Cecelia Omaile; Musa, Danladi Ibrahim
2013-01-01
Background: This study was carried out to assess the relationship between the various assessment parameters, viz. continuous assessment (CA), multiple choice questions (MCQ), essay, practical, and oral, and the overall performance in the first professional examination in Physiology. Materials and Methods: The results of all 244 students that sat for the examination over 4 years were used. The CA, MCQ, essay, practical, oral and overall performance scores were obtained. All the scores were scaled to 100% to give each parameter equal weighting. Results: Analysis showed that the average overall performance was 50.8 ± 5.3. The best average performance was in practical (55.5 ± 9.1), while the least was in MCQ (44.1 ± 7.8). In the study, 81.1% of students passed orals, 80.3% passed practical, 72.5% passed CA, 58.6% passed essay, 22.5% passed MCQ and 71.7% of students passed on the overall performance. All assessment parameters correlated significantly with overall performance. Continuous assessment had the best correlation (r = 0.801, P = 0.000), while oral had the least correlation (r = 0.277, P = 0.000) with overall performance. Essay was the best predictor of overall performance (β = 0.421, P = 0.000), followed by MCQ (β = 0.356, P = 0.000), while practical was the least predictor of performance (β = 0.162, P = 0.000). Conclusion: We suggest that the department should uphold the principle of continuous assessment and that more effort be made in the design of MCQs so that performance can improve. PMID:24403705
Danchenko, Vitaliy G [Dnipropetrovsk, UA; Noyes, Ronald T [Stillwater, OK; Potapovych, Larysa P [Dnipropetrovsk, UA
2012-02-28
Aeration drying and disinfecting grain crops in bulk and pretreating seeds includes passing through a bulk of grain crops and seeds disinfecting and drying agents including an ozone and air mixture and surrounding air, subdividing the disinfecting and drying agents into a plurality of streams spaced from one another in a vertical direction, and passing the streams at different heights through levels located at corresponding heights of the bulk of grain crops and seeds transversely in a substantially horizontal direction.
NASA Technical Reports Server (NTRS)
Hartley, Frank T. (Inventor)
2007-01-01
An ion thrusting system is disclosed comprising an ionization membrane having at least one area through which a gas is passed, and which ionizes the gas molecules passing therethrough to form ions and electrons, and an accelerator element which accelerates the ions to form thrust. In some variations, a potential applied to the ionization membrane may be reversed to thrust ions in the opposite direction. The ionization membrane may also include an opening with electrodes that are located closer than a mean free path of the gas being ionized. Methods of manufacture and use are also provided.
1983-08-01
drift speed of about 10 km/day, which places it at the circular drifter track on the release day (185). Note that all drifters south of Pt. Arena were... spatial resolution of 4 km. Data Collection: The NOAA satellites are polar-orbiting satellites which pass overhead twice per day. One pass is... then decreases so that the maximum signal-to-noise occurs for 10 km x 10 km boxes. Thus, at a 12-hour separation in time, temperature features of 78...
NASA Astrophysics Data System (ADS)
Kapranov, B. I.; Mashanov, A. P.
2017-04-01
This paper presents the results of research and describes an apparatus for measuring the acoustic characteristics of bulk materials. Ultrasound that has passed through a layer of bulk material then passes through an air gap. The air gap keeps mechanical contacts out of the measuring tract but complicates the measurement technique. Studies were conducted on the example of measuring the acoustic characteristics of a widely used perlite-based sound-proofing material.
Automated brush plating process for solid oxide fuel cells
Long, Jeffrey William
2003-01-01
A method of depositing a metal coating (28) on the interconnect (26) of a tubular, hollow fuel cell (10) contains the steps of providing the fuel cell (10) having an exposed interconnect surface (26); contacting the inside of the fuel cell (10) with a cathode (45) without use of any liquid materials; passing electrical current through a contacting applicator (46) which contains a metal electrolyte solution; passing the current from the applicator (46) to the cathode (45) and contacting the interconnect (26) with the applicator (46) and coating all of the exposed interconnect surface.
Variable flexure-based fluid filter
Brown, Steve B.; Colston, Jr., Billy W.; Marshall, Graham; Wolcott, Duane
2007-03-13
An apparatus and method for filtering particles from a fluid comprises a fluid inlet, a fluid outlet, a variable size passage between the fluid inlet and the fluid outlet, and means for adjusting the size of the variable size passage for filtering the particles from the fluid. An inlet fluid flow stream is introduced to a fixture with a variable size passage. The size of the variable size passage is set so that the fluid passes through the variable size passage but the particles do not pass through the variable size passage.
Using rewards and penalties to obtain desired subject performance
NASA Technical Reports Server (NTRS)
Cook, M.; Jex, H. R.; Stein, A. C.; Allen, R. W.
1981-01-01
The use of operant conditioning procedures, specifically negative reinforcement, to achieve stable learning behavior is described. The critical tracking test (CTT), a method of detecting human operator impairment, was tested. A pass level is set for each subject, based on that subject's asymptotic skill level while sober. It is critical that complete training take place before the individualized pass level is set so that impairment can be detected. The results provide a more general basis for the application of reward/penalty structures in manual control research.
Apparatus and method for heating a material in a transparent ampoule. [crystal growth
NASA Technical Reports Server (NTRS)
Holland, L. R. (Inventor)
1983-01-01
An improved process for heating a material within a fused silica ampoule by radiation through the wall of the ampoule, while simultaneously passing a cooling gas around the ampoule, is described. The radiation passes through a screen of fused silica that removes those components capable of directly heating the silica, thereby allowing the temperature of the material within the ampoule to be raised above the strain point of the ampoule while maintaining the exterior of the ampoule cool enough to prevent rupturing the ampoule.
Apparatus and method for evaporator defrosting
Mei, Viung C.; Chen, Fang C.; Domitrovic, Ronald E.
2001-01-01
An apparatus and method for warm-liquid defrosting of the evaporator of a refrigeration system. The apparatus includes a first refrigerant expansion device that selectively expands refrigerant for cooling the evaporator, a second refrigerant expansion device that selectively expands the refrigerant after the refrigerant has passed through the evaporator, and a defrosting control for the first refrigerant expansion device and second refrigerant expansion device to selectively defrost the evaporator by causing warm refrigerant to flow through the evaporator. The apparatus is alternately embodied with a first refrigerant bypass and/or a second refrigerant bypass for selectively directing refrigerant to respectively bypass the first refrigerant expansion device and the second refrigerant expansion device, and with the defrosting control connected to the first refrigerant bypass and/or the second refrigerant bypass to selectively activate and deactivate the bypasses depending upon the current cycle of the refrigeration system. The apparatus alternately includes an accumulator for accumulating liquid and/or gaseous refrigerant that is then pumped either to a refrigerant receiver or the first refrigerant expansion device for enhanced evaporator defrosting capability. The inventive method of defrosting an evaporator in a refrigeration system includes the steps of compressing refrigerant in a compressor and cooling the refrigerant in the condenser such that the refrigerant is substantially in liquid form, passing the refrigerant substantially in liquid form through the evaporator, and expanding the refrigerant with a refrigerant expansion device after the refrigerant substantially passes through the evaporator.
Non-integer expansion embedding techniques for reversible image watermarking
NASA Astrophysics Data System (ADS)
Xiang, Shijun; Wang, Yi
2015-12-01
This work aims at reducing the embedding distortion of prediction-error expansion (PE)-based reversible watermarking. In the classical PE embedding method proposed by Thodi and Rodriguez, the predicted value is rounded to an integer for integer prediction-error expansion (IPE) embedding. The rounding operation places a constraint on a predictor's performance. In this paper, we propose a non-integer PE (NIPE) embedding approach, which can process non-integer prediction errors for embedding data into an audio or image file by expanding only the integer element of a prediction error while keeping its fractional element unchanged. The advantage of the NIPE technique is that it can bring a predictor into full play by estimating a sample/pixel in a noncausal way in a single pass, since there is no rounding operation. A new noncausal image prediction method that estimates a pixel from its four immediate neighbors in a single pass is included in the proposed scheme. The proposed noncausal image predictor can provide better performance than Sachnev et al.'s noncausal double-set prediction method (in which prediction in two passes introduces distortion because half of the pixels are predicted from watermarked pixels). In comparison with several existing state-of-the-art works, experimental results have shown that the NIPE technique with the new noncausal prediction strategy can reduce the embedding distortion for the same embedding payload.
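A minimal sketch of classical IPE embedding/extraction and of the non-integer variant's core idea (expand the integer part of the error, keep the fraction); the single-pixel framing and variable names are illustrative.

```python
import math

def ipe_embed(pixel, predicted, bit):
    """Classical integer prediction-error expansion (Thodi & Rodriguez style)."""
    e = pixel - predicted              # integer prediction error
    return predicted + 2 * e + bit     # expanded error carries one payload bit

def ipe_extract(marked_pixel, predicted):
    e_expanded = marked_pixel - predicted
    bit = e_expanded & 1
    return predicted + (e_expanded >> 1), bit   # recover original pixel and bit

def nipe_embed(pixel, predicted, bit):
    """Sketch of the non-integer variant: expand only the integer part of the
    (possibly non-integer) prediction error; its fraction is left unchanged,
    so the marked pixel value stays an integer."""
    e = pixel - predicted                          # may be non-integer
    e_int, e_frac = math.floor(e), e - math.floor(e)
    return predicted + 2 * e_int + bit + e_frac

# Round-trip check for the integer scheme:
# ipe_extract(ipe_embed(120, 118, 1), 118) == (120, 1)
```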
ERIC Educational Resources Information Center
Bergbower, Matthew L.
2017-01-01
For many political science programs, research methods courses are a fundamental component of the recommended undergraduate curriculum. However, instructors and students often see these courses as the most challenging. This study explores when it is most appropriate for political science majors to enroll and pass a research methods course. The…
METHOD OF OPERATING A NEUTRONIC REACTOR
Turkevich, A.
1963-01-22
This patent relates to one step in a method of operating a neutronic reactor consisting of a slurry of fissionable material in heavy water. Deuterium gas is passed through the slurry to sweep gaseous fission products therefrom and the deuterium is then separated from the gaseous fission products. (AEC)
NASA Astrophysics Data System (ADS)
Song, Qing; Zhu, Sijia; Yan, Han; Wu, Wenqian
2008-03-01
The parallel light projection method for diameter measurement projects the workpiece to be measured onto the photosensitive units of a CCD, but the original signal output from the CCD cannot be directly used for counting or measurement. The weak signal with high-frequency noise must first be filtered and amplified. This paper introduces an RC low-pass filter and a multiple-feedback second-order low-pass filter with infinite gain. Additionally, there is always dispersion of the light band, and the output signal has a transition between the irradiated area and the shadow, because of instability of the light-source intensity and imperfection of the optical-system adjustment. To obtain exactly the shadow size related to the workpiece diameter, binary-value processing is necessary to achieve a square wave. A comparison method or a differential method can be adopted for binary-value processing. There are two ways to decide the threshold value when using a voltage comparator: the fixed-level method and the floated-level method; the latter has higher accuracy. The differential method first outputs two spike pulses of opposite polarity from the rising and falling edges of the video signal through a differentiating circuit; the rising edge of the differentiated signal is then acquired by a half-wave rectifying circuit. After passing through the zero-crossing comparator and the maintain-resistance edge trigger, the square wave which indicates the measured size is acquired. It is then filled with standard pulses and counted by the counter. Data acquisition and information processing are accomplished by the computer and the control software. This paper describes in detail the design and analysis of the filter circuit, the binary-value processing circuit, and the interface circuit to the computer.
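A minimal sketch of the floated-level binarization idea, assuming a digitized CCD line signal; the percentile-based bright/dark estimates are an assumption standing in for the analog comparator.

```python
import numpy as np

def binarize_ccd_line(signal, fraction=0.5):
    """'Floated level' binarization of a CCD line signal.

    The threshold tracks the actual bright (irradiated) and dark (shadow) levels
    of each scan instead of a fixed voltage, which makes the shadow-width
    measurement less sensitive to light-source drift.
    """
    signal = np.asarray(signal, float)
    bright = np.percentile(signal, 95)   # irradiated level
    dark = np.percentile(signal, 5)      # shadow level
    threshold = dark + fraction * (bright - dark)
    return (signal > threshold).astype(np.uint8)

def shadow_width_pixels(binary_line):
    """Count shadow pixels; multiplied by the pixel pitch this gives the diameter."""
    return int(np.sum(binary_line == 0))
```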
PC_Eyewitness: evaluating the New Jersey method.
MacLin, Otto H; Phelan, Colin M
2007-05-01
One important variable in eyewitness identification research is lineup administration procedure. Lineups administered sequentially (one at a time) have been shown to reduce the number of false identifications in comparison with those administered simultaneously (all at once). As a result, some policymakers have adopted sequential administration. However, they have made slight changes to the method used in psychology laboratories. Eyewitnesses in the field are allowed to take multiple passes through a lineup, whereas participants in the laboratory are allowed only one pass. PC_Eyewitness (PCE) is a computerized system used to construct and administer simultaneous or sequential lineups in both the laboratory and the field. It is currently being used in laboratories investigating eyewitness identification in the United States, Canada, and abroad. A modified version of PCE is also being developed for a local police department. We developed a new module for PCE, the New Jersey module, to examine the effects of a second pass. We found that the sequential advantage was eliminated when the participants were allowed to view the lineup a second time. The New Jersey module, and steps we are taking to improve on the module, are presented here and are being made available to the research and law enforcement communities.
Earth Observing System Covariance Realism
NASA Technical Reports Server (NTRS)
Zaidi, Waqar H.; Hejduk, Matthew D.
2016-01-01
The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) Goodness-of-Fit (GOF) method is employed to determine whether a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance for which up to 80 to 90% of the covariance propagation timespan passes the GOF tests (against a 60% minimum passing threshold), a quite satisfactory and useful result.
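A minimal sketch of the GOF check, assuming 3-D position residuals and propagated covariances are available; a Kolmogorov-Smirnov test is used here as one possible ECDF GOF statistic, not necessarily the one used in the paper.

```python
import numpy as np
from scipy import stats

def covariance_realism_test(residuals, covariances, alpha=0.05):
    """ECDF goodness-of-fit check of squared Mahalanobis distances against chi2(3).

    residuals   : (n, 3) differences between propagated and definitive positions
    covariances : (n, 3, 3) propagated position covariances at the same epochs
    """
    d2 = np.array([r @ np.linalg.solve(P, r) for r, P in zip(residuals, covariances)])
    # compare the empirical distribution with the hypothesized 3-DoF chi-squared parent
    stat, p_value = stats.kstest(d2, stats.chi2(df=3).cdf)
    return p_value, p_value > alpha   # True means the covariance passes at this significance level
```

Process noise would be increased and the test repeated until the pass criterion is met, mirroring the tuning loop described above.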
Ripple FPN reduced algorithm based on temporal high-pass filter and hardware implementation
NASA Astrophysics Data System (ADS)
Li, Yiyang; Li, Shuo; Zhang, Zhipeng; Jin, Weiqi; Wu, Lei; Jin, Minglei
2016-11-01
Cooled infrared detector arrays always suffer from undesired ripple fixed-pattern noise (FPN) when observing sky scenes. Ripple FPN seriously affects the imaging quality of a thermal imager, especially for small-target detection and tracking, and it is hard to eliminate with calibration-based techniques or current scene-based nonuniformity algorithms. In this paper, we present a modified space low-pass and temporal high-pass nonuniformity correction algorithm using an adaptive time-domain threshold (THP&GM). The threshold is designed to significantly reduce ghosting artifacts. We test the algorithm on real infrared sequences in comparison with several previously published methods. The algorithm not only effectively corrects common FPN such as stripe noise, but also has a clear advantage over current methods in terms of detail protection and convergence speed, especially for ripple FPN correction. Furthermore, we present our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA). The hardware implementation of the algorithm has two advantages: (1) low resource consumption, and (2) small hardware delay (less than 20 lines). The hardware has been successfully applied in an actual system.
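A minimal sketch of a temporal high-pass correction with a crude motion threshold standing in for the paper's adaptive time-domain threshold; parameter values and the recursive FPN estimate are assumptions.

```python
import numpy as np

def temporal_highpass_nuc(frames, time_constant=50, motion_threshold=10.0):
    """Temporal high-pass nonuniformity correction sketch.

    frames           : iterable of 2D arrays (raw infrared frames)
    time_constant    : recursion length of the temporal low-pass (FPN) estimate
    motion_threshold : pixels whose frame-to-frame change exceeds this value are
                       treated as moving scene content and excluded from the FPN
                       estimate (stand-in for the adaptive threshold)
    """
    alpha = 1.0 / time_constant
    fpn, prev, corrected = None, None, []
    for frame in frames:
        frame = frame.astype(float)
        if fpn is None:
            fpn = frame.copy()                     # estimate converges over time
        else:
            static = np.abs(frame - prev) < motion_threshold
            fpn[static] = (1 - alpha) * fpn[static] + alpha * frame[static]
        corrected.append(frame - fpn + fpn.mean())  # subtract FPN, keep mean level
        prev = frame
    return corrected
```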
Hadamard multimode optical imaging transceiver
Cooke, Bradly J; Guenther, David C; Tiee, Joe J; Kellum, Mervyn J; Olivas, Nicholas L; Weisse-Bernstein, Nina R; Judd, Stephen L; Braun, Thomas R
2012-10-30
Disclosed is a method and system for simultaneously acquiring and producing results for multiple image modes using a common sensor without optical filtering, scanning, or other moving parts. The system and method utilize the Walsh-Hadamard correlation detection process (e.g., functions/matrix) to provide an all-binary structure that permits seamless bridging between analog and digital domains. An embodiment may capture an incoming optical signal at an optical aperture, convert the optical signal to an electrical signal, pass the electrical signal through a Low-Noise Amplifier (LNA) to create an LNA signal, pass the LNA signal through one or more correlators where each correlator has a corresponding Walsh-Hadamard (WH) binary basis function, calculate a correlation output coefficient for each correlator as a function of the corresponding WH binary basis function in accordance with Walsh-Hadamard mathematical principles, digitize each of the correlation output coefficient by passing each correlation output coefficient through an Analog-to-Digital Converter (ADC), and performing image mode processing on the digitized correlation output coefficients as desired to produce one or more image modes. Some, but not all, potential image modes include: multi-channel access, temporal, range, three-dimensional, and synthetic aperture.
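A minimal sketch of the Walsh-Hadamard correlation step, with the analog correlators replaced by a matrix-vector product on digitized samples; the block length is an assumption.

```python
import numpy as np
from scipy.linalg import hadamard

def wh_correlation_coefficients(samples, order=8):
    """Correlate a block of detector samples against Walsh-Hadamard basis functions.

    In hardware each correlator multiplies the LNA output by a +/-1 sequence and
    integrates; here the same operation is a matrix-vector product on a block of
    at least 2**order digitized samples.
    """
    n = 2 ** order
    basis = hadamard(n)                  # rows are +/-1 Walsh-Hadamard basis functions
    block = np.asarray(samples[:n], dtype=float)
    return basis @ block / n             # one coefficient per correlator

# Since the Sylvester Hadamard matrix satisfies H @ H = n * I, the original block
# can be recovered in post-processing as hadamard(2**order) @ coefficients.
```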
Infrared fix pattern noise reduction method based on Shearlet Transform
NASA Astrophysics Data System (ADS)
Rong, Shenghui; Zhou, Huixin; Zhao, Dong; Cheng, Kuanhong; Qian, Kun; Qin, Hanlin
2018-06-01
Non-uniformity correction (NUC) is an effective way to reduce fixed-pattern noise (FPN) and improve infrared image quality. Temporal high-pass NUC is a practical class of NUC methods because of its simple implementation; however, traditional temporal high-pass NUC methods rely heavily on scene motion and suffer from image ghosting and blurring. Thus, this paper proposes an improved NUC method based on the Shearlet Transform (ST). First, the raw infrared image is decomposed into multiscale and multi-orientation subbands by ST, and the FPN component exists mainly in certain high-frequency subbands. Then, the high-frequency subbands are processed by a temporal filter to extract the FPN owing to its low-frequency characteristics. In addition, each subband has a confidence parameter, estimated adaptively from the subband variance, that determines the degree of FPN. Finally, NUC is achieved by subtracting the estimated FPN component from the original subbands, and the corrected infrared image is obtained by the inverse ST. The performance of the proposed method is evaluated thoroughly with real and synthetic infrared image sequences. Experimental results indicate that the proposed method can heavily reduce FPN with less roughness and lower RMSE.
Video quality assessment using motion-compensated temporal filtering and manifold feature similarity
Yu, Mei; Jiang, Gangyi; Shao, Feng; Peng, Zongju
2017-01-01
A well-performing video quality assessment (VQA) method should be consistent with the human visual system for better prediction accuracy. In this paper, we propose a VQA method using motion-compensated temporal filtering (MCTF) and manifold feature similarity. To be more specific, a group of frames (GoF) is first decomposed into a temporal high-pass component (HPC) and a temporal low-pass component (LPC) by MCTF. Following this, manifold feature learning (MFL) and phase congruency (PC) are used to predict the quality of the temporal LPC and the temporal HPC, respectively. The quality measures of the LPC and the HPC are then combined as the GoF quality. A temporal pooling strategy is subsequently used to integrate GoF qualities into an overall video quality. The proposed VQA method appropriately processes temporal information in video by MCTF and the temporal pooling strategy, and simulates human visual perception by MFL. Experiments on a publicly available video quality database showed that, in comparison with several state-of-the-art VQA methods, the proposed VQA method achieves better consistency with subjective video quality and can predict video quality more accurately. PMID:28445489
Forest canopy height estimation using double-frequency repeat pass interferometry
NASA Astrophysics Data System (ADS)
Karamvasis, Kleanthis; Karathanassi, Vassilia
2015-06-01
In recent years, many efforts have been made to assess forest stand parameters from remote sensing data as a means of estimating the above-ground carbon stock of forests in the context of the Kyoto protocol. Synthetic aperture radar interferometry (InSAR) techniques have gained traction in the last decade as a viable technology for vegetation parameter estimation. Many works have shown that forest canopy height, which is a critical parameter for quantifying the terrestrial carbon cycle, can be estimated with InSAR. However, research is still needed to further understand the interaction of SAR signals with the forest canopy and to develop an operational method for forestry applications. This work discusses the use of repeat-pass interferometry with ALOS PALSAR (L-band) HH-polarized and COSMO-SkyMed (X-band) HH-polarized acquisitions over the Taxiarchis forest (Chalkidiki, Greece) in order to produce accurate digital elevation models (DEMs) and estimate canopy height with interferometric processing. The effect of wavelength-dependent penetration depth into the canopy is known to be strong and could potentially lead to forest canopy height mapping using dual-wavelength SAR interferometry at X- and L-band. The method is based on scattering phase center separation at different wavelengths. It involves the generation of a terrain elevation model underneath the forest canopy from repeat-pass L-band InSAR data as well as the generation of a canopy surface elevation model from repeat-pass X-band InSAR data. The terrain model is then used to remove the terrain component from the repeat-pass interferometric X-band elevation model, so as to enable the forest canopy height estimation. The canopy height results were compared to a field survey with 6.9 m root mean square error (RMSE). The effects of vegetation characteristics, SAR incidence angle and view geometry, and terrain slope on the accuracy of the results have also been studied in this work.
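A minimal sketch of the dual-frequency differencing step, assuming co-registered X-band and L-band InSAR elevation grids.

```python
import numpy as np

def canopy_height(x_band_dsm, l_band_dtm):
    """Canopy height model from repeat-pass InSAR surfaces.

    x_band_dsm : X-band InSAR elevation (scattering phase centre near the canopy top)
    l_band_dtm : L-band InSAR elevation (phase centre near the ground under the canopy)
    Grids are assumed co-registered; negative differences are clipped to zero.
    """
    return np.clip(np.asarray(x_band_dsm) - np.asarray(l_band_dtm), 0.0, None)

def rmse(estimated, field_measured):
    """Root mean square error against field-survey heights (the paper reports 6.9 m)."""
    d = np.asarray(estimated, float) - np.asarray(field_measured, float)
    return float(np.sqrt(np.mean(d ** 2)))
```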
NASA Technical Reports Server (NTRS)
Mumaw, Susan J. (Inventor); Evers, Jeffrey (Inventor); Craig, Calvin L., Jr. (Inventor); Walker, Stuart D. (Inventor)
2001-01-01
The invention is a circuit and method of limiting the charging current and voltage from a power supply network applied to an individual cell of a plurality of cells making up a battery being charged in series. It is particularly designed for use with batteries that can be damaged by overcharging, such as Lithium-ion type batteries. In detail, the method includes the following steps: 1) sensing the actual voltage level of the individual cell; 2) comparing the actual voltage level of the individual cell with a reference value and providing an error signal representative thereof; and 3) by-passing the charging current around the individual cell as necessary to keep the individual cell voltage level generally equal to a specific voltage level while continuing to charge the remaining cells. Preferably this is accomplished by by-passing the charging current around the individual cell if the actual voltage level is above the specific voltage level and allowing the charging current to the individual cell if the actual voltage level is equal to or less than the specific voltage level. In the step of by-passing the charging current, the by-passed current is transferred at a proper voltage level to the power supply. In the by-pass circuit, a voltage comparison circuit is used to compare the actual voltage level of the individual cell with a reference value and to provide an error signal representative thereof. A third circuit, designed to be responsive to the error signal, is provided for maintaining the individual cell voltage level generally equal to the specific voltage level. Circuitry is provided in the third circuit for by-passing charging current around the individual cell if the actual voltage level is above the specific voltage level and transferring the excess charging current to the power supply network. The circuitry also allows charging of the individual cell if the actual voltage level is equal to or less than the specific voltage level.
MPIRUN: A Portable Loader for Multidisciplinary and Multi-Zonal Applications
NASA Technical Reports Server (NTRS)
Fineberg, Samuel A.; Woodrow, Thomas S. (Technical Monitor)
1994-01-01
Multidisciplinary and multi-zonal applications are an important class of applications in the area of Computational Aerosciences. In these codes, two or more distinct parallel programs or copies of a single program are utilized to model a single problem. To support such applications, it is common to use a programming model where a program is divided into several single program multiple data stream (SPMD) applications, each of which solves the equations for a single physical discipline or grid zone. These SPMD applications are then bound together to form a single multidisciplinary or multi-zonal program in which the constituent parts communicate via point-to-point message passing routines. One method for implementing the message passing portion of these codes is with the new Message Passing Interface (MPI) standard. Unfortunately, this standard only specifies the message passing portion of an application, but does not specify any portable mechanisms for loading an application. MPIRUN was developed to provide a portable means for loading MPI programs, and was specifically targeted at multidisciplinary and multi-zonal applications. Programs using MPIRUN for loading and MPI for message passing are then portable between all machines supported by MPIRUN. MPIRUN is currently implemented for the Intel iPSC/860, TMC CM5, IBM SP-1 and SP-2, Intel Paragon, and workstation clusters. Further, MPIRUN is designed to be simple enough to port easily to any system supporting MPI.
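A minimal sketch of the SPMD, point-to-point message-passing model described above, using mpi4py purely for illustration (the original codes targeted the MPI-1 C/Fortran bindings, and MPIRUN here is NASA's loader rather than the modern mpirun launcher).

```python
# Hypothetical launch with a loader, e.g.:  mpirun -np 4 python coupled_zones.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Pretend each rank owns one grid zone and exchanges boundary data with rank 0.
boundary = np.full(8, rank, dtype=np.float64)

if rank == 0:
    for source in range(1, comm.Get_size()):
        incoming = np.empty(8, dtype=np.float64)
        comm.Recv(incoming, source=source, tag=source)   # point-to-point receive
        print(f"zone 0 received boundary data from zone {source}")
else:
    comm.Send(boundary, dest=0, tag=rank)                # point-to-point send
```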
Villanueva, Ariadna; Guanche, Humberto
2016-11-01
Aim To describe the effect of education on environmental cleaning in patient care areas using adenosine triphosphate (ATP) readings. Method A quality improvement initiative was developed in a community hospital in Qatar. Over a two-month period, an infection-control practitioner monitored ATP readings in patient care areas, at any time and regardless of the time of the previous disinfection. The initiative included staff education, use of ATP readings and the drawing up of quarterly quality reports. The ATP readings were considered 'pass', meaning well cleaned, or 'fail', meaning non-cleaned, according to the following standards: >250 relative light units (RLU) in non-critical units and <200 RLU for critical units. The proportion of test passes was calculated per 100 tests performed. Results A total of 1,617 tests were performed, after which 1,259 (78%) surfaces were identified as well cleaned. The lowest proportion of non-pass and higher ATP readings was observed in non-critical areas. The test points with the lowest proportion of passes were telephones (40.5%), a medication dispensing system (58.5%), an oximeter (66.7%) and callbox buttons (67.6%). A sustained increase in test passes was observed during the study period. Conclusion There was an improvement in environmental cleaning due to monitoring of ATP on surfaces and staff education.
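A small sketch of the pass/fail classification and the pass-rate calculation per 100 tests. The thresholds are taken from the abstract, but the pass direction (readings at or below the limit counting as clean) and the function name are interpretive assumptions.

```python
def atp_pass(rlu, critical_area):
    """Classify an ATP reading as a cleaning 'pass'.

    Uses the abstract's thresholds (200 RLU for critical units, 250 RLU
    for non-critical units); treating readings at or below the limit as
    a pass is an assumption made for this sketch.
    """
    limit = 200 if critical_area else 250
    return rlu <= limit

# Illustrative readings: (RLU value, is the surface in a critical area?)
readings = [(150, True), (320, False), (180, False), (240, True)]
passes = sum(atp_pass(rlu, crit) for rlu, crit in readings)
pass_rate_per_100 = 100.0 * passes / len(readings)
print(f"{passes}/{len(readings)} passes -> {pass_rate_per_100:.1f} per 100 tests")
```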
Fundamentals of Laparoscopic Surgery: A Surgical Skills Assessment Tool in Gynecology
Arden, Deborah; Dodge, Laura E.; Zheng, Bin; Ricciotti, Hope A.
2011-01-01
Objective: To describe our experience with the Fundamentals of Laparoscopic Surgery (FLS) program as a teaching and assessment tool for basic laparoscopic competency among gynecology residents. Methods: A prospective observational study was conducted at a single academic institution. Before the FLS program was introduced, baseline FLS testing was offered to residents and gynecology division directors. Test scores were analyzed by training level and self-reported surgical experience. After implementing a minimally invasive gynecologic surgical curriculum, third-year residents were retested. Results: The pass rates for baseline FLS skills testing were 0% for first-year residents, 50% for second-year residents, and 75% for third- and fourth-year residents. The pass rates for baseline cognitive testing were 60% for first- and second-year residents, 67% for third-year residents, and 40% for fourth-year residents. When comparing junior and senior residents, there was a significant difference in pass rates for the skills test (P=.007) but not the cognitive test (P=.068). Self-reported surgical experience strongly correlated with skills scores (r-value=0.97, P=.0048), but not cognitive scores (r-value=0.20, P=.6265). After implementing a curriculum, 100% of the third-year residents passed the skills test, and 92% passed the cognitive examination. Conclusions: The FLS skills test may be a valuable assessment tool for gynecology residents. The cognitive test may need further adaptation for applicability to gynecologists. PMID:21902937
Li, Zhaoyang; Kurita, Takashi; Miyanaga, Noriaki
2017-10-20
Zigzag and non-zigzag beam waist shifts in a multiple-pass zigzag slab amplifier are investigated based on the propagation of a Gaussian beam. Different incident angles in the zigzag and non-zigzag planes would introduce a direction-dependent waist-shift-difference, which distorts the beam quality in both the near- and far-fields. The theoretical model and analytical expressions of this phenomenon are presented, and intensity distributions in the two orthogonal planes are simulated and compared. A geometrical optics compensation method by a beam with 90° rotation is proposed, which not only could correct the direction-dependent waist-shift-difference but also possibly average the traditional thermally induced wavefront-distortion-difference between the horizontal and vertical beam directions.
Adaptive control of large space structures using recursive lattice filters
NASA Technical Reports Server (NTRS)
Sundararajan, N.; Goglia, G. L.
1985-01-01
The use of recursive lattice filters for identification and adaptive control of large space structures is studied. Lattice filters were used to identify the structural dynamics model of the flexible structures. This identification model is then used for adaptive control. Before the identified model and control laws are integrated, the identified model is passed through a series of validation procedures and only when the model passes these validation procedures is control engaged. This type of validation scheme prevents instability when the overall loop is closed. Another important area of research, namely that of robust controller synthesis, was investigated using frequency domain multivariable controller synthesis methods. The method uses the Linear Quadratic Gaussian/Loop Transfer Recovery (LQG/LTR) approach to ensure stability against unmodeled higher frequency modes and achieves the desired performance.
Single Pass Streaming BLAST on FPGAs*†
Herbordt, Martin C.; Model, Josh; Sukhwani, Bharat; Gu, Yongfeng; VanCourt, Tom
2008-01-01
Approximate string matching is fundamental to bioinformatics and has been the subject of numerous FPGA acceleration studies. We address issues with respect to FPGA implementations of both BLAST- and dynamic-programming- (DP) based methods. Our primary contribution is a new algorithm for emulating the seeding and extension phases of BLAST. This operates in a single pass through a database at streaming rate, and with no preprocessing other than loading the query string. Moreover, it emulates parameters tuned to maximum possible sensitivity with no slowdown. While current DP-based methods also operate at streaming rate, generating results can be cumbersome. We address this with a new structure for data extraction. We present results from several implementations showing order of magnitude acceleration over serial reference code. A simple extension assures compatibility with NCBI BLAST. PMID:19081828
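A simplified software sketch (in Python, not the FPGA implementation) of the two BLAST phases the abstract emulates: exact-match seeding against a query word table built once, followed by ungapped rightward extension, all in a single left-to-right pass over the database stream. The word size, scoring values, and X-drop threshold are illustrative assumptions.

```python
def blast_like_single_pass(query, database, w=4, match=1, mismatch=-1, x_drop=3):
    """Seed-and-extend in one streaming pass over `database`; only the query is preprocessed."""
    # Seeding table: every length-w word of the query and where it occurs.
    words = {}
    for i in range(len(query) - w + 1):
        words.setdefault(query[i:i + w], []).append(i)

    hits = []
    for j in range(len(database) - w + 1):          # single streaming pass
        for i in words.get(database[j:j + w], []):  # seed hit at (query i, db j)
            # Ungapped extension to the right with an X-drop cutoff
            # (real BLAST also extends leftward; omitted here for brevity).
            score = best = w * match
            qi, dj = i + w, j + w
            end = (qi, dj)
            while qi < len(query) and dj < len(database):
                score += match if query[qi] == database[dj] else mismatch
                qi, dj = qi + 1, dj + 1
                if score > best:
                    best, end = score, (qi, dj)
                elif best - score > x_drop:
                    break
            hits.append((i, j, best, end))
    return hits

print(blast_like_single_pass("ACGTACGT", "TTACGTACGAA"))
```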
Statistical physics of hard combinatorial optimization: Vertex cover problem
NASA Astrophysics Data System (ADS)
Zhao, Jin-Hua; Zhou, Hai-Jun
2014-07-01
Typical-case computation complexity is a research topic at the boundary of computer science, applied mathematics, and statistical physics. In the last twenty years, the replica-symmetry-breaking mean field theory of spin glasses and the associated message-passing algorithms have greatly deepened our understanding of typical-case computation complexity. In this paper, we use the vertex cover problem, a basic nondeterministic-polynomial (NP)-complete combinatorial optimization problem of wide application, as an example to introduce the statistical physical methods and algorithms. We do not go into the technical details but emphasize mainly the intuitive physical meanings of the message-passing equations. A reader unfamiliar with the field should be able to understand, to a large extent, the physics behind the mean field approaches and to adapt the mean field methods to solving other optimization problems.
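For concreteness, here is a plain greedy 2-approximation for minimum vertex cover, a classical baseline sketched in Python; it is not the replica-symmetry-breaking or message-passing machinery discussed in the paper, just the simplest way to see the problem the mean field methods address.

```python
def vertex_cover_2approx(edges):
    """Greedy matching-based 2-approximation for minimum vertex cover.

    Repeatedly pick an uncovered edge and add both endpoints to the cover;
    the result is at most twice the optimum. This is a classical baseline,
    not the message-passing algorithm described in the paper.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Small example graph (a path plus a chord).
edges = [(1, 2), (2, 3), (3, 4), (4, 5), (2, 4)]
print(vertex_cover_2approx(edges))
```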
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-18
.... CP13-553-000] Sabine Pass Liquefaction Expansion, LLC, Sabine Pass Liquefaction, LLC, and Sabine Pass... 30, 2013, Sabine Pass Liquefaction Expansion, LLC, Sabine Pass Liquefaction, LLC, and Sabine Pass LNG, L.P. (collectively referred to as Sabine Pass) filed with the Federal Energy Regulatory Commission...
NASA Astrophysics Data System (ADS)
Poursina, Mohammad; Anderson, Kurt S.
2014-08-01
This paper presents a novel algorithm to approximate the long-range electrostatic potential field in the Cartesian coordinates applicable to 3D coarse-grained simulations of biopolymers. In such models, coarse-grained clusters are formed via treating groups of atoms as rigid and/or flexible bodies connected together via kinematic joints. Therefore, multibody dynamic techniques are used to form and solve the equations of motion of such coarse-grained systems. In this article, the approximations for the potential fields due to the interaction between a highly negatively/positively charged pseudo-atom and charged particles, as well as the interaction between clusters of charged particles, are presented. These approximations are expressed in terms of physical and geometrical properties of the bodies such as the entire charge, the location of the center of charge, and the pseudo-inertia tensor about the center of charge of the clusters. Further, a novel substructuring scheme is introduced to implement the presented far-field potential evaluations in a binary tree framework as opposed to the existing quadtree and octree strategies of implementing the fast multipole method. Using the presented Lagrangian grids, the electrostatic potential is recursively calculated via sweeping two passes: assembly and disassembly. In the assembly pass, adjacent charged bodies are combined together to form new clusters. Then, the potential field of each cluster due to its interaction with faraway resulting clusters is recursively calculated in the disassembly pass. The method is highly compatible with multibody dynamic schemes to model coarse-grained biopolymers. Since the proposed method takes advantage of constant physical and geometrical properties of rigid clusters, an improvement in the overall computational cost is observed compared with the traditional application of the fast multipole method.
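A minimal sketch of the leading-order far-field idea the paper builds on: approximating a cluster's potential at a distant point using only its total charge and center of charge. The higher-order pseudo-inertia corrections and the binary-tree assembly/disassembly passes are omitted, and the Coulomb constant is set to 1; all numbers are illustrative.

```python
import numpy as np

def farfield_potential(charges, positions, field_point):
    """Leading-order far-field approximation of a cluster's potential.

    Uses only the total charge Q and the center of charge, i.e. the
    monopole term of a multipole expansion; k_e is set to 1 for simplicity.
    """
    q = np.asarray(charges, dtype=float)
    r = np.asarray(positions, dtype=float)
    Q = q.sum()
    center = (q[:, None] * r).sum(axis=0) / Q          # center of charge
    return Q / np.linalg.norm(np.asarray(field_point) - center)

def exact_potential(charges, positions, field_point):
    """Direct pairwise sum, for comparison with the far-field approximation."""
    d = np.linalg.norm(np.asarray(positions, float) - np.asarray(field_point, float), axis=1)
    return float(np.sum(np.asarray(charges, float) / d))

charges = [1.0, 2.0, -0.5]
positions = [[0.0, 0.0, 0.0], [0.3, 0.1, 0.0], [0.0, 0.2, 0.1]]
far_point = [10.0, 0.0, 0.0]
print(farfield_potential(charges, positions, far_point),
      exact_potential(charges, positions, far_point))
```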
Sturnieks, Daina L; Yak, Sin Lin; Ratanapongleka, Mayna; Lord, Stephen R; Menant, Jasmine C
2018-06-01
Fatigue is a common complaint in older people. Laboratory-induced muscle fatigue has been found to affect physical functions in older populations but these protocols are rigorous and are unlikely to accurately reflect daily activities. This study used an ecological approach to determine the effects of a busy day on self-reported fatigue and fall-related measures of physical and cognitive function in older people. Fifty community-dwelling adult volunteers, aged 60-88 (mean 73) years participated in this randomised crossover trial. Participants undertook assessments of balance, strength, gait, mobility, cognitive function and self-reported fatigue, before and after a planned rest day and a planned busy day (randomly allocated) at least one week apart. Participants wore an activity monitor on both the rest and busy days. On average, participants undertook twice as many steps and 2.5 times more minutes of activity on the busy, compared with the rest day. Participants had a significant increase in self-reported fatigue on the afternoon of the busy day and no change on the rest day. Repeated measures ANOVAs found no significant day (rest/busy) × time (am/pm) interaction effects, except for the timed up and go test of mobility, resulting from relatively improved mobility performance over the rest day, compared with the busy day. This study showed few effects of a busy day on physical and cognitive performance tests associated with falls in older people. Copyright © 2018 Elsevier Inc. All rights reserved.
Method of removing SO.sub.2, NO.sub.X and particles from gas mixtures using streamer corona
Mizuno, Akira; Clements, Judson S.
1987-01-01
A method for converting sulfur dioxide and/or nitrogen oxide gases to acid mist and/or particle aerosols is disclosed in which the gases are passed through a streamer corona discharge zone having electrodes of a wire-cylinder or wire-plate geometry.
[Nursing research and positivism].
de Almeida, A M; de Oliveira, E R; Garcia, T R
1996-04-01
The authors discuss the foundations of positivism; the most notable influences on the structuring of Comte's philosophy; how Comte's ideas, as both doctrine and method, came to dominate society's thought, extending beyond the 19th century and into the 20th century; and how the positivist doctrine and the empirical method influenced Nursing's conceptions of science and of the human being/environment/disease.
DOT National Transportation Integrated Search
2013-11-01
To increase RAP materials by up to 75% by binder replacement, a fractionation method was applied to the RAP stockpile by discarding RAP materials passing the No. 16 sieve. This fractionation method was effective in improving volumetric properties of ...
Method of removing cesium from steam
Carson, Jr., Neill J.; Noland, Robert A.; Ruther, Westly E.
1991-01-01
Method for removal of radioactive cesium from a hot vapor, such as high temperature steam, including the steps of passing input hot vapor containing radioactive cesium into a bed of silicate glass particles and chemically incorporating radioactive cesium in the silicate glass particles at a temperature of at least about 700.degree. F.
NASA Astrophysics Data System (ADS)
Watanabe, Koji; Matsuno, Kenichi
This paper presents a new method for simulating flows driven by a body traveling with no restriction on its motion and no limit on the size of the region. In the present method, named the 'Moving Computational Domain Method', the whole computational domain, including the bodies inside it, moves in physical space without a limit on the region size. Since the whole grid of the computational domain moves according to the movement of the body, the flow solver of the method has to be constructed on a moving grid system, and it is important for the flow solver to satisfy the physical and geometric conservation laws simultaneously on the moving grid. For this issue, the Moving-Grid Finite-Volume Method is employed as the flow solver. The present Moving Computational Domain Method makes it possible to simulate flow driven by any kind of motion of the body in a region of any size while satisfying the physical and geometric conservation laws simultaneously. In this paper, the method is applied to the flow around a high-speed car passing through a hairpin curve. The distinctive flow field driven by the car at the hairpin curve is demonstrated in detail. The results show the promising features of the method.
[PATHOGENETIC ASPECTS OF REHABILITATION OF PATIENTS AFTER CHOLECYSTECTOMY].
Efendiyeva, M T; Abdurakhmanova, A Z
2015-01-01
The aim was to investigate the efficiency of liquid synbiotics and structure-resonance electromagnetic therapy (SRMT) in patients after cholecystectomy. Ninety patients after cholecystectomy (CE) were investigated. Along with general clinical methods of investigation, patients underwent ultrasound investigation of the abdomen, biochemical blood tests, bacteriological testing of faeces, and investigation of short-chain fatty acids (SCFA) by gas-liquid chromatographic analysis. The state of the vegetative nervous system was analysed through heart rhythm variability (VHR) by the spectral analysis method, using a "Cardiac technic 4000 AD" cardiac monitor in the frame of 24-hr ECG monitoring. Quality of life (LQ) of patients after cholecystectomy was estimated with the "SF-36 Health Status Survey". Patients were divided into 3 groups, comparable according to the main clinical and functional indicators. Patients of the first group (30 people) underwent correction of dysbiosis with liquid synbiotics. Patients of the second group (30 people) underwent combined treatment with SRMT and liquid synbiotics. The control group was composed of 30 patients after cholecystectomy who received diet therapy. During the investigation, 90% of patients showed a decrease in the number and metabolic activity of the microflora and a change in the activity of anaerobic microorganisms. Analysis of heart rhythm variability displayed a relative predominance of sympathetic modulation of the rhythm against a background of an elevated ergotropic component of the total spectral power; estimation of quality of life (LQ) showed that limitation of physical activity made the most considerable contribution to the decrease of LQ among patients after cholecystectomy. After a course of liquid synbiotics and SRMT, recovery and improvement of intestinal function and improvement of all quality-of-life indicators were observed.
NASA Astrophysics Data System (ADS)
Carette, Yannick; Vanhove, Hans; Duflou, Joost
2018-05-01
Single Point Incremental Forming is a flexible process that is well-suited for small batch production and rapid prototyping of complex sheet metal parts. The distributed nature of the deformation process and the unsupported sheet imply that controlling the final accuracy of the workpiece is challenging. To improve the process limits and the accuracy of SPIF, the use of multiple forming passes has been proposed and discussed by a number of authors. Most methods use multiple intermediate models, where the previous one is strictly smaller than the next one, while gradually increasing the workpieces' wall angles. Another method that can be used is the manufacture of a smoothed-out "base geometry" in the first pass, after which more detailed features can be added in subsequent passes. In both methods, the selection of these intermediate shapes is freely decided by the user. However, their practical implementation in the production of complex freeform parts is not straightforward. The original CAD model can be manually adjusted or completely new CAD models can be created. This paper discusses an automatic method that is able to extract the base geometry from a full STL-based CAD model in an analytical way. Harmonic decomposition is used to express the final geometry as the sum of individual surface harmonics. It is then possible to filter these harmonic contributions to obtain a new CAD model with a desired level of geometric detail. This paper explains the technique and its implementation, as well as its use in the automatic generation of multi-step geometries.
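A hedged sketch of the underlying idea on a regular height map: the paper works on STL meshes decomposed into surface harmonics, whereas here a 2D FFT low-pass plays the analogous role of truncating harmonic contributions to obtain a smoothed base geometry. The grid, cutoff fraction, and toy part geometry are illustrative assumptions.

```python
import numpy as np

def base_geometry(height_map, keep_fraction=0.05):
    """Extract a smoothed 'base geometry' by keeping only low-frequency content.

    Stand-in for the harmonic filtering described above: transform the
    sampled height map, zero out high spatial frequencies, and invert.
    """
    F = np.fft.fft2(height_map)
    fy = np.fft.fftfreq(height_map.shape[0])
    fx = np.fft.fftfreq(height_map.shape[1])
    mask = (np.abs(fy)[:, None] <= keep_fraction) & (np.abs(fx)[None, :] <= keep_fraction)
    return np.real(np.fft.ifft2(F * mask))

# Illustrative part: a shallow dome (base) with a small high-frequency feature.
y, x = np.mgrid[0:128, 0:128] / 128.0
part = 10.0 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.1)
part += 0.5 * np.sin(40 * np.pi * x)           # fine detail to be removed
base = base_geometry(part)                     # first-pass target geometry
detail = part - base                           # features left for subsequent passes
```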
Rodnick, Melissa E.; Brooks, Allen F.; Hockley, Brian G.; Henderson, Bradford D.; Scott, Peter J. H.
2013-01-01
Introduction A novel one-pot method for preparing [18F]fluoromethylcholine ([18F]FCH) via in situ generation of [18F]fluoromethyl tosylate ([18F]FCH2OTs), and subsequent [18F]fluoromethylation of dimethylaminoethanol (DMAE), has been developed. Methods [18F]FCH was prepared using a GE TRACERlab FXFN, although the method should be readily adaptable to any other fluorine-18 synthesis module. Initially ditosylmethane was fluorinated to generate [18F]FCH2OTs. DMAE was then added and the reaction was heated at 120°C for 10 min to generate [18F]FCH. After this time, reaction solvent was evaporated, and the crude reaction mixture was purified by solid-phase extraction using C18-Plus and CM-Light Sep-Pak cartridges to provide [18F]FCH formulated in USP saline. The formulated product was passed through a 0.22 μm filter into a sterile dose vial, and submitted for quality control testing. Total synthesis time was 1.25 hours from end-of-bombardment. Results Typical non-decay-corrected yields of [18F]FCH prepared using this method were 91 mCi (7% non-decay corrected based upon ~1.3 Ci [18F]fluoride), and doses passed all other quality control (QC) tests. Conclusion A one-pot liquid-phase synthesis of [18F]FCH has been developed. Doses contain extremely low levels of residual DMAE (31.6 μg / 10 mL dose or ~3 ppm) and passed all other requisite QC testing, confirming their suitability for use in clinical imaging studies. PMID:23665261
Broadband unidirectional ultrasound propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sinha, Dipen N.; Pantea, Cristian
A passive, linear arrangement of a sonic crystal-based apparatus and method including a 1D sonic crystal, a nonlinear medium, and an acoustic low-pass filter, for permitting unidirectional broadband ultrasound propagation as a collimated beam for underwater, air or other fluid communication, are described. The signal to be transmitted is first used to modulate a high-frequency ultrasonic carrier wave which is directed into the sonic crystal side of the apparatus. The apparatus processes the modulated signal, whereby the original low-frequency signal exits the apparatus as a collimated beam on the side of the apparatus opposite the sonic crystal. The sonic crystal provides a bandpass acoustic filter through which the modulated high-frequency ultrasonic signal passes, and the nonlinear medium demodulates the modulated signal and recovers the low-frequency sound beam. The low-pass filter removes remaining high-frequency components, and contributes to the unidirectional property of the apparatus.
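A numerical sketch of the signal chain the apparatus implements in hardware: a low-frequency signal amplitude-modulates an ultrasonic carrier, a nonlinearity demodulates it, and a low-pass filter recovers the original signal. The frequencies, filter order, and rectifier nonlinearity are illustrative assumptions, not the device's actual parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 2_000_000                                   # sample rate (Hz), illustrative
t = np.arange(0, 0.01, 1 / fs)
signal = np.sin(2 * np.pi * 5_000 * t)           # low-frequency signal to transmit
carrier = np.sin(2 * np.pi * 200_000 * t)        # ultrasonic carrier, illustrative

modulated = (1.0 + 0.8 * signal) * carrier       # amplitude modulation

# The nonlinear medium is mimicked by rectification (a stand-in nonlinearity);
# the acoustic low-pass filter is mimicked by a Butterworth low-pass.
rectified = np.abs(modulated)
b, a = butter(4, 20_000 / (fs / 2), btype="low")
recovered = filtfilt(b, a, rectified)            # demodulated low-frequency beam

# Correlation between the transmitted and recovered signals (close to 1).
print(np.corrcoef(signal, recovered)[0, 1])
```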
Device, system and method for a sensing electrical circuit
NASA Technical Reports Server (NTRS)
Vranish, John M. (Inventor)
2009-01-01
The invention relates to a driven ground electrical circuit. A driven ground is a current-measuring ground termination to an electrical circuit with the current measured as a vector with amplification. The driven ground module may include an electric potential source V.sub.S driving an electric current through an impedance (load Z) to a driven ground. Voltage from the source V.sub.S excites the minus terminal of an operational amplifier inside the driven ground which, in turn, may react by generating an equal and opposite voltage to drive the net potential to approximately zero (effectively ground). A driven ground may also be a means of passing information via the current passing through one grounded circuit to another electronic circuit as input. It may ground one circuit, amplify the information carried in its current and pass this information on as input to the next circuit.
RF-MEMS tunable interdigitated capacitor and fixed spiral inductor for band pass filter applications
NASA Astrophysics Data System (ADS)
Bade, Ladon Ahmed; Dennis, John Ojur; Khir, M. Haris Md; Wen, Wong Peng
2016-11-01
This research presents a tunable Radio Frequency Micro Electromechanical Systems (RF-MEMS) coupled band-pass filter (BPF), which possesses a wide tuning range and is constructed using a Chebyshev fourth-degree equivalent circuit consisting of fixed inductors and interdigitated tunable capacitors. The suggested method was validated by designing a new tunable BPF with a 100% tuning range from 3.1 GHz to 4.9 GHz. The Metal Multi-User MEMS Process (Metal MUMPs) was used in the design of this band-pass filter. The aim is to achieve frequency reconfiguration and high RF efficiency in Ultra Wide Band (UWB) applications such as wireless sensor networks. The RF performance of this filter was found to be very satisfactory given its simple fabrication. Moreover, it showed a low insertion loss of around 4 dB and a high return loss of around 20 dB.
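A hedged design sketch of a fourth-order Chebyshev band-pass response over the 3.1-4.9 GHz band using SciPy's analog filter tools; the 1 dB ripple value and the ideal lumped prototype (rather than the MEMS implementation described above) are illustrative assumptions.

```python
import numpy as np
from scipy.signal import cheby1, freqs

# Fourth-order Chebyshev type-I band-pass prototype, 3.1-4.9 GHz pass band.
w1, w2 = 2 * np.pi * 3.1e9, 2 * np.pi * 4.9e9    # band edges in rad/s
b, a = cheby1(N=4, rp=1, Wn=[w1, w2], btype="bandpass", analog=True)

# Evaluate the magnitude response across 1-7 GHz.
f = np.linspace(1e9, 7e9, 2000)
_, h = freqs(b, a, worN=2 * np.pi * f)
mag_db = 20 * np.log10(np.abs(h))
print(f"gain at 4 GHz: {mag_db[np.argmin(abs(f - 4e9))]:.2f} dB")
```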
Advances of the smooth variable structure filter: square-root and two-pass formulations
NASA Astrophysics Data System (ADS)
Gadsden, S. Andrew; Lee, Andrew S.
2017-01-01
The smooth variable structure filter (SVSF) has seen significant development and research activity in recent years. It is based on sliding mode concepts, which utilize a switching gain that brings an inherent amount of stability to the estimation process. In an effort to improve upon the numerical stability of the SVSF, a square-root formulation is derived. The square-root SVSF is based on Potter's algorithm. The proposed formulation is computationally more efficient and reduces the risks of failure due to numerical instability. The new strategy is applied on target tracking scenarios for the purposes of state estimation, and the results are compared with the popular Kalman filter. In addition, the SVSF is reformulated to present a two-pass smoother based on the SVSF gain. The proposed method is applied on an aerospace flight surface actuator, and the results are compared with the Kalman-based two-pass smoother.
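For reference, a minimal NumPy sketch of Potter's square-root measurement update (the classical algorithm the square-root formulation above is based on), for a scalar measurement. This is generic Kalman-style machinery carrying the covariance as a square-root factor for numerical stability; it is not the SVSF gain itself.

```python
import numpy as np

def potter_update(x, S, z, H, R):
    """Potter's square-root measurement update for a scalar measurement.

    The state covariance is carried as a square-root factor S with P = S @ S.T,
    which is the numerical-stability idea the square-root SVSF borrows.
    """
    phi = S.T @ H                              # H is the measurement row vector
    alpha = 1.0 / (phi @ phi + R)              # inverse innovation variance
    gamma = alpha / (1.0 + np.sqrt(alpha * R))
    Sphi = S @ phi
    K = alpha * Sphi                           # Kalman gain
    x_new = x + K * (z - H @ x)
    S_new = S - gamma * np.outer(Sphi, phi)
    return x_new, S_new

# Tiny example: two-state system, position-only measurement.
x = np.array([0.0, 1.0])
S = np.eye(2)                                  # so P = I initially
H = np.array([1.0, 0.0])
x, S = potter_update(x, S, z=0.3, H=H, R=0.1)
print(x, S @ S.T)
```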
NASA Astrophysics Data System (ADS)
Traxler, Lukas; Reutterer, Bernd; Bayer, Natascha; Drauschke, Andreas
2017-04-01
To treat cataract, intraocular lenses (IOLs) are used to replace the clouded human eye lens. Due to postoperative healing processes, the IOL can displace within the eye, which can lead to deteriorated quality of vision. To test and characterize these effects, an IOL can be embedded into a model of the human eye. One informative measure is the wavefront aberration. In this paper three different setups are investigated: the typical double-pass configuration (DP), a single-pass configuration (SP1) where the measured light travels in the same direction as in DP, and a single-pass configuration (SP2) with reversed direction. All three setups correctly measure the aberrations of the eye, and SP1 is found to be the simplest to set up and align. Because of its low complexity, it is the proposed method for wavefront measurement in model eyes.
2014-01-01
Background To frame interventions, it is useful to understand context- and time-specific correlates of children’s physical activity. To do this, we need accurate assessment of these correlates. There are currently no measures that assess correlates at all levels of the social ecological model, contain items that are specifically worded for the lunchtime and/or after-school time periods, and assess correlates that have been conceptualised and defined by children. The aim of this study was to develop and evaluate the psychometric properties of the lunchtime and after-school Youth Physical Activity Survey for Specific Settings (Y-PASS) questionnaires. Methods The Y-PASS questionnaire was administered to 264 South Australian children (146 boys, 118 girls; mean age = 11.7 ± 0.93 years). Factorial structure and internal consistency of the intrapersonal, sociocultural and physical environmental/policy lunchtime and after-school subscales were examined through an exploratory factor analysis. The test-retest reliability of the Y-PASS subscales was assessed over a one-week period on a subsample of children (lunchtime Y-PASS: n = 12 boys, 12 girls, mean age of 11.6 ± 0.8 years; after-school Y-PASS: n = 9 boys, 13 girls; mean age = 11.4 ± 0.9 years). Results For the lunchtime Y-PASS, three factors were identified under each of the intrapersonal, sociocultural and physical environmental/policy subscales. For the after-school Y-PASS, six factors were identified in the intrapersonal subscale, four factors in the sociocultural subscale and seven factors in the physical environmental/policy subscale. Following item reduction, all subscales demonstrated acceptable internal consistency (Cronbach alpha = 0.78 – 0.85), except for the lunchtime sociocultural subscale (Cronbach alpha = 0.55). The factors and items demonstrated fair to very high test-retest reliability (ICC = 0.26 – 0.93). Conclusion The preliminary reliability and factorial structure evidence suggests the Y-PASS correlate questionnaires are robust tools for measuring correlates of context-specific physical activity in children. The multi-dimensional factor structure provides justification for exploring physical activity correlates from a social ecological perspective and demonstrates the importance of developing items that are context specific. Further development and refinement of the Y-PASS questionnaires is recommended, including a confirmatory factor analysis and exploring the inclusion of additional items. PMID:24885601
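A small sketch of the internal-consistency statistic reported above (Cronbach's alpha) computed from a respondent-by-item score matrix; the toy data are illustrative, not the Y-PASS responses.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy responses: 6 children answering a 4-item subscale on a 1-5 scale.
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```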
Detecting the severity of perinatal anxiety with the Perinatal Anxiety Screening Scale (PASS).
Somerville, Susanne; Byrne, Shannon L; Dedman, Kellie; Hagan, Rosemary; Coo, Soledad; Oxnam, Elizabeth; Doherty, Dorota; Cunningham, Nadia; Page, Andrew C
2015-11-01
The Perinatal Anxiety Screening Scale (PASS; Somerville et al., 2014) reliably identifies perinatal women at risk of problematic anxiety when a clinical cut-off score of 26 is used. This study aimed to identify a severity continuum of anxiety symptoms with the PASS to enhance screening, treatment and research for perinatal anxiety. Antenatal and postnatal women (n=410) recruited from the antenatal clinics and mental health services at an obstetric hospital completed the Edinburgh Postnatal Depression Scale (EPDS), the Depression, Anxiety and Stress Scale (DASS-21), the Spielberg State-Trait Anxiety Inventory (STAI), the Beck Depression Inventory II (BDI), and the PASS. The women referred to mental health services were assessed to determine anxiety diagnoses via a diagnostic interview conducted by an experienced mental health professional from the Department of Psychological Medicine - King Edward Memorial Hospital. Three normative groups for the PASS, namely minimal anxiety, mild-moderate anxiety, and severe anxiety, were identified based on the severity of anxiety indicated on the standardised scales and anxiety diagnoses. Two cut-off points for the normative groups were calculated using the Jacobson-Truax method (Jacobson and Truax, 1991) resulting in three severity ranges: 'minimal anxiety'; 'mild-moderate anxiety'; and 'severe anxiety'. The most frequent diagnoses in the study sample were adjustment disorder, mixed anxiety and depression, generalised anxiety, and post-traumatic stress disorder. This may limit the generalisability of the severity range results to other anxiety diagnoses including obsessive compulsive disorder and specific phobia. Severity ranges for the PASS add value to having a clinically validated cut-off score in the detection and monitoring of problematic perinatal anxiety. The PASS can now be used to identify risk of an anxiety disorder and the severity ranges can indicate developing risk for early referrals for further assessments, prioritisation of access to resources and tracking of clinically significant deterioration, improvement or stability in anxiety over time. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.
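A hedged sketch of one common Jacobson-Truax cut-off (criterion c, the SD-weighted point between a clinical and a normative group mean), which is the general style of calculation the abstract refers to; the exact criterion and all numbers used by the study are not reproduced here, and the example values are purely illustrative.

```python
def jacobson_truax_cutoff(mean_clinical, sd_clinical, mean_normative, sd_normative):
    """Jacobson-Truax criterion-c cut-off: the SD-weighted point between
    the clinical (dysfunctional) and normative (functional) group means."""
    return ((sd_clinical * mean_normative + sd_normative * mean_clinical)
            / (sd_clinical + sd_normative))

# Illustrative numbers only (not the study's data): a hypothetical clinical
# group scoring 35 +/- 10 and a normative group scoring 15 +/- 8 on the PASS.
print(jacobson_truax_cutoff(35, 10, 15, 8))
```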
Mortazavi, Forough; Mortazavi, Saideh S; Khosrorad, Razieh
2015-09-01
Procrastination is a common behavior which affects different aspects of life. The procrastination assessment scale-student (PASS) evaluates academic procrastination apropos its frequency and reasons. The aims of the present study were to translate, culturally adapt, and validate the Farsi version of the PASS in a sample of Iranian medical students. In this cross-sectional study, the PASS was translated into Farsi through the forward-backward method, and its content validity was thereafter assessed by a panel of 10 experts. The Farsi version of the PASS was subsequently distributed among 423 medical students. The internal reliability of the PASS was assessed using Cronbach's alpha. An exploratory factor analysis (EFA) was conducted on 18 items and then 28 items of the scale to find new models. The construct validity of the scale was assessed using both EFA and confirmatory factor analysis. The predictive validity of the scale was evaluated by calculating the correlation between the academic procrastination scores and the students' average scores in the previous semester. The corresponding reliability of the first and second parts of the scale was 0.781 and 0.861. An EFA on 18 items of the scale found 4 factors which jointly explained 53.2% of variances: The model was marginally acceptable (root mean square error of approximation [RMSEA] =0.098, standardized root mean square residual [SRMR] =0.076, χ(2) /df =4.8, comparative fit index [CFI] =0.83). An EFA on 28 items of the scale found 4 factors which altogether explained 42.62% of variances: The model was acceptable (RMSEA =0.07, SRMR =0.07, χ(2)/df =2.8, incremental fit index =0.90, CFI =0.90). There was a negative correlation between the procrastination scores and the students' average scores (r = -0.131, P =0.02). The Farsi version of the PASS is a valid and reliable tool to measure academic procrastination in Iranian undergraduate medical students.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, J; Peng, J; Xie, J
2015-06-15
Purpose: The purpose of this study is to investigate the sensitivity of planar quality assurance to MLC errors with different beam complexities in intensity-modulated radiation therapy. Methods: Sixteen patients' planar quality assurance (QA) plans in our institution were enrolled in this study, including 10 dynamic MLC (DMLC) IMRT plans measured by Portal Dosimetry and 6 static MLC (SMLC) IMRT plans measured by Mapcheck. The gamma pass rate was calculated using the vendor's software. The field numbers were 74 and 40 for DMLC and SMLC, respectively. A random error was generated and introduced to these fields. The modified gamma pass rate was calculated by comparing the original measured fluence and the modified fields' fluence. The decrease in gamma pass rate was obtained as the original gamma pass rate minus the modified gamma pass rate. Eight complexity scores were calculated in MATLAB based on the fluence and MLC sequence of these fields. The complexity scores include fractal dimension, monitor units of the field, modulation index, fluence map complexity, weighted average of field area, weighted average of field perimeter, and small aperture ratio (<5 cm{sup 2} and <50 cm{sup 2}). The Spearman's rank correlation coefficient was implemented to analyze the correlation between these scores and the decrease in gamma pass rate. Results: The relation between the decrease in gamma pass rate and field complexity was insignificant for most complexity scores. The most significant complexity score was fluence map complexity for SMLC, which had ρ = 0.4274 (p-value = 0.0063). For DMLC, the most significant complexity score was fractal dimension, which had ρ = −0.3068 (p-value = 0.0081). Conclusions: According to the preliminary results of this study, the sensitivity of the gamma pass rate was not strongly related to field complexity.
MO-FG-202-09: Virtual IMRT QA Using Machine Learning: A Multi-Institutional Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valdes, G; Scheuermann, R; Solberg, T
Purpose: To validate a machine learning approach to Virtual IMRT QA for accurately predicting gamma passing rates using different QA devices at different institutions. Methods: A Virtual IMRT QA was constructed using a machine learning algorithm based on 416 IMRT plans, in which QA measurements were performed using diode-array detectors and a 3% local/3 mm gamma criterion with a 10% threshold. An independent set of 139 IMRT measurements from a different institution, with QA data based on portal dosimetry using the same gamma index and 10% threshold, was used to further test the algorithm. Plans were characterized by 90 different complexity metrics. A weighted Poisson regression with Lasso regularization was trained to predict passing rates using the complexity metrics as input. Results: In addition to predicting passing rates with 3% accuracy for all composite plans using diode-array detectors, passing rates for portal dosimetry on a per-beam basis were predicted with an error <3.5% for 120 IMRT measurements. The remaining measurements (19) had large areas of low CU, where portal dosimetry has larger disagreement with the calculated dose and, as such, large errors were expected. These beams need to be further modeled to correct the under-response in low-dose regions. Important features selected by Lasso to predict gamma passing rates were: complete irradiated area outline (CIAO) area, jaw position, fraction of MLC leaves with gaps smaller than 20 mm or 5 mm, fraction of area receiving less than 50% of the total CU, fraction of the area receiving dose from penumbra, weighted average irregularity factor, and duty cycle, among others. Conclusion: We have demonstrated that Virtual IMRT QA can predict passing rates using different QA devices and across multiple institutions. Prediction of QA passing rates could have profound implications for the current IMRT process.
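A hedged sketch of the modeling step: predicting gamma passing rates from plan-complexity metrics with a penalized Poisson regression. scikit-learn's PoissonRegressor applies an L2 penalty rather than the Lasso (L1) penalty and sample weighting described above, so this is only a stand-in, and all data here are synthetic.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 plans, 90 complexity metrics, and the number
# of failing points per 100 evaluated points treated as a Poisson-like count.
X = rng.normal(size=(200, 90))
true_coef = np.zeros(90)
true_coef[:5] = [0.4, -0.3, 0.2, 0.3, -0.2]          # a few informative metrics
fail_counts = rng.poisson(np.exp(1.0 + X @ true_coef))

# L2-penalized Poisson regression (scikit-learn has no built-in L1 Poisson,
# so this only approximates the weighted Poisson-with-Lasso model above).
model = PoissonRegressor(alpha=0.5, max_iter=500).fit(X, fail_counts)
predicted_pass_rate = 100.0 - model.predict(X[:5])   # convert counts back to a pass rate
print(predicted_pass_rate)
```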
Method of producing a carbon coated ceramic membrane and associated product
Liu, Paul K. T.; Gallaher, George R.; Wu, Jeffrey C. S.
1993-01-01
A method of producing a carbon coated ceramic membrane including passing a selected hydrocarbon vapor through a ceramic membrane and controlling ceramic membrane exposure temperature and ceramic membrane exposure time. The method produces a carbon coated ceramic membrane of reduced pore size and modified surface properties having increased chemical, thermal and hydrothermal stability over an uncoated ceramic membrane.
SU-F-T-272: Patient Specific Quality Assurance of Prostate VMAT Plans with Portal Dosimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darko, J; Osei, E; University of Waterloo, Waterloo, ON
Purpose: To evaluate the effectiveness of using the Portal Dosimetry (PD) method for patient-specific quality assurance of prostate VMAT plans. Methods: As per institutional protocol, all VMAT plans were measured using the Varian Portal Dosimetry (PD) method. A gamma evaluation criterion of 3%-3mm with a minimum area gamma pass rate (gamma <1) of 95% is used clinically for all plans. We retrospectively evaluated the portal dosimetry results for 170 prostate patients treated with the VMAT technique. Three sets of criteria were adopted for re-evaluating the measurements: 3%-3mm, 2%-2mm and 1%-1mm. For all criteria, two areas, Field+1cm and MLC-CIAO, were analysed. To ascertain the effectiveness of the portal dosimetry technique in determining the delivery accuracy of prostate VMAT plans, 10 patients previously measured with portal dosimetry were randomly selected and their measurements repeated using the ArcCHECK method. The same criteria used in the analysis of PD were used for the ArcCHECK measurements. Results: All patient plans reviewed met the institutional criteria for area gamma pass rate. Overall, the gamma pass rate (gamma <1) decreases for the 3%-3mm, 2%-2mm and 1%-1mm criteria. For each criterion the pass rate was significantly reduced when the MLC-CIAO was used instead of Field+1cm. There was a noticeable change in sensitivity for MLC-CIAO with the 2%-2mm criterion and a much more significant reduction at 1%-1mm. Comparable results were obtained for the ArcCHECK measurements. Although differences were observed between the clockwise versus the counter-clockwise plans in both the PD and ArcCHECK measurements, this was not deemed to be statistically significant. Conclusion: This work demonstrates that the Portal Dosimetry technique can be effectively used for quality assurance of VMAT plans. Results obtained show similar sensitivity compared to ArcCHECK. To reveal certain delivery inaccuracies, the use of a combination of criteria may provide an effective way of improving the overall sensitivity of PD. Funding provided in part by the Prostate Ride for Dad, Kitchener-Waterloo, Canada.
Evanescent-wave and ambient chiral sensing by signal-reversing cavity ringdown polarimetry.
Sofikitis, Dimitris; Bougas, Lykourgos; Katsoprinakis, Georgios E; Spiliotis, Alexandros K; Loppinet, Benoit; Rakitzis, T Peter
2014-10-02
Detecting and quantifying chirality is important in fields ranging from analytical and biological chemistry to pharmacology and fundamental physics: it can aid drug design and synthesis, contribute to protein structure determination, and help detect parity violation of the weak force. Recent developments employ microwaves, femtosecond pulses, superchiral light or photoionization to determine chirality, yet the most widely used methods remain the traditional methods of measuring circular dichroism and optical rotation. However, these signals are typically very weak against larger time-dependent backgrounds. Cavity-enhanced optical methods can be used to amplify weak signals by passing them repeatedly through an optical cavity, and two-mirror cavities achieving up to 10(5) cavity passes have enabled absorption and birefringence measurements with record sensitivities. But chiral signals cancel when passing back and forth through a cavity, while the ubiquitous spurious linear birefringence background is enhanced. Even when intracavity optics overcome these problems, absolute chirality measurements remain difficult and sometimes impossible. Here we use a pulsed-laser bowtie cavity ringdown polarimeter with counter-propagating beams to enhance chiral signals by a factor equal to the number of cavity passes (typically >10(3)); to suppress the effects of linear birefringence by means of a large induced intracavity Faraday rotation; and to effect rapid signal reversals by reversing the Faraday rotation and subtracting signals from the counter-propagating beams. These features allow absolute chiral signal measurements in environments where background subtraction is not feasible: we determine optical rotation from α-pinene vapour in open air, and from maltodextrin and fructose solutions in the evanescent wave produced by total internal reflection at a prism surface. The limits of the present polarimeter, when using a continuous-wave laser locked to a stable, high-finesse cavity, should match the sensitivity of linear birefringence measurements (3 × 10(-13) radians), which is several orders of magnitude more sensitive than current chiral detection limits and is expected to transform chiral sensing in many fields.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, J; Lu, B; Yan, G
Purpose: To identify the weaknesses of the dose calculation algorithm in a treatment planning system for volumetric modulated arc therapy (VMAT) and sliding window (SW) techniques using a two-dimensional diode array. Methods: The VMAT quality assurance (QA) was implemented with a diode array using multiple partial arcs divided from a VMAT plan; each partial arc has the same segments and the original monitor units. Arc angles were less than ±30°. Multiple arcs were delivered through consecutive and repetitive gantry operation clockwise and counterclockwise. The source-to-axis distance setup with effective depths of 10 and 20 cm was used for the diode array. To figure out dose errors caused in the delivery of VMAT fields, numerous fields having the same segments as the VMAT field were irradiated using the different delivery techniques of static and step-and-shoot. The dose distributions of the SW technique were evaluated by creating split fields having fine moving steps of the multi-leaf collimator leaves. Calculated doses using the adaptive convolution algorithm were analyzed against measured ones with distance-to-agreement and dose difference criteria of 3 mm and 3%. Results: While beam delivery through the static and step-and-shoot techniques showed a passing rate of 97 ± 2%, partial arc delivery of the VMAT fields yielded a passing rate of 85%. However, when leaf motion was restricted to less than 4.6 mm/°, the passing rate improved up to 95 ± 2%. Similar passing rates were obtained for both the 10 and 20 cm effective depth setups. The calculated doses using the SW technique showed a dose difference over 7% at the final arrival point of the moving leaves. Conclusion: Error components in dynamic delivery of modulated beams were distinguished by using the suggested QA method. This partial arc method can be used for routine VMAT QA. An improved SW calculation algorithm is required to provide accurate estimated doses.
SU-E-T-392: Evaluation of Ion Chamber/film and Log File Based QA to Detect Delivery Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, C; Mason, B; Kirsner, S
2015-06-15
Purpose: Ion chamber and film (ICAF) is a method used to verify patient dose prior to treatment. More recently, log file based QA has been shown to be an alternative to measurement-based QA. In this study, we delivered VMAT plans with and without errors to determine if ICAF and/or log file based QA was able to detect the errors. Methods: Using two VMAT patients, the original treatment plan plus 7 additional plans with delivery errors introduced were generated and delivered. The erroneous plans had gantry, collimator, MLC, gantry and collimator, collimator and MLC, MLC and gantry, and gantry, collimator, and MLC errors. The gantry and collimator errors were off by 4° for one of the two arcs. The MLC error introduced was one in which the opening aperture didn't move throughout the delivery of the field. For each delivery, an ICAF measurement was made as well as a dose comparison based upon log files. The passing criteria used to evaluate the plans were an ion chamber difference of less than 5% and, for film, 90% of pixels passing the 3mm/3% gamma analysis (GA). For log file analysis, the criteria were that 90% of voxels pass the 3mm/3% 3D GA and that the beam parameters match what was in the plan. Results: Two original plans were delivered and passed both ICAF and log file based QA. Both ICAF and log file QA met the dosimetry criteria on 4 of the 12 erroneous cases analyzed (2 cases were not analyzed). For the log file analysis, all 12 erroneous plans flagged a mismatch between what was delivered and what was planned. The 8 plans that didn't meet criteria all had MLC errors. Conclusion: Our study demonstrates that log file based pre-treatment QA was able to detect small errors that may not be detected using ICAF, and both methods were able to detect larger delivery errors.
Correcting length-frequency distributions for imperfect detection
Breton, André R.; Hawkins, John A.; Winkelman, Dana L.
2013-01-01
Sampling gear selects for specific sizes of fish, which may bias length-frequency distributions that are commonly used to assess population size structure, recruitment patterns, growth, and survival. To properly correct for sampling biases caused by gear and other sources, length-frequency distributions need to be corrected for imperfect detection. We describe a method for adjusting length-frequency distributions when capture and recapture probabilities are a function of fish length, temporal variation, and capture history. The method is applied to a study involving the removal of Smallmouth Bass Micropterus dolomieu by boat electrofishing from a 38.6-km reach on the Yampa River, Colorado. Smallmouth Bass longer than 100 mm were marked and released alive from 2005 to 2010 on one or more electrofishing passes and removed on all other passes from the population. Using the Huggins mark–recapture model, we detected a significant effect of fish total length, previous capture history (behavior), year, pass, year×behavior, and year×pass on capture and recapture probabilities. We demonstrate how to partition the Huggins estimate of abundance into length frequencies to correct for these effects. Uncorrected length frequencies of fish removed from Little Yampa Canyon were negatively biased in every year by as much as 88% relative to mark–recapture estimates for the smallest length-class in our analysis (100–110 mm). Bias declined but remained high even for adult length-classes (≥200 mm). The pattern of bias across length-classes was variable across years. The percentage of unadjusted counts that were below the lower 95% confidence interval from our adjusted length-frequency estimates were 95, 89, 84, 78, 81, and 92% from 2005 to 2010, respectively. Length-frequency distributions are widely used in fisheries science and management. Our simple method for correcting length-frequency estimates for imperfect detection could be widely applied when mark–recapture data are available.
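A minimal sketch of the core correction: dividing the observed count in each length-class by its estimated capture probability (a Horvitz-Thompson-style adjustment). The length-dependent capture probabilities below are illustrative, not the Huggins-model estimates from the study.

```python
def corrected_length_frequency(counts, capture_probs):
    """Adjust length-class counts for imperfect detection: N_hat = n / p."""
    return {lc: counts[lc] / capture_probs[lc] for lc in counts}

# Observed removals and assumed capture probabilities by 10 mm length-class.
counts = {"100-110": 12, "150-160": 40, "200-210": 55}
capture_probs = {"100-110": 0.15, "150-160": 0.35, "200-210": 0.60}   # illustrative

for lc, n_hat in corrected_length_frequency(counts, capture_probs).items():
    print(f"{lc} mm: observed {counts[lc]}, corrected {n_hat:.0f}")
```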
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knill, C; Wayne State University School of Medicine, Detroit, MI; Snyder, M
Purpose: PTW's Octavius 1000 SRS array performs IMRT QA measurements with liquid-filled ionization chambers (LICs). Collection efficiencies of LICs have been shown to change during IMRT delivery as a function of LINAC pulse frequency and pulse dose, which affects QA results. In this study, two methods were developed to correct for changes in collection efficiencies during IMRT QA measurements, and the effects of these corrections on QA pass rates were compared. Methods: For the first correction, Matlab software was developed that calculates pulse frequency and pulse dose for each detector, using measurement and DICOM RT Plan files. Pulse information is converted to collection efficiency, and measurements are corrected by multiplying detector dose by the ratio of calibration to measured collection efficiency. For the second correction, the MU/min in the daily 1000 SRS calibration was chosen to match the average MU/min of the VMAT plan. The usefulness of the derived corrections was evaluated using 6MV and 10FFF SBRT RapidArc plans delivered to the OCTAVIUS 4D system using a TrueBeam equipped with an HD-MLC. Effects of the two corrections on QA results were examined by performing 3D gamma analysis comparing predicted to measured dose, with and without corrections. Results: After the complex Matlab corrections, average 3D gamma pass rates improved by [0.07%, 0.40%, 1.17%] for 6MV and [0.29%, 1.40%, 4.57%] for 10FFF using [3%/3mm, 2%/2mm, 1%/1mm] criteria. Maximum changes in gamma pass rates were [0.43%, 1.63%, 3.05%] for 6MV and [1.00%, 4.80%, 11.2%] for 10FFF using [3%/3mm, 2%/2mm, 1%/1mm] criteria. On average, pass rates with the simple daily calibration corrections were within 1% of the complex Matlab corrections. Conclusion: Ion recombination effects can potentially be clinically significant for OCTAVIUS 1000 SRS measurements, especially for higher pulse dose unflattened beams when using tighter gamma tolerances. Matching the daily 1000 SRS calibration MU/min to the average planned MU/min is a simple correction that greatly reduces ion recombination effects, improving measurement accuracy and gamma pass rates. This work was supported by PTW.
PROCESS FOR PRODUCING URANIUM HEXAFLUORIDE
Fowler, R.D.
1957-10-22
A process for the production of uranium hexafluoride from the oxides of uranium is reported. In accordance with the method, the higher oxides of uranium may be reduced to uranium dioxide (UO/sub 2/), the latter converted into uranium tetrafluoride by reaction with hydrogen fluoride, and the UF/sub 4/ converted to UF/sub 6/ by reaction with a fluorinating agent. The UO/sub 3/ or U/sub 3/O/sub 8/ is placed in a reaction chamber in a copper boat or tray enclosed in a copper oven, and heated to 500 to 650 deg C while hydrogen gas is passed through the oven. The oven is then swept clean of hydrogen and the water vapor formed by means of nitrogen, and then, while continuing to maintain the temperature between 400 and 600 deg C, anhydrous hydrogen fluoride is passed through. After completion of the conversion to uranium tetrafluoride, the temperature of the reaction chamber is lowered to about 400 deg C, and elemental fluorine is used as the fluorinating agent for the conversion of UF/sub 4/ into UF/sub 6/. The fluorine gas is passed into the chamber, and the UF/sub 6/ formed passes out and is delivered to a condenser.
METHOD FOR THE PREPARATION OF BINARY NITROGEN-FLUORINE COMPOUNDS
Frazer, J.W.
1962-05-01
A process is given for preparing binary nitrogen-fluorine compounds, in particular tetrafluorohydrazine (N/sub 2/F/sub 4/) and difluorodiazine (N/sub 2/F/sub 2/). The process comprises subjecting gaseous nitrogen trifluoride to the action of an alternating-current electrical glow discharge in the presence of mercury vapors. By the action of the electrical discharge, the nitrogen trifluoride is converted into a gaseous product comprising a mixture of tetrafluorohydrazine, the isomers of difluorodiazine, and other impurities including nitrogen, nitrogen oxides, silicon tetrafluoride, and unreacted nitrogen trifluoride. The gaseous products and impurities are passed into a trap maintained at about -196 deg C to freeze out the desired products and impurities, with the exception of nitrogen gas, which passes off from the trap and is discarded. Subsequently, the desired products and remaining impurities are warmed to the gaseous state and passed through a silica gel trap maintained at about -55 deg C, wherein the desired tetrafluorohydrazine and difluorodiazine products are retained while the remaining gaseous impurities pass through. The desired products are volatilized from the silica gel trap by heating and then separated by gas chromatography into the respective tetrafluorohydrazine and difluorodiazine products. (A.E.C.)
Liu, Xingbin; Mei, Wenbo; Du, Huiqian
2018-02-13
In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed using a multi-scale joint decomposition framework (MJDF) and a shearing filter (SF). The MJDF, constructed with a gradient minimization smoothing filter (GMSF) and a Gaussian low-pass filter (GLF), is used to decompose source images into low-pass layers, edge layers, and detail layers at multiple scales. In order to highlight the detail information in the fused image, the edge layer and the detail layer at each scale are combined with weights into a detail-enhanced layer. As a directional filter is effective in capturing salient information, the SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. A visual saliency map-based fusion rule is designed for fusing the low-pass layers, and the sum of standard deviations is used as the activity level measurement for directional coefficient fusion. The final fusion result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method, with shift-invariance, directional selectivity, and a detail-enhanced property, is efficient in preserving and enhancing detail information of multimodality medical images. Graphical abstract: The detailed implementation of the proposed medical image fusion algorithm.
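A minimal sketch of the decomposition step described above, using a Gaussian low-pass filter to split an image into per-scale detail layers and a final low-pass layer; the gradient-minimization smoothing, shearing filter, and fusion rules are omitted, and the sigma values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_decompose(image, sigmas=(1.0, 2.0, 4.0)):
    """Split an image into per-scale detail layers plus a final low-pass layer."""
    layers, current = [], np.asarray(image, dtype=float)
    for sigma in sigmas:
        low = gaussian_filter(current, sigma)    # GLF-style smoothing step
        layers.append(current - low)             # detail captured at this scale
        current = low
    return layers, current                       # (detail layers, base layer)

image = np.random.default_rng(1).random((64, 64))
details, base = multiscale_decompose(image)
reconstructed = base + sum(details)
print(np.allclose(reconstructed, image))          # the decomposition is lossless
```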
Static Recrystallization Behavior of Z12CN13 Martensite Stainless Steel
NASA Astrophysics Data System (ADS)
Luo, Min; Zhou, Bing; Li, Rong-bin; Xu, Chun; Guo, Yan-hui
2017-09-01
In order to improve the hot workability and provide proper hot-forming parameters for forging Z12CN13 martensitic stainless steel for simulation and production, the static recrystallization behavior has been studied by double-pass hot compression tests. The effects of deformation temperature, strain rate, and inter-pass time on the static recrystallization fraction, determined by the 2% offset method, are extensively studied. The results indicate that appropriately increasing the inter-pass time, the deformation temperature, and the strain rate can increase the fraction of static recrystallization. At temperatures of 1050-1150 °C, inter-pass times of 30-100 s, and strain rates of 0.1-5 s-1, the static recrystallization behavior is pronounced. In addition, the kinetics of the static recrystallization behavior of Z12CN13 steel have been established, and the activation energy of static recrystallization is 173.030 kJ/mol. The substructure and precipitates have been studied by TEM. The results reveal that the nucleation mode is bulging at grain boundaries. Undissolved precipitates such as MoNi3 and Fe3C have a retarding effect on the recrystallization kinetics. This effect is weaker than the accelerating effect of deformation temperature.
Laverty, Anthony; Mindell, Jenny; Millett, Chris
2016-01-01
Objectives. We investigated associations between having a bus pass, enabling free local bus travel across the United Kingdom for state pension–aged people, and physical activity, gait speed, and adiposity. Methods. We used data on 4650 bus pass–eligible people (aged ≥ 62 years) at wave 6 (2012–2013) of the English Longitudinal Study of Ageing in regression analyses. Results. Bus pass holders were more likely to be female (odds ratio [OR] = 1.67; 95% confidence interval [CI] = 1.38, 2.02; P < .001), retired (OR = 2.65; 95% CI = 2.10, 3.35; P < .001), without access to a car (OR = 2.78; 95% CI = 1.83, 4.21; P < .001), to use public transportation (OR = 10.26; 95% CI = 8.33, 12.64; P < .001), and to be physically active (OR = 1.43; 95% CI = 1.12, 1.84; P = .004). Female pass holders had faster gait speed (b = 0.06 meters per second; 95% CI = 0.02, 0.09; P = .001), a body mass index 1 kilogram per meter squared lower (b = –1.20; 95% CI = –1.93, –0.46; P = .001), and waist circumference 3 centimeters smaller (b = –3.32; 95% CI = –5.02, –1.62; P < .001) than women without a pass. Conclusions. Free bus travel for older people helps make transportation universally accessible, including for those at risk for social isolation. Those with a bus pass are more physically active. Among women in particular, the bus pass is associated with healthier aging. PMID:26562118
Wavenumber-domain separation of rail contribution to pass-by noise
NASA Astrophysics Data System (ADS)
Zea, Elias; Manzari, Luca; Squicciarini, Giacomo; Feng, Leping; Thompson, David; Arteaga, Ines Lopez
2017-11-01
In order to counteract the problem of railway noise and its environmental impact, passing trains in Europe must be tested in accordance with noise legislation that demands the quantification of the noise generated by the vehicle alone. However, for frequencies between about 500 Hz and 1600 Hz, it has been found that a significant part of the measured noise is generated by the rail, which behaves like a distributed source and radiates plane waves as a result of the contact with the train's wheels. Thus the need arises for separating the rail contribution to the pass-by noise in that particular frequency range. To this end, the present paper introduces a wavenumber-domain filtering technique, referred to as wave signature extraction, which requires a line microphone array parallel to the rail and two accelerometers on the rail in the vertical and lateral directions. The novel contributions of this research are: (i) the introduction and application of wavenumber (or plane-wave) filters to pass-by data measured with a microphone array located in the near-field of the rail, and (ii) the design of such filters without prior information on the structural properties of the rail. The latter is achieved by recording the array pressure, as well as the rail vibrations with the accelerometers, before and after the train pass-by. The performance of the proposed method is investigated with a set of pass-by measurements performed in Germany. The results seem to be promising when compared to reference data from TWINS, and the largest discrepancies occur above 1600 Hz and are attributed to plane waves radiated by the rail that so far have not been accounted for in the design of the filters.
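A schematic sketch of the wavenumber-domain filtering idea: transform array pressure data p(x, t) to the frequency-wavenumber domain, retain only the band of components whose phase speed along the array corresponds to waves travelling along the rail, and transform back. The array geometry, the assumed rail-wave speed range, and the rectangular filter are illustrative assumptions; the paper designs its filters from rail accelerometer data instead.

```python
import numpy as np

def wavenumber_filter(p_xt, dx, dt, c_min=300.0, c_max=2000.0):
    """Keep only frequency-wavenumber components with phase speed in [c_min, c_max].

    p_xt: array pressure, shape (n_mics, n_samples); dx: mic spacing (m);
    dt: sample interval (s). A rectangular pass region is used for simplicity.
    """
    P = np.fft.fft2(p_xt)
    k = 2 * np.pi * np.fft.fftfreq(p_xt.shape[0], d=dx)     # wavenumber axis (rad/m)
    w = 2 * np.pi * np.fft.fftfreq(p_xt.shape[1], d=dt)     # angular frequency (rad/s)
    K, W = np.meshgrid(k, w, indexing="ij")
    with np.errstate(divide="ignore", invalid="ignore"):
        c_phase = np.abs(W) / np.abs(K)                     # phase speed of each bin
    mask = (c_phase >= c_min) & (c_phase <= c_max)
    return np.real(np.fft.ifft2(P * mask))

# Synthetic example: 16 microphones at 0.25 m spacing, 2 kHz sampling for 0.5 s,
# with a single wave travelling along the array at 1000 m/s.
x = np.arange(16)[:, None] * 0.25
t = np.arange(1000)[None, :] * 5e-4
p = np.sin(2 * np.pi * 800 * (t - x / 1000.0))
filtered = wavenumber_filter(p, dx=0.25, dt=5e-4)
```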
NASA Astrophysics Data System (ADS)
Vijayanand, V. D.; Vasudevan, M.; Ganesan, V.; Parameswaran, P.; Laha, K.; Bhaduri, A. K.
2016-06-01
Creep deformation and rupture behavior of single-pass and dual-pass 316LN stainless steel (SS) weld joints fabricated by an autogenous activated tungsten inert gas welding process have been assessed by performing metallography, hardness, and conventional and impression creep tests. The fusion zone of the single-pass joint consisted of columnar zones adjacent to the base metals with a central equiaxed zone, which have been modified extensively by the thermal cycle of the second pass in the dual-pass joint. The equiaxed zone in the single-pass joint, as well as in the second pass of the dual-pass joint, displayed the lowest hardness in the joints. In the dual-pass joint, the equiaxed zone of the first pass had hardness comparable to the columnar zone. The hardness variations in the joints influenced the creep deformation. The equiaxed and columnar zones in the first pass of the dual-pass joint were more creep resistant than those of the second pass. Both joints possessed lower creep rupture life than the base metal. However, the creep rupture life of the dual-pass joint was about twofold that of the single-pass joint. Creep failure in the single-pass joint occurred in the central equiaxed fusion zone, whereas creep cavitation that originated in the second pass was blocked at the weld pass interface. The additional interface and the strength variation between the two passes in the dual-pass joint provide more restraint to creep deformation and crack propagation in the fusion zone, resulting in an increase in the creep rupture life of the dual-pass joint over the single-pass joint. Furthermore, the differences in content, morphology, and distribution of delta ferrite in the fusion zone of the joints favor greater creep cavitation resistance in the dual-pass joint than in the single-pass joint, enhancing its creep rupture life.
Agreement between methods of measurement of mean aortic wall thickness by MRI.
Rosero, Eric B; Peshock, Ronald M; Khera, Amit; Clagett, G Patrick; Lo, Hao; Timaran, Carlos
2009-03-01
To assess the agreement between three methods of calculation of mean aortic wall thickness (MAWT) using magnetic resonance imaging (MRI). High-resolution MRI of the infrarenal abdominal aorta was performed on 70 subjects with a history of coronary artery disease who were part of a multi-ethnic population-based sample. MAWT was calculated as the mean distance between the adventitial and luminal aortic boundaries using three different methods: average distance at four standard positions (AWT-4P), average distance at 100 automated positions (AWT-100P), and using a mathematical computation derived from the total vessel and luminal areas (AWT-VA). Bland-Altman plots and Passing-Bablok regression analyses were used to assess agreement between methods. Bland-Altman analyses demonstrated a positive bias of 3.02+/-7.31% between the AWT-VA and the AWT-4P methods, and of 1.76+/-6.82% between the AWT-100P and the AWT-4P methods. Passing-Bablok regression analyses demonstrated constant bias between the AWT-4P method and the other two methods. Proportional bias was, however, not evident among the three methods. MRI methods of measurement of MAWT using a limited number of positions of the aortic wall systematically underestimate the MAWT value compared with the method that calculates MAWT from the vessel areas. Copyright (c) 2009 Wiley-Liss, Inc.
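A minimal sketch of the Bland-Altman comparison used above, assuming two arrays of per-subject mean-wall-thickness estimates (for example AWT-4P and AWT-VA). The percent-difference convention follows the abstract, and the 1.96-SD limits of agreement are the usual textbook choice rather than anything stated in the paper.

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement methods (percent scale)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff_pct = 100.0 * (a - b) / ((a + b) / 2.0)   # per-subject percent difference
    bias = diff_pct.mean()
    sd = diff_pct.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```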
PyPanda: a Python package for gene regulatory network reconstruction
van IJzendoorn, David G.P.; Glass, Kimberly; Quackenbush, John; Kuijjer, Marieke L.
2016-01-01
Summary: PANDA (Passing Attributes between Networks for Data Assimilation) is a gene regulatory network inference method that uses message-passing to integrate multiple sources of ‘omics data. PANDA was originally coded in C++. In this application note we describe PyPanda, the Python version of PANDA. PyPanda runs considerably faster than the C++ version and includes additional features for network analysis. Availability and implementation: The open source PyPanda Python package is freely available at http://github.com/davidvi/pypanda. Contact: mkuijjer@jimmy.harvard.edu or d.g.p.van_ijzendoorn@lumc.nl PMID:27402905
PyPanda: a Python package for gene regulatory network reconstruction.
van IJzendoorn, David G P; Glass, Kimberly; Quackenbush, John; Kuijjer, Marieke L
2016-11-01
PANDA (Passing Attributes between Networks for Data Assimilation) is a gene regulatory network inference method that uses message-passing to integrate multiple sources of 'omics data. PANDA was originally coded in C++. In this application note we describe PyPanda, the Python version of PANDA. PyPanda runs considerably faster than the C++ version and includes additional features for network analysis. The open source PyPanda Python package is freely available at http://github.com/davidvi/pypanda CONTACT: mkuijjer@jimmy.harvard.edu or d.g.p.van_ijzendoorn@lumc.nl. © The Author 2016. Published by Oxford University Press.
NASA Astrophysics Data System (ADS)
Vermeer, M.
1981-07-01
A program was designed to replace AIMLASER for the generation of aiming predictions, to achieve a major saving in computing time, and to keep the program small enough for use even on small systems. An approach was adopted that incorporates numerical integration of the orbit through a pass, limiting the computation of osculating elements to only one point per pass. The numerical integration method, which is fourth order in Δt in the cumulative error after a given time lapse, is presented. Algorithms are explained, and a flowchart and listing of the program are provided.
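The abstract does not specify which fourth-order scheme is used; as a generic illustration of a method whose cumulative error is fourth order in Δt, here is a classical Runge-Kutta step applied to unperturbed two-body satellite motion. The gravitational parameter `mu` is the standard Earth value and is purely illustrative, not taken from the program described above.

```python
import numpy as np

def rk4_step(f, y, t, dt):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def two_body(t, y, mu=398600.4418):
    """Point-mass two-body dynamics; y = [x, y, z, vx, vy, vz] in km and km/s."""
    r, v = y[:3], y[3:]
    return np.concatenate([v, -mu * r / np.linalg.norm(r) ** 3])
```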
Correlation of coastal water turbidity and current circulation with ERTS-1 and Skylab imagery
NASA Technical Reports Server (NTRS)
Klemas, V.; Otley, M.; Philpot, W.; Wethe, C.; Rogers, R.; Shah, N.
1974-01-01
The article reviews investigations of current circulation patterns, suspended sediment concentration, coastal frontal systems, and waste disposal plumes based on visual interpretation and digital analysis of ERTS-1 and Skylab/EREP imagery. Data on conditions in the Delaware Bay area were obtained from 10 ERTS-1 passes and one Skylab pass, with simultaneous surface and airborne sensing. The current patterns and sediments observed by ERTS-1 correlated well with ground-based observations. Methods are suggested which would make it possible to identify certain pollutants and sediment types from multispectral scanner data.
n-body simulations using message passing parallel computers.
NASA Astrophysics Data System (ADS)
Grama, A. Y.; Kumar, V.; Sameh, A.
The authors present new parallel formulations of the Barnes-Hut method for n-body simulations on message passing computers. These parallel formulations partition the domain efficiently incurring minimal communication overhead. This is in contrast to existing schemes that are based on sorting a large number of keys or on the use of global data structures. The new formulations are augmented by alternate communication strategies which serve to minimize communication overhead. The impact of these communication strategies is experimentally studied. The authors report on experimental results obtained from an astrophysical simulation on an nCUBE2 parallel computer.
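As background for the formulations described above, here is a minimal sequential sketch of the Barnes-Hut opening-angle (theta) criterion, assuming a tree has already been built. The `Node` fields, the softening length, and the parameter values are illustrative assumptions; the parallel domain partitioning and message-passing strategies that are the subject of the paper are not shown.

```python
import numpy as np

class Node:
    """One cell of an already-built Barnes-Hut tree (fields are illustrative)."""
    def __init__(self, size, mass, com, children=()):
        self.size = size                      # side length of the cell
        self.mass = mass                      # total mass contained in the cell
        self.com = np.asarray(com, float)     # centre of mass of the cell
        self.children = list(children)        # sub-cells; empty for a leaf

def acceleration(node, pos, theta=0.5, eps=1e-3, G=1.0):
    """Approximate gravitational acceleration at `pos` due to the bodies under `node`."""
    d = node.com - pos
    r = np.linalg.norm(d)
    if r == 0.0:                              # skip the contribution of the body itself
        return np.zeros_like(d)
    if not node.children or node.size / r < theta:
        # Leaf, or cell far enough away: treat it as a single softened point mass.
        return G * node.mass * d / (r ** 2 + eps ** 2) ** 1.5
    acc = np.zeros_like(d)
    for child in node.children:               # otherwise open the cell and recurse
        acc += acceleration(child, pos, theta, eps, G)
    return acc
```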
Schmieding, E.G.; Ruehle, A.E.
1961-04-11
A method is given for extracting metal values from an aqueous feed wherein the aqueous feed is passed countercurrent to an organic extractant through a plurality of decanting zones and a portion of the mixture contained in each decanting zone is recycled through a mixing zone associated therewith. The improvement consists of passing more solvent from the top of one decanting zone to the bottom of the preceding decanting zone than can rise to the top thereof and recycling that portion of the solvent that does not rise to the top back to the first named decanting zone through its associated mixing zone.
Groh, Edward F.; Cassidy, Dale A.
1978-01-01
A thermocouple lead or other small diameter wire, cable or tube is passed through a thin material such as sheet metal and sealed thereinto by drawing complementary longitudinally angled, laterally rounded grooves terminating at their base ends in a common plane in both sides of the thin material, with shearing occurring at the deep end faces thereof to form a rounded opening in the thin material substantially perpendicular to the plane of the thin material, passing a thermocouple lead or similar object through the opening so formed, and sealing the opening with a sealant which simultaneously bonds the lead to the thin material.
PLAB and UK graduates' performance on MRCP(UK) and MRCGP examinations: data linkage study.
McManus, I C; Wakeford, Richard
2014-04-17
To assess whether international medical graduates passing the two examinations set by the Professional and Linguistic Assessments Board (PLAB1 and PLAB2) of the General Medical Council (GMC) are equivalent to UK graduates at the end of the first foundation year of medical training (F1), as the GMC requires, and if not, to assess what changes in the PLAB pass marks might produce equivalence. Data linkage of GMC PLAB performance data with data from the Royal Colleges of Physicians and the Royal College of General Practitioners on performance of PLAB graduates and UK graduates at the MRCP(UK) and MRCGP examinations. Doctors in training for internal medicine or general practice in the United Kingdom. 7829, 5135, and 4387 PLAB graduates on their first attempt at MRCP(UK) Part 1, Part 2, and PACES assessments from 2001 to 2012 compared with 18,532, 14,094, and 14,376 UK graduates taking the same assessments; 3160 PLAB1 graduates making their first attempt at the MRCGP AKT during 2007-12 compared with 14,235 UK graduates; and 1411 PLAB2 graduates making their first attempt at the MRCGP CSA during 2010-12 compared with 6935 UK graduates. Performance at MRCP(UK) Part 1, Part 2, and PACES assessments, and MRCGP AKT and CSA assessments in relation to performance on PLAB1 and PLAB2 assessments, as well as to International English Language Testing System (IELTS) scores. MRCP(UK), MRCGP, and PLAB results were analysed as marks relative to the pass mark at the first attempt. PLAB1 marks were a valid predictor of MRCP(UK) Part 1, MRCP(UK) Part 2, and MRCGP AKT (r=0.521, 0.390, and 0.490; all P<0.001). PLAB2 marks correlated with MRCP(UK) PACES and MRCGP CSA (r=0.274, 0.321; both P<0.001). PLAB graduates had significantly lower MRCP(UK) and MRCGP assessments (Glass's Δ=0.94, 0.91, 1.40, 1.01, and 1.82 for MRCP(UK) Part 1, Part 2, and PACES and MRCGP AKT and CSA), and were more likely to fail assessments and to progress more slowly than UK medical graduates. IELTS scores correlated significantly with later performance, multiple regression showing that the effect of PLAB1 (β=0.496) was much stronger than the effect of IELTS (β=0.086). Changes to PLAB pass marks that would result in international medical graduate and UK medical graduate equivalence were assessed in two ways. Method 1 adjusted PLAB pass marks to equate median performance of PLAB and UK graduates. Method 2 divided PLAB graduates into 12 equally spaced groups according to PLAB performance, and compared these with mean performance of graduates from individual UK medical schools, assessing which PLAB groups were equivalent in MRCP(UK) and MRCGP performance to UK graduates. The two methods produced similar results. To produce equivalent performance on the MRCP(UK) and MRCGP examinations, the pass mark for PLAB1 would require raising by about 27 marks (13%) and for PLAB2 by about 15-16 marks (20%) above the present standard. PLAB is a valid assessment of medical knowledge and clinical skills, correlating well with performance at MRCP(UK) and MRCGP. PLAB graduates' knowledge and skills at MRCP(UK) and MRCGP are over one standard deviation below those of UK graduates, although differences in training quality cannot be taken into account. Equivalent performance in MRCP(UK) and MRCGP would occur if the pass marks of PLAB1 and PLAB2 were raised considerably, but that would also reduce the pass rate, with implications for medical workforce planning. Increasing IELTS requirements would have less impact on equivalence than raising PLAB pass marks.
PLAB and UK graduates’ performance on MRCP(UK) and MRCGP examinations: data linkage study
Wakeford, Richard
2014-01-01
Objectives To assess whether international medical graduates passing the two examinations set by the Professional and Linguistic Assessments Board (PLAB1 and PLAB2) of the General Medical Council (GMC) are equivalent to UK graduates at the end of the first foundation year of medical training (F1), as the GMC requires, and if not, to assess what changes in the PLAB pass marks might produce equivalence. Design Data linkage of GMC PLAB performance data with data from the Royal Colleges of Physicians and the Royal College of General Practitioners on performance of PLAB graduates and UK graduates at the MRCP(UK) and MRCGP examinations. Setting Doctors in training for internal medicine or general practice in the United Kingdom. Participants 7829, 5135, and 4387 PLAB graduates on their first attempt at MRCP(UK) Part 1, Part 2, and PACES assessments from 2001 to 2012 compared with 18 532, 14 094, and 14 376 UK graduates taking the same assessments; 3160 PLAB1 graduates making their first attempt at the MRCGP AKT during 2007-12 compared with 14 235 UK graduates; and 1411 PLAB2 graduates making their first attempt at the MRCGP CSA during 2010-12 compared with 6935 UK graduates. Main outcome measures Performance at MRCP(UK) Part 1, Part 2, and PACES assessments, and MRCGP AKT and CSA assessments in relation to performance on PLAB1 and PLAB2 assessments, as well as to International English Language Testing System (IELTS) scores. MRCP(UK), MRCGP, and PLAB results were analysed as marks relative to the pass mark at the first attempt. Results PLAB1 marks were a valid predictor of MRCP(UK) Part 1, MRCP(UK) Part 2, and MRCGP AKT (r=0.521, 0.390, and 0.490; all P<0.001). PLAB2 marks correlated with MRCP(UK) PACES and MRCGP CSA (r=0.274, 0.321; both P<0.001). PLAB graduates had significantly lower MRCP(UK) and MRCGP assessments (Glass’s Δ=0.94, 0.91, 1.40, 1.01, and 1.82 for MRCP(UK) Part 1, Part 2, and PACES and MRCGP AKT and CSA), and were more likely to fail assessments and to progress more slowly than UK medical graduates. IELTS scores correlated significantly with later performance, multiple regression showing that the effect of PLAB1 (β=0.496) was much stronger than the effect of IELTS (β=0.086). Changes to PLAB pass marks that would result in international medical graduate and UK medical graduate equivalence were assessed in two ways. Method 1 adjusted PLAB pass marks to equate median performance of PLAB and UK graduates. Method 2 divided PLAB graduates into 12 equally spaced groups according to PLAB performance, and compared these with mean performance of graduates from individual UK medical schools, assessing which PLAB groups were equivalent in MRCP(UK) and MRCGP performance to UK graduates. The two methods produced similar results. To produce equivalent performance on the MRCP(UK) and MRCGP examinations, the pass mark for PLAB1 would require raising by about 27 marks (13%) and for PLAB2 by about 15-16 marks (20%) above the present standard. Conclusions PLAB is a valid assessment of medical knowledge and clinical skills, correlating well with performance at MRCP(UK) and MRCGP. PLAB graduates’ knowledge and skills at MRCP(UK) and MRCGP are over one standard deviation below those of UK graduates, although differences in training quality cannot be taken into account. Equivalent performance in MRCP(UK) and MRCGP would occur if the pass marks of PLAB1 and PLAB2 were raised considerably, but that would also reduce the pass rate, with implications for medical workforce planning.
Increasing IELTS requirements would have less impact on equivalence than raising PLAB pass marks. PMID:24742473
Cluster analysis of multiple planetary flow regimes
NASA Technical Reports Server (NTRS)
Mo, Kingtse; Ghil, Michael
1988-01-01
A modified cluster analysis method developed for the classification of quasi-stationary events into a few planetary flow regimes and for the examination of transitions between these regimes is described. The method was applied first to a simple deterministic model and then to a 500-mbar data set for the Northern Hemisphere (NH), for which cluster analysis was carried out in the subspace of the first seven empirical orthogonal functions (EOFs). Stationary clusters were found in the low-frequency band of more than 10 days, while transient clusters were found in the band-pass frequency window between 2.5 and 6 days. In the low-frequency band, three pairs of clusters were determined by EOFs 1, 2, and 3, respectively; they exhibited well-known regional features, such as blocking, the Pacific/North American pattern, and wave trains. Both model and low-pass data exhibited strong bimodality.
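As a simplified stand-in for the modified cluster analysis described above, the snippet below clusters anomaly maps in the subspace of their first seven EOFs (principal components) using ordinary k-means. The data shapes, the number of clusters, and the use of plain k-means are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

maps = np.random.randn(1200, 500)                # (time samples, grid points) anomaly maps
pcs = PCA(n_components=7).fit_transform(maps)    # project onto the first 7 EOFs
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(pcs)
```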
Method And Apparatus For Detecting Chemical Binding
Warner, Benjamin P.; Havrilla, George J.; Miller, Thomasin C.; Wells, Cyndi A.
2005-02-22
The method for screening binding between a target binder and potential pharmaceutical chemicals involves sending a solution (preferably an aqueous solution) of the target binder through a conduit to a size exclusion filter, the target binder being too large to pass through the size exclusion filter, and then sending a solution of one or more potential pharmaceutical chemicals (preferably an aqueous solution) through the same conduit to the size exclusion filter after target binder has collected on the filter. The potential pharmaceutical chemicals are small enough to pass through the filter. Afterwards, x-rays are sent from an x-ray source to the size exclusion filter, and if the potential pharmaceutical chemicals form a complex with the target binder, the complex produces an x-ray fluorescence signal having an intensity that indicates that a complex has formed.
Method and apparatus for detecting chemical binding
Warner, Benjamin P [Los Alamos, NM; Havrilla, George J [Los Alamos, NM; Miller, Thomasin C [Los Alamos, NM; Wells, Cyndi A [Los Alamos, NM
2007-07-10
The method for screening binding between a target binder and potential pharmaceutical chemicals involves sending a solution (preferably an aqueous solution) of the target binder through a conduit to a size exclusion filter, the target binder being too large to pass through the size exclusion filter, and then sending a solution of one or more potential pharmaceutical chemicals (preferably an aqueous solution) through the same conduit to the size exclusion filter after target binder has collected on the filter. The potential pharmaceutical chemicals are small enough to pass through the filter. Afterwards, x-rays are sent from an x-ray source to the size exclusion filter, and if the potential pharmaceutical chemicals form a complex with the target binder, the complex produces an x-ray fluorescence signal having an intensity that indicates that a complex has formed.
Two-stage dehydration of sugars
Holladay, Johnathan E [Kennewick, WA; Hu, Jianli [Kennewick, WA; Wang, Yong [Richland, WA; Werpy, Todd A [West Richland, WA
2009-11-10
The invention includes methods for producing dianhydrosugar alcohol by providing an acid catalyst within a reactor and passing a starting material through the reactor at a first temperature. At least a portion of the starting material is converted to a monoanhydrosugar isomer during passage through the column. The monoanhydrosugar is subjected to a second temperature, which is greater than the first, to produce a dianhydrosugar. The invention includes a method of producing isosorbide. An initial feed stream containing sorbitol is fed into a continuous reactor containing an acid catalyst at a temperature of less than 120 °C. The residence time for the reactor is less than or equal to about 30 minutes. Sorbitol converted to 1,4-sorbitan in the continuous reactor is subsequently provided to a second reactor and is dehydrated at a temperature of at least 120 °C to produce isosorbide.
Monitoring the soot emissions of passing cars.
Kurniawan, A; Schmidt-Ott, A
2006-03-15
We report on the first application of a novel fast on-road sensing method for measurement of particulate emissions of individual passing passenger cars. The study was motivated by the shift of interest from gases to particles in connection with strong adverse health effects. The results correspond closely to findings by Beaton et al. (Science, May 19, 1995) for gaseous hydrocarbon and CO emissions: a small percentage of "superpolluters" (here 5%) account for a high percentage (here 43%) of the pollution (here elemental carbon). We estimate that up to 50% of the particulate emissions of vehicles could be avoided on the basis of the present legislation if on-road monitoring were applied to enforce maintenance. Our fast sensing method for particles is based on photoelectron emission from the emitted airborne soot particles in combination with a CO2 sensor delivering a reference.
Freeze chromatography method and apparatus
Scott, C.D.
1987-04-16
A freeze chromatography method and apparatus are provided which enable separation of the solutes contained in a sample. The apparatus includes an annular column construction comprising cylindrical inner and outer surfaces defining an annular passage therebetween. One of the surfaces is heated and the other cooled while passing an eluent through the annular passageway so that the eluent in contact with the cooled surface freezes and forms a frozen eluent layer thereon. A mixture of solutes dissolved in eluent is passed through the annular passageway in contact with the frozen layer so that the sample solutes in the mixture will tend to migrate either toward or away from the frozen layer. The rate at which the mixture flows through the annular passageway is controlled so that the distribution of the sample solutes approaches that at equilibrium and thus a separation between the sample solutes occurs. 3 figs.
Judgmental Standard Setting Using a Cognitive Components Model.
ERIC Educational Resources Information Center
McGinty, Dixie; Neel, John H.
A new standard setting approach is introduced, called the cognitive components approach. Like the Angoff method, the cognitive components method generates minimum pass levels (MPLs) for each item. In both approaches, the item MPLs are summed for each judge, then averaged across judges to yield the standard. In the cognitive components approach,…
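To make the shared aggregation step concrete, here is a toy numeric sketch of how item-level minimum pass levels (MPLs) are summed within each judge and then averaged across judges to yield the standard; the judge and item values below are invented for illustration only.

```python
import numpy as np

# Rows are judges, columns are items; entries are the MPLs each judge assigned.
mpls = np.array([
    [0.6, 0.8, 0.5, 0.7],   # judge 1
    [0.7, 0.9, 0.4, 0.6],   # judge 2
    [0.5, 0.7, 0.6, 0.8],   # judge 3
])
per_judge_standard = mpls.sum(axis=1)   # item MPLs summed for each judge
cut_score = per_judge_standard.mean()   # averaged across judges -> the standard
print(per_judge_standard, cut_score)
```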
ERIC Educational Resources Information Center
Grimes, Catherine Leimkuhler; White, Harold B., III
2015-01-01
There are barriers to adoption of research-based teaching methods. Professional development workshops may inform faculty of these methods, but effective adoption often does not follow. In addition, newly-minted research-active faculty are often overwhelmed by the many new responsibilities (grant writing, group management, laboratory setup,…
The U.S. Department of Agriculture Automated Multiple-Pass Method accurately assesses sodium intakes
USDA-ARS?s Scientific Manuscript database
Accurate and practical methods to monitor sodium intake of the U.S. population are critical given current sodium reduction strategies. While the gold standard for estimating sodium intake is the 24 hour urine collection, few studies have used this biomarker to evaluate the accuracy of a dietary ins...
ERIC Educational Resources Information Center
Palka, Sean
2015-01-01
This research details a methodology designed for creating content in support of various phishing prevention tasks including live exercises and detection algorithm research. Our system uses probabilistic context-free grammars (PCFG) and variable interpolation as part of a multi-pass method to create diverse and consistent phishing email content on…
Method of Obtaining Uniform Coatings on Graphite
Campbell, I. E.
1961-04-01
A method is given for obtaining uniform carbide coatings on graphite bodies. According to the invention a metallic halide in vapor form is passed over the graphite body under such conditions of temperature and pressure that the halide reacts with the graphite to form a coating of the metal carbide on the surface of the graphite.
METHOD OF OBTAINING UNIFORM COATINGS ON GRAPHITE
Campbell, I.E.
1961-04-01
A method is given for obtaining uniform carbide coatings on graphite bodies. According to the invention a metallic halide in vapor form is passed over the graphite body under such conditions of temperature and pressure that the halide reacts with the graphite to form a coating of the metal carbide on the surface of the graphite.
Meng, Qinggang; Deng, Su; Huang, Hongbin; Wu, Yahui; Badii, Atta
2017-01-01
Heterogeneous information networks (e.g. bibliographic networks and social media networks) that consist of multiple interconnected objects are ubiquitous. Clustering analysis is an effective method to understand the semantic information and interpretable structure of the heterogeneous information networks, and it has attracted the attention of many researchers in recent years. However, most studies assume that heterogeneous information networks usually follow some simple schemas, such as bi-typed networks or star network schema, and they can only cluster one type of object in the network each time. In this paper, a novel clustering framework is proposed based on sparse tensor factorization for heterogeneous information networks, which can cluster multiple types of objects simultaneously in a single pass without any network schema information. The types of objects and the relations between them in the heterogeneous information networks are modeled as a sparse tensor. The clustering issue is modeled as an optimization problem, which is similar to the well-known Tucker decomposition. Then, an Alternating Least Squares (ALS) algorithm and a feasible initialization method are proposed to solve the optimization problem. Based on the tensor factorization, we simultaneously partition different types of objects into different clusters. The experimental results on both synthetic and real-world datasets have demonstrated that our proposed clustering framework, STFClus, can model heterogeneous information networks efficiently and can outperform state-of-the-art clustering algorithms as a generally applicable single-pass clustering method for heterogeneous network which is network schema agnostic. PMID:28245222
Wu, Jibing; Meng, Qinggang; Deng, Su; Huang, Hongbin; Wu, Yahui; Badii, Atta
2017-01-01
Heterogeneous information networks (e.g. bibliographic networks and social media networks) that consist of multiple interconnected objects are ubiquitous. Clustering analysis is an effective method to understand the semantic information and interpretable structure of the heterogeneous information networks, and it has attracted the attention of many researchers in recent years. However, most studies assume that heterogeneous information networks usually follow some simple schemas, such as bi-typed networks or star network schema, and they can only cluster one type of object in the network each time. In this paper, a novel clustering framework is proposed based on sparse tensor factorization for heterogeneous information networks, which can cluster multiple types of objects simultaneously in a single pass without any network schema information. The types of objects and the relations between them in the heterogeneous information networks are modeled as a sparse tensor. The clustering issue is modeled as an optimization problem, which is similar to the well-known Tucker decomposition. Then, an Alternating Least Squares (ALS) algorithm and a feasible initialization method are proposed to solve the optimization problem. Based on the tensor factorization, we simultaneously partition different types of objects into different clusters. The experimental results on both synthetic and real-world datasets have demonstrated that our proposed clustering framework, STFClus, can model heterogeneous information networks efficiently and can outperform state-of-the-art clustering algorithms as a generally applicable single-pass clustering method for heterogeneous network which is network schema agnostic.
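The following sketch illustrates the general Tucker-style factorization via alternating least squares (higher-order orthogonal iteration) that the clustering framework builds on. It is a plain dense NumPy illustration under stated assumptions, not the authors' STFClus implementation, which works on sparse tensors and derives cluster memberships from its own factor formulation.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: bring `mode` to the front and flatten the remaining axes.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def multi_mode_product(T, factors, skip=None):
    # Project T onto every factor (transposed), optionally skipping one mode.
    G = T
    for m, U in enumerate(factors):
        if m == skip:
            continue
        G = np.moveaxis(np.tensordot(U.T, G, axes=(1, m)), 0, m)
    return G

def tucker_hooi(T, ranks, n_iter=50):
    # Initialise factors with leading left singular vectors of each unfolding (HOSVD).
    factors = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
               for n, r in enumerate(ranks)]
    for _ in range(n_iter):
        for n in range(T.ndim):
            G = multi_mode_product(T, factors, skip=n)
            U, _, _ = np.linalg.svd(unfold(G, n), full_matrices=False)
            factors[n] = U[:, :ranks[n]]
    core = multi_mode_product(T, factors)        # core tensor with shape `ranks`
    return core, factors

# Illustrative use on a random 3-mode relation tensor; a crude cluster label for the
# objects of mode 0 can be read off the rows of the corresponding factor matrix.
X = np.random.rand(30, 20, 10)
core, factors = tucker_hooi(X, ranks=(4, 3, 2))
labels_mode0 = np.abs(factors[0]).argmax(axis=1)
```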
Tojo, H; Yamada, I; Yasuhara, R; Ejiri, A; Hiratsuka, J; Togashi, H; Yatsuka, E; Hatae, T; Funaba, H; Hayashi, H; Takase, Y; Itami, K
2016-09-01
This paper evaluates the accuracy of electron temperature measurements and relative transmissivities of double-pass Thomson scattering diagnostics. The electron temperature (Te) is obtained from the ratio of signals from a double-pass scattering system, then relative transmissivities are calculated from the measured Te and intensity of the signals. How accurate the values are depends on the electron temperature (Te) and scattering angle (θ), and therefore the accuracy of the values was evaluated experimentally using the Large Helical Device (LHD) and the Tokyo spherical tokamak-2 (TST-2). Analyzing the data from the TST-2 indicates that a high Te and a large scattering angle (θ) yield accurate values. Indeed, the errors for scattering angle θ = 135° are approximately half of those for θ = 115°. The method of determining the Te in a wide Te range spanning over two orders of magnitude (0.01-1.5 keV) was validated using the experimental results of the LHD and TST-2. A simple method to provide relative transmissivities, which include inputs from collection optics, vacuum window, optical fibers, and polychromators, is also presented. The relative errors were less than approximately 10%. Numerical simulations also indicate that the Te measurements are valid under harsh radiation conditions. This method to obtain Te can be considered for the design of Thomson scattering systems where there is high-performance plasma that generates harsh radiation environments.
Pang, Shaoning; Ban, Tao; Kadobayashi, Youki; Kasabov, Nikola K
2012-04-01
To adapt linear discriminant analysis (LDA) to real-world applications, there is a pressing need to equip it with an incremental learning ability to integrate knowledge presented by one-pass data streams, a functionality to join multiple LDA models to make knowledge sharing between independent learning agents more efficient, and a forgetting functionality to avoid reconstruction of the overall discriminant eigenspace caused by some irregular changes. To this end, we introduce two adaptive LDA learning methods: LDA merging and LDA splitting. These provide the benefits of online learning from one-pass data streams, class separability identical to that of the batch learning method, high efficiency for knowledge sharing due to the condensed knowledge representation of the eigenspace model, and more favorable time and storage costs than traditional approaches under common application conditions. These properties are validated by experiments on a benchmark face image data set. Through a case study on the application of the proposed method to multiagent cooperative learning and system alternation of a face recognition system, we further clarify the adaptability of the proposed methods to complex dynamic learning tasks.
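A minimal sketch of the kind of model-joining operation described above, assuming each LDA model is summarized by per-class counts, means, and scatter matrices: the statistics of matching classes are pooled exactly, and the discriminant eigenspace is then recomputed. This is a generic illustration, not the authors' eigenspace merging or splitting algorithm.

```python
import numpy as np

def merge_class_stats(n1, m1, S1, n2, m2, S2):
    """Pool the count, mean, and scatter matrix of one class seen by two LDA models."""
    n = n1 + n2
    m = (n1 * m1 + n2 * m2) / n
    d = m1 - m2
    S = S1 + S2 + (n1 * n2 / n) * np.outer(d, d)   # exact pooled-scatter update
    return n, m, S

def lda_directions(stats):
    """Recompute discriminant directions from per-class (count, mean, scatter) triples."""
    counts = np.array([n for n, _, _ in stats], float)
    means = np.vstack([m for _, m, _ in stats])
    grand = counts @ means / counts.sum()
    Sw = sum(S for _, _, S in stats)                                     # within-class scatter
    Sb = sum(n * np.outer(m - grand, m - grand) for n, m, _ in stats)    # between-class scatter
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))                # Sw^{-1} Sb eigenproblem
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order]
```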
Estimation of Time Scales in Unsteady Flows in a Turbomachinery Rig
NASA Technical Reports Server (NTRS)
Lewalle, Jacques; Ashpis, David E.
2004-01-01
Time scales in turbulent and transitional flow provide a link between experimental data and modeling, both in terms of physical content and for quantitative assessment. The problem of interest here is the definition of time scales in an unsteady flow. Using representative samples of data from a GEAE low-pressure turbine experiment in a low-speed research turbine facility with wake-induced transition, we document several methods to extract dominant frequencies and compare the results. We show that conventional methods of time scale evaluation (based on autocorrelation functions and on Fourier spectra) and wavelet-based methods provide similar information when applied to stationary signals. We also show the greater flexibility of the wavelet-based methods when dealing with intermittent or strongly modulated data, as are encountered in transitioning boundary layers and in flows with unsteady forcing associated with wake passing. We define phase-averaged dominant frequencies that characterize the turbulence associated with freestream conditions and with the passing wakes downstream of a rotor. The relevance of these results for modeling is discussed in the paper.
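For reference, here is a minimal sketch of the two conventional estimates mentioned above: a dominant frequency from the Fourier power spectrum and an integral time scale from the autocorrelation function. The wavelet-based, phase-averaged variants used for the unsteady data are not reproduced, and the zero-crossing truncation of the autocorrelation is one common convention among several.

```python
import numpy as np

def dominant_frequency(x, dt):
    """Frequency of the largest power-spectrum peak (zero-frequency bin excluded)."""
    x = x - x.mean()
    f = np.fft.rfftfreq(x.size, d=dt)
    power = np.abs(np.fft.rfft(x)) ** 2
    return f[np.argmax(power[1:]) + 1]

def integral_time_scale(x, dt):
    """Integral of the normalized autocorrelation up to its first zero crossing."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf = acf / acf[0]
    below = np.flatnonzero(acf <= 0)
    cutoff = below[0] if below.size else acf.size
    return acf[:cutoff].sum() * dt
```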
Computational Methods to Work as First-Pass Filter in Deleterious SNP Analysis of Alkaptonuria
Magesh, R.; George Priya Doss, C.
2012-01-01
A major challenge in the analysis of human genetic variation is to distinguish functional from nonfunctional SNPs. Discovering these functional SNPs is one of the main goals of modern genetics and genomics studies. There is a need to effectively and efficiently identify functionally important nsSNPs which may be deleterious or disease causing and to identify their molecular effects. The prediction of the phenotype of nsSNPs by computational analysis may provide a good way to explore the function of nsSNPs and their relationship with susceptibility to disease. In this context, we surveyed and compared variation databases along with in silico prediction programs to assess the effects of deleterious functional variants on protein functions. We also applied these methods as a first-pass filter to identify the deleterious substitutions worth pursuing for further experimental research. In this analysis, we used existing computational methods to explore the mutation-structure-function relationship in the HGD gene causing alkaptonuria. PMID:22606059
Maza, Itay; Caspi, Inbal; Zviran, Asaf; Chomsky, Elad; Rais, Yoach; Viukov, Sergey; Geula, Shay; Buenrostro, Jason D; Weinberger, Leehee; Krupalnik, Vladislav; Hanna, Suhair; Zerbib, Mirie; Dutton, James R; Greenleaf, William J; Massarwa, Rada; Novershtern, Noa; Hanna, Jacob H
2015-07-01
Somatic cells can be transdifferentiated to other cell types without passing through a pluripotent state by ectopic expression of appropriate transcription factors. Recent reports have proposed an alternative transdifferentiation method in which fibroblasts are directly converted to various mature somatic cell types by brief expression of the induced pluripotent stem cell (iPSC) reprogramming factors Oct4, Sox2, Klf4 and c-Myc (OSKM) followed by cell expansion in media that promote lineage differentiation. Here we test this method using genetic lineage tracing for expression of endogenous Nanog and Oct4 and for X chromosome reactivation, as these events mark acquisition of pluripotency. We show that the vast majority of reprogrammed cardiomyocytes or neural stem cells obtained from mouse fibroblasts by OSKM-induced 'transdifferentiation' pass through a transient pluripotent state, and that their derivation is molecularly coupled to iPSC formation mechanisms. Our findings underscore the importance of defining trajectories during cell reprogramming by various methods.
Maza, Itay; Caspi, Inbal; Zviran, Asaf; Chomsky, Elad; Rais, Yoach; Viukov, Sergey; Geula, Shay; Buenrostro, Jason D.; Weinberger, Leehee; Krupalnik, Vladislav; Hanna, Suhair; Zerbib, Mirie; Dutton, James R.; Greenleaf, William J.; Massarwa, Rada; Novershtern, Noa; Hanna, Jacob H.
2015-01-01
Somatic cells can be transdifferentiated to other cell types without passing through a pluripotent state by ectopic expression of appropriate transcription factors. Recent reports have proposed an alternative transdifferentiation method in which fibroblasts are directly converted to various mature somatic cell types by brief expression of the induced pluripotent stem cell (iPSC) reprogramming factors Oct4, Sox2, Klf4 and c-Myc (OSKM) followed by cell expansion in media that promote lineage differentiation. Here we test this method using genetic lineage tracing for expression of endogenous Nanog and Oct4 and for X chromosome reactivation, as these events mark acquisition of pluripotency. We show that the vast majority of reprogrammed cardiomyocytes or neural stem cells obtained from mouse fibroblasts by OSKM-induced transdifferentiation pass through a transient pluripotent state, and that their derivation is molecularly coupled to iPSC formation mechanisms. Our findings underscore the importance of defining trajectories during cell reprogramming by different methods. PMID:26098448
Method of separating short half-life radionuclides from a mixture of radionuclides
Bray, Lane A.; Ryan, Jack L.
1999-01-01
The present invention is a method of removing an impurity of plutonium, lead or a combination thereof from a mixture of radionuclides that contains the impurity and at least one parent radionuclide. The method has the steps of (a) insuring that the mixture is a hydrochloric acid mixture; (b) oxidizing the acidic mixture and specifically oxidizing the impurity to its highest oxidation state; and (c) passing the oxidized mixture through a chloride form anion exchange column whereupon the oxidized impurity absorbs to the chloride form anion exchange column and the 229Th or 227Ac "cow" radionuclide passes through the chloride form anion exchange column. The plutonium is removed for the purpose of obtaining other alpha emitting radionuclides in a highly purified form suitable for medical therapy. In addition to plutonium, lead, iron, cobalt, copper, uranium, and other metallic cations that form chloride anionic complexes that may be present in the mixture are removed from the mixture on the chloride form anion exchange column.
Method of separating short half-life radionuclides from a mixture of radionuclides
Bray, L.A.; Ryan, J.L.
1999-03-23
The present invention is a method of removing an impurity of plutonium, lead or a combination thereof from a mixture of radionuclides that contains the impurity and at least one parent radionuclide. The method has the steps of (a) insuring that the mixture is a hydrochloric acid mixture; (b) oxidizing the acidic mixture and specifically oxidizing the impurity to its highest oxidation state; and (c) passing the oxidized mixture through a chloride form anion exchange column whereupon the oxidized impurity absorbs to the chloride form anion exchange column and the 229Th or 227Ac "cow" radionuclide passes through the chloride form anion exchange column. The plutonium is removed for the purpose of obtaining other alpha emitting radionuclides in a highly purified form suitable for medical therapy. In addition to plutonium, lead, iron, cobalt, copper, uranium, and other metallic cations that form chloride anionic complexes that may be present in the mixture are removed from the mixture on the chloride form anion exchange column. 8 figs.
Chang, Son-A; Won, Jong Ho; Kim, HyangHee; Oh, Seung-Ha; Tyler, Richard S.; Cho, Chang Hyun
2018-01-01
Background and Objectives It is important to understand the frequency region of cues used, and not used, by cochlear implant (CI) recipients. Speech and environmental sound recognition by individuals with CI and normal-hearing (NH) individuals was measured. Gradients were also computed to evaluate the pattern of change in identification performance with respect to the low-pass or high-pass filtering cutoff frequencies. Subjects and Methods Frequency-limiting effects were implemented in the acoustic waveforms by passing the signals through low-pass filters (LPFs) or high-pass filters (HPFs) with seven different cutoff frequencies. Identification of Korean vowels and consonants produced by a male and a female speaker and of environmental sounds was measured. Crossover frequencies were determined for each identification test, where the LPF and HPF conditions show identical identification scores. Results CI and NH subjects showed changes in identification performance in a similar manner as a function of cutoff frequency for the LPF and HPF conditions, suggesting that the degraded spectral information in the acoustic signals may similarly constrain the identification performance for both subject groups. However, CI subjects were generally less efficient than NH subjects in using the limited spectral information for speech and environmental sound identification, due to the inefficient coding of acoustic cues through the CI sound processors. Conclusions This finding provides vital information in Korean on how the frequency information received in speech and environmental sounds through a CI processor differs from that received with normal hearing. PMID:29325391
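A hedged sketch of the frequency-limiting manipulation described above: low-pass or high-pass filtering a speech waveform with a zero-phase Butterworth filter at one of several cutoff frequencies. The sampling rate, filter order, and cutoffs are placeholders, not the study's exact settings.

```python
from scipy.signal import butter, sosfiltfilt

def frequency_limit(x, fs, cutoff_hz, kind="lowpass"):
    """Apply a zero-phase low-pass or high-pass Butterworth filter to waveform x."""
    sos = butter(6, cutoff_hz, btype=kind, fs=fs, output="sos")
    return sosfiltfilt(sos, x)

# e.g. frequency_limit(signal, fs=16000, cutoff_hz=1000, kind="highpass")
```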
Oud, Lavi
2016-06-10
BACKGROUND The reported mortality among women with pregnancy-associated severe sepsis (PASS) has been considerably lower than among severely septic patients in the general population, with the difference being attributed to the younger age and lack of chronic illness among the women with PASS. However, no comparative studies were reported to date between patients with PASS and age-similar women with severe sepsis not associated with pregnancy (NPSS). MATERIAL AND METHODS We used the Texas Inpatient Public Use Data File to compare the crude and adjusted hospital mortality between women with severe sepsis, aged 20-34 years, with and without pregnancy-associated hospitalizations during 2001-2010, following exclusion of those with reported chronic comorbidities, as well as alcohol and drug abuse. RESULTS Crude hospital mortality among PASS vs. NPSS hospitalizations was lower for the whole cohort (6.7% vs. 14.1% [p<0.0001]) and those with ≥3 organ failures (17.6% vs. 33.2% [p=0.0100]). Adjusted PASS mortality (odds ratio [95% CI]) was 0.57 (0.38-0.86) [p=0.0070]. CONCLUSIONS Hospital mortality was unexpectedly markedly and consistently lower among women with severe sepsis associated with pregnancy, as compared with contemporaneous, age-similar women with severe sepsis not associated with pregnancy, without reported chronic comorbidities. Further studies are warranted to examine the sources of the observed differences and to corroborate our findings.
Oud, Lavi
2016-01-01
Background The reported mortality among women with pregnancy-associated severe sepsis (PASS) has been considerably lower than among severely septic patients in the general population, with the difference being attributed to the younger age and lack of chronic illness among the women with PASS. However, no comparative studies were reported to date between patients with PASS and age-similar women with severe sepsis not associated with pregnancy (NPSS). Material/Methods We used the Texas Inpatient Public Use Data File to compare the crude and adjusted hospital mortality between women with severe sepsis, aged 20–34 years, with and without pregnancy-associated hospitalizations during 2001–2010, following exclusion of those with reported chronic comorbidities, as well as alcohol and drug abuse. Results Crude hospital mortality among PASS vs. NPSS hospitalizations was lower for the whole cohort (6.7% vs. 14.1% [p<0.0001]) and those with ≥3 organ failures (17.6% vs. 33.2% [p=0.0100]). Adjusted PASS mortality (odds ratio [95% CI]) was 0.57 (0.38–0.86) [p=0.0070]. Conclusions Hospital mortality was unexpectedly markedly and consistently lower among women with severe sepsis associated with pregnancy, as compared with contemporaneous, age-similar women with severe sepsis not associated with pregnancy, without reported chronic comorbidities. Further studies are warranted to examine the sources of the observed differences and to corroborate our findings. PMID:27286326
Method of producing a carbon coated ceramic membrane and associated product
Liu, P.K.T.; Gallaher, G.R.; Wu, J.C.S.
1993-11-16
A method is described for producing a carbon coated ceramic membrane including passing a selected hydrocarbon vapor through a ceramic membrane and controlling ceramic membrane exposure temperature and ceramic membrane exposure time. The method produces a carbon coated ceramic membrane of reduced pore size and modified surface properties having increased chemical, thermal and hydrothermal stability over an uncoated ceramic membrane. 12 figures.
Method and system for continuous atomic layer deposition
Elam, Jeffrey W.; Yanguas-Gil, Angel; Libera, Joseph A.
2017-03-21
A system and method for continuous atomic layer deposition. The system and method include a housing, a moving bed that passes through the housing, and a plurality of precursor gases with associated input ports; the amount of precursor gases, the position of the input ports, and the relative velocity of the moving bed and carrier gases enable exhaustion of the precursor gases at available reaction sites.
NASA Technical Reports Server (NTRS)
Warren, Andrew H.; Arelt, Joseph E.; Lalicata, Anthony L.; Rogers, Karen M.
1993-01-01
A method of efficient and automated thermal-structural processing of very large space structures is presented. The method interfaces the finite element and finite difference techniques. It also results in a pronounced reduction of the quantity of computations, computer resources and manpower required for the task, while assuring the desired accuracy of the results.
Percutaneous transgastric computed tomography-guided biopsy of the pancreas using large needles
Tseng, Hsiuo-Shan; Chen, Chia-Yuen; Chan, Wing P; Chiang, Jen-Huey
2009-01-01
AIM: To assess the safety, yield and clinical utility of percutaneous transgastric computed tomography (CT)-guided biopsy of pancreatic tumors using large needles, in selected patients. METHODS: We reviewed 34 CT-guided biopsies in patients with a pancreatic mass, of whom 24 (71%) had a direct path to the mass without passing through a major organ. The needle passed through the liver in one case (3%). Nine passes (26%) were made through the stomach. These nine transgastric biopsies, which used a coaxial technique (i.e. a 17-gauge coaxial introducer needle and an 18-gauge biopsy needle), were the basis of this study. Immediate and late follow-up CT images to detect complications were obtained. RESULTS: Tumor tissue was obtained in the nine pancreatic biopsies, and histologic specimens for diagnosis were obtained in all cases. One patient, who had a rare sarcomatoid carcinoma, received a second biopsy. One patient had a complication of transient pneumoperitoneum but no subjective complaints. An immediate imaging study and clinical follow-up detected neither hemorrhage nor peritonitis. No delayed procedure-related complication was seen during the survival period of our patients. CONCLUSION: Pancreatic biopsy can be obtained by a transgastric route using a large needle as an alternative method, without complications of peritonitis or bleeding. PMID:20014462
Principles for problem aggregation and assignment in medium scale multiprocessors
NASA Technical Reports Server (NTRS)
Nicol, David M.; Saltz, Joel H.
1987-01-01
One of the most important issues in parallel processing is the mapping of workload to processors. This paper considers a large class of problems having a high degree of potential fine grained parallelism, and execution requirements that are either not predictable, or are too costly to predict. The main issues in mapping such a problem onto medium scale multiprocessors are those of aggregation and assignment. We study a method of parameterized aggregation that makes few assumptions about the workload. The mapping of aggregate units of work onto processors is uniform, and exploits locality of workload intensity to balance the unknown workload. In general, a finer aggregate granularity leads to a better balance at the price of increased communication/synchronization costs; the aggregation parameters can be adjusted to find a reasonable granularity. The effectiveness of this scheme is demonstrated on three model problems: an adaptive one-dimensional fluid dynamics problem with message passing, a sparse triangular linear system solver on both a shared memory and a message-passing machine, and a two-dimensional time-driven battlefield simulation employing message passing. Using the model problems, the tradeoffs are studied between balanced workload and the communication/synchronization costs. Finally, an analytical model is used to explain why the method balances workload and minimizes the variance in system behavior.
Tang, Tricia S.; Funnell, Martha M.; Gillard, Marylou; Nwankwo, Robin; Heisler, Michele
2013-01-01
Objective This study determined the feasibility of training adults with diabetes to lead diabetes self-management support (DSMS) interventions, examined whether participants can achieve the criteria required for successful graduation, and assessed perceived efficacy of and satisfaction with the peer leader training (PLT) program. Methods We recruited nine African-American adults with diabetes for a 46-hour PLT pilot program conducted over 12 weeks. The program utilized multiple instructional methods, reviewed key diabetes education content areas, and provided communication, facilitation, and behavior change skills training. Participants were given three attempts to achieve the pre-established competency criteria for diabetes knowledge, empowerment-based facilitation, active listening, and self-efficacy. Results On the first attempt 75%, 75%, 63%, and 75% passed diabetes knowledge, empowerment-based facilitation, active listening, and self-efficacy, respectively. Those participants who did not pass on first attempt passed on the second attempt. Participants were highly satisfied with the program length, balance between content and skills development, and preparation for leading support activities. Conclusion Findings suggest that it is feasible to train and graduate peer leaders with the necessary knowledge and skills to facilitate DSMS interventions. Practical Implications With proper training, peer support may be a viable model for translating and sustaining DSMS interventions into community-based settings. PMID:21292425
Turner, Terry D.; Beller, Laurence S.; Clark, Michael L.; Klingler, Kerry M.
1997-01-01
A method of processing a test sample to concentrate an analyte in the sample from a solvent in the sample includes: a) boiling the test sample containing the analyte and solvent in a boiling chamber to a temperature greater than or equal to the solvent boiling temperature and less than the analyte boiling temperature to form a rising sample vapor mixture; b) passing the sample vapor mixture from the boiling chamber to an elongated primary separation tube, the separation tube having internal sidewalls and a longitudinal axis, the longitudinal axis being angled between vertical and horizontal and thus having an upper region and a lower region; c) collecting the physically transported liquid analyte on the internal sidewalls of the separation tube; and d) flowing the collected analyte along the angled internal sidewalls of the separation tube to and past the separation tube lower region. The invention also includes passing a turbulence inducing wave through a vapor mixture to separate physically transported liquid second material from vaporized first material. Apparatus are also disclosed for effecting separations. Further disclosed is a fluidically powered liquid test sample withdrawal apparatus for withdrawing a liquid test sample from a test sample container and for cleaning the test sample container.
Turner, T.D.; Beller, L.S.; Clark, M.L.; Klingler, K.M.
1997-10-14
A method of processing a test sample to concentrate an analyte in the sample from a solvent in the sample includes: (a) boiling the test sample containing the analyte and solvent in a boiling chamber to a temperature greater than or equal to the solvent boiling temperature and less than the analyte boiling temperature to form a rising sample vapor mixture; (b) passing the sample vapor mixture from the boiling chamber to an elongated primary separation tube, the separation tube having internal sidewalls and a longitudinal axis, the longitudinal axis being angled between vertical and horizontal and thus having an upper region and a lower region; (c) collecting the physically transported liquid analyte on the internal sidewalls of the separation tube; and (d) flowing the collected analyte along the angled internal sidewalls of the separation tube to and past the separation tube lower region. The invention also includes passing a turbulence inducing wave through a vapor mixture to separate physically transported liquid second material from vaporized first material. Apparatus is also disclosed for effecting separations. Further disclosed is a fluidically powered liquid test sample withdrawal apparatus for withdrawing a liquid test sample from a test sample container and for cleaning the test sample container. 8 figs.
Method of frequency dependent correlations: investigating the variability of total solar irradiance
NASA Astrophysics Data System (ADS)
Pelt, J.; Käpylä, M. J.; Olspert, N.
2017-04-01
Context. This paper contributes to the field of modeling and hindcasting of the total solar irradiance (TSI) based on different proxy data that extend further back in time than the TSI that is measured from satellites. Aims: We introduce a simple method to analyze persistent frequency-dependent correlations (FDCs) between the time series and use these correlations to hindcast missing historical TSI values. We try to avoid arbitrary choices of the free parameters of the model by computing them using an optimization procedure. The method can be regarded as a general tool for pairs of data sets, where correlating and anticorrelating components can be separated into non-overlapping regions in frequency domain. Methods: Our method is based on low-pass and band-pass filtering with a Gaussian transfer function combined with de-trending and computation of envelope curves. Results: We find a major controversy between the historical proxies and satellite-measured targets: a large variance is detected between the low-frequency parts of targets, while the low-frequency proxy behavior of different measurement series is consistent with high precision. We also show that even though the rotational signal is not strongly manifested in the targets and proxies, it becomes clearly visible in FDC spectrum. A significant part of the variability can be explained by a very simple model consisting of two components: the original proxy describing blanketing by sunspots, and the low-pass-filtered curve describing the overall activity level. The models with the full library of the different building blocks can be applied to hindcasting with a high level of confidence, Rc ≈ 0.90. The usefulness of these models is limited by the major target controversy. Conclusions: The application of the new method to solar data allows us to obtain important insights into the different TSI modeling procedures and their capabilities for hindcasting based on the directly observed time intervals.
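As a rough sketch of the named building blocks (detrended series, Gaussian transfer function, envelope curves), the snippet below band-passes a series by multiplying its spectrum with a Gaussian transfer function and computes the envelope with a Hilbert transform. The centre frequency `f0`, width `sigma_f`, and sampling step `dt` are placeholders, not the paper's tuned parameters, and the optimization of those parameters is not reproduced.

```python
import numpy as np
from scipy.signal import hilbert

def gaussian_bandpass(x, dt, f0, sigma_f):
    """Band-pass x around f0 with a Gaussian transfer function in the frequency domain."""
    f = np.fft.rfftfreq(x.size, d=dt)
    H = np.exp(-0.5 * ((f - f0) / sigma_f) ** 2)
    return np.fft.irfft(np.fft.rfft(x) * H, n=x.size)

def envelope(x):
    """Envelope curve of a (filtered) series via the analytic signal."""
    return np.abs(hilbert(x))
```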
Stable method for estimation of laser ranging
NASA Astrophysics Data System (ADS)
Kurbasova, G. S.; Rykhlova, L. V.
A noise-immune variant of the least squares method was developed for the preliminary analysis of laser-ranging data. The method takes into account the influence of many physical phenomena accompanying the generation of the laser-ranging signals, their passage through the optical channel, their propagation in the atmosphere, their scattering on the corner reflector, and their registration. The method was demonstrated on the example of Lageos observations made with the Intercosmos laser radar.
Single-pass memory system evaluation for multiprogramming workloads
NASA Technical Reports Server (NTRS)
Conte, Thomas M.; Hwu, Wen-Mei W.
1990-01-01
Modern memory systems are composed of levels of cache memories, a virtual memory system, and a backing store. Varying more than a few design parameters and measuring the performance of such systems has traditionally been constrained by the high cost of simulation. Recently introduced models of cache performance reduce the cost of simulation, but at the expense of accuracy of performance prediction. Stack-based methods predict performance accurately using one pass over the trace for all cache sizes, but these techniques have been limited to fully associative organizations. This paper presents a stack-based method of evaluating the performance of cache memories using a recurrence/conflict model for the miss ratio. Unlike previous work, the performance of realistic cache designs, such as direct-mapped caches, is predicted by the method. The method also includes a new approach to the problem of the effects of multiprogramming. This new technique separates the characteristics of the individual program from those of the workload. The recurrence/conflict method is shown to be practical, general, and powerful by comparing its performance to that of a popular traditional cache simulator. The authors expect that the availability of such a tool will have a large impact on future architectural studies of memory systems.
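A toy single-pass stack-distance (LRU) computation, illustrating the classic stack-based idea that such methods build on; the recurrence/conflict extension to direct-mapped caches and the multiprogramming treatment described above are not shown.

```python
def stack_distances(trace):
    """Return the LRU stack distance of each reference in one pass (None = first touch)."""
    stack, dists = [], []
    for addr in trace:
        if addr in stack:
            depth = stack.index(addr)     # reuse distance = depth in the LRU stack
            stack.pop(depth)
            dists.append(depth)
        else:
            dists.append(None)            # first reference: infinite distance
        stack.insert(0, addr)             # move (or push) to the top of the stack
    return dists

# A fully associative LRU cache of capacity C hits exactly when the distance is < C.
print(stack_distances(['a', 'b', 'a', 'c', 'b', 'a']))   # [None, None, 1, None, 2, 2]
```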
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pokorny, M.; Rebicek, J.; Klemes, J.
2015-10-15
This paper presents a rapid non-destructive method that provides information on the anisotropic internal structure of nanofibrous layers. A laser beam of a wavelength of 632.8 nm is directed at and passes through a nanofibrous layer prepared by electrostatic spinning. Information about the structural arrangement of nanofibers in the layer is directly visible in the form of a diffraction image formed on a projection screen or obtained from measured intensities of the laser beam passing through the sample which are determined by the dependency of the angle of the main direction of polarization of the laser beam on the axismore » of alignment of nanofibers in the sample. Both optical methods were verified on Polyvinyl alcohol (PVA) nanofibrous layers (fiber diameter of 470 nm) with random, single-axis aligned and crossed structures. The obtained results match the results of commonly used methods which apply the analysis of electron microscope images. The presented simple method not only allows samples to be analysed much more rapidly and without damaging them but it also makes possible the analysis of much larger areas, up to several square millimetres, at the same time.« less
De Champlain, Andre F; Boulais, Andre-Philippe; Dallas, Andrew
2016-01-01
The aim of this research was to compare different methods of calibrating multiple choice question (MCQ) and clinical decision making (CDM) components for the Medical Council of Canada's Qualifying Examination Part I (MCCQEI) based on item response theory. Our data consisted of test results from 8,213 first-time applicants to the MCCQEI in the spring and fall 2010 and 2011 test administrations. The data set contained several thousand multiple choice items and several hundred CDM cases. Four dichotomous calibrations were run using BILOG-MG 3.0. All 3 mixed item format (dichotomous MCQ responses and polytomous CDM case scores) calibrations were conducted using PARSCALE 4. The 2-PL model had identical numbers of items with chi-square values at or below a Type I error rate of 0.01 (83/3,499 or 0.02). In all 3 polytomous models, whether the MCQs were anchored or concurrently run with the CDM cases, results suggest very poor fit. All IRT abilities estimated from dichotomous calibration designs correlated very highly with each other. IRT-based pass-fail rates were extremely similar, not only across calibration designs and methods, but also with regard to the actual reported decision to candidates. The largest difference noted in pass rates was 4.78%, which occurred between the mixed-format concurrent 2-PL graded response model (pass rate = 80.43%) and the dichotomous anchored 1-PL calibrations (pass rate = 85.21%). Simpler calibration designs with dichotomized items should be implemented. The dichotomous calibrations provided better fit of the item response matrix than more complex, polytomous calibrations.
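For readers unfamiliar with the dichotomous models mentioned above, the following toy sketch shows the 2-PL item response function and a pass/fail decision from an ability cut score; it is not the MCCQEI calibration (which used BILOG-MG and PARSCALE), and the item parameters, abilities, and cut score are invented.

import numpy as np

def p_correct_2pl(theta, a, b):
    # Probability of a correct response under the 2-PL model
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

a = np.array([0.8, 1.2, 1.5])          # item discriminations (assumed)
b = np.array([-0.5, 0.0, 1.0])         # item difficulties (assumed)
cut = 0.0                              # assumed pass/fail cut on the ability scale

for theta in (-1.0, 0.2, 1.3):         # example candidate abilities (assumed)
    expected_score = p_correct_2pl(theta, a, b).sum()
    print(f"theta={theta:+.1f}  expected score={expected_score:.2f}  "
          f"{'PASS' if theta >= cut else 'FAIL'}")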
Tang, T S; Sohal, P S; Garg, A K
2013-06-01
The purpose of this single-cohort study was to implement and evaluate a programme that trains peers to deliver a diabetes self-management support programme for South-Asian adults with Type 2 diabetes and to assess the perceived efficacy of and satisfaction with this programme. We recruited eight South-Asian adults who completed a 20-h peer-leader training programme conducted over five sessions (4 h per session). The programme used multiple instructional methods (quizzes, group brainstorming, skill building, group sharing, role-play and facilitation simulation) and provided communication, facilitation, and behaviour change skills training. To graduate, participants were required to achieve the pre-established competency criteria in four training domains: active listening, empowerment-based facilitation, five-step behavioural goal-setting, and self-efficacy. Participants were given three attempts to pass each competency domain. On the first attempt six (75%), eight (100%), five (63%) and five (63%) participants passed active listening, empowerment-based facilitation, five-step behavioural goal-setting, and self-efficacy, respectively. Those participants who did not pass a competency domain on the first attempt were successful in passing on the second attempt. As a result, all eight participants graduated from the training programme and became peer leaders. Satisfaction ratings for programme length, balance between content and skills development, and preparation for leading support activities were uniformly high. Ratings for the instructional methods ranged between effective and very effective. Findings suggest it is feasible to train and graduate peer leaders with the necessary skills to facilitate a diabetes self-management support intervention. © 2013 The Authors. Diabetic Medicine © 2013 Diabetes UK.
DOT National Transportation Integrated Search
1976-08-01
Fare prepayment encompasses all methods of paying for transit rides other than by cash, namely, tickets, tokens, punch cards, passes, and permits. The purpose of this study is the examination of the overall ridership and revenue impacts of ongoing an...
Fully automatic registration and segmentation of first-pass myocardial perfusion MR image sequences.
Gupta, Vikas; Hendriks, Emile A; Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F
2010-11-01
Derivation of diagnostically relevant parameters from first-pass myocardial perfusion magnetic resonance images involves the tedious and time-consuming manual segmentation of the myocardium in a large number of images. To reduce the manual interaction and expedite the perfusion analysis, we propose an automatic registration and segmentation method for the derivation of perfusion linked parameters. A complete automation was accomplished by first registering misaligned images using a method based on independent component analysis, and then using the registered data to automatically segment the myocardium with active appearance models. We used 18 perfusion studies (100 images per study) for validation in which the automatically obtained (AO) contours were compared with expert drawn contours on the basis of point-to-curve error, Dice index, and relative perfusion upslope in the myocardium. Visual inspection revealed successful segmentation in 15 out of 18 studies. Comparison of the AO contours with expert drawn contours yielded 2.23 ± 0.53 mm and 0.91 ± 0.02 as point-to-curve error and Dice index, respectively. The average difference between manually and automatically obtained relative upslope parameters was found to be statistically insignificant (P = .37). Moreover, the analysis time per slice was reduced from 20 minutes (manual) to 1.5 minutes (automatic). We proposed an automatic method that significantly reduced the time required for analysis of first-pass cardiac magnetic resonance perfusion images. The robustness and accuracy of the proposed method were demonstrated by the high spatial correspondence and statistically insignificant difference in perfusion parameters, when AO contours were compared with expert drawn contours. Copyright © 2010 AUR. Published by Elsevier Inc. All rights reserved.
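A minimal sketch of the Dice index used above to compare automatically obtained and expert contours, computed here on toy binary masks rather than study data:

import numpy as np

def dice(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

auto   = np.zeros((8, 8), dtype=bool); auto[2:6, 2:6] = True    # "automatic" myocardium mask (toy)
expert = np.zeros((8, 8), dtype=bool); expert[3:7, 2:6] = True  # "expert" myocardium mask (toy)
print(f"Dice index: {dice(auto, expert):.2f}")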
Li, Xiang; Arzhantsev, Sergey; Kauffman, John F; Spencer, John A
2011-04-05
Four portable NIR instruments from the same manufacturer that were nominally identical were programmed with a PLS model for the detection of diethylene glycol (DEG) contamination in propylene glycol (PG)-water mixtures. The model was developed on one spectrometer and used on other units after a calibration transfer procedure that used piecewise direct standardization. Although quantitative results were produced, in practice the instrument interface was programmed to report in Pass/Fail mode. The Pass/Fail determinations were made within 10s and were based on a threshold that passed a blank sample with 95% confidence. The detection limit was then established as the concentration at which a sample would fail with 95% confidence. For a 1% DEG threshold one false negative (Type II) and eight false positive (Type I) errors were found in over 500 samples measured. A representative test set produced standard errors of less than 2%. Since the range of diethylene glycol for economically motivated adulteration (EMA) is expected to be above 1%, the sensitivity of field calibrated portable NIR instruments is sufficient to rapidly screen out potentially problematic materials. Following method development, the instruments were shipped to different sites around the country for a collaborative study with a fixed protocol to be carried out by different analysts. NIR spectra of replicate sets of calibration transfer, system suitability and test samples were all processed with the same chemometric model on multiple instruments to determine the overall analytical precision of the method. The combined results collected for all participants were statistically analyzed to determine a limit of detection (2.0% DEG) and limit of quantitation (6.5%) that can be expected for a method distributed to multiple field laboratories. Published by Elsevier B.V.
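The Pass/Fail logic described above can be sketched as follows; the PLS prediction step is omitted, and the blank-replicate spread, one-sided 95% factor, and example predictions are illustrative assumptions rather than the published method parameters.

import numpy as np

rng = np.random.default_rng(1)
blank_preds = rng.normal(0.0, 0.3, size=50)       # predicted %DEG for blank replicates (toy)
threshold = blank_preds.mean() + 1.645 * blank_preds.std(ddof=1)   # blank passes with ~95% confidence

def pass_fail(predicted_deg_percent):
    return "FAIL" if predicted_deg_percent > threshold else "PASS"

for pred in (0.2, 0.9, 2.4):                      # example model predictions, %DEG (toy)
    print(f"predicted {pred:.1f}% DEG -> {pass_fail(pred)}")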
Baek, Tae Seong; Chung, Eun Ji; Son, Jaeman; Yoon, Myonggeun
2014-12-04
The aim of this study is to evaluate the ability of transit dosimetry, using a commercial treatment planning system (TPS) and an electronic portal imaging device (EPID) with a simple calibration method, to verify beam delivery by detecting large errors in the treatment room. Twenty-four fields from intensity-modulated radiotherapy (IMRT) plans were selected from four lung cancer patients and used in the irradiation of an anthropomorphic phantom. The proposed method was evaluated by comparing the calculated dose map from the TPS and the EPID measurement on the same plane using a gamma index method with a 3% dose difference and 3 mm distance-to-agreement tolerance limit. In a simulation using a homogeneous plastic water phantom, performed to verify the effectiveness of the proposed method, the passing rate of the transit dose based on the gamma index averaged 94.2% when there was no error during beam delivery. The passing rate of the transit dose for the 24 IMRT fields was lower with the anthropomorphic phantom, averaging 86.8% ± 3.8%, a reduction partially due to the inaccuracy of TPS calculations for inhomogeneity. Compared with the TPS, the absolute value of the transit dose at the beam center differed by -0.38% ± 2.1%. The simulation study indicated that the passing rate of the gamma index was significantly reduced, to less than 40%, when a wrong field was erroneously delivered to the patient in the treatment room. This feasibility study suggests that transit dosimetry based on calculation with a commercial TPS and EPID measurement with a simple calibration can provide information about large errors in treatment beam delivery.
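A simplified 1-D sketch of the gamma-index evaluation (3% local dose / 3 mm) used above to score agreement between planned and measured dose; real transit dosimetry compares 2-D dose planes, and the profiles here are toy assumptions.

import numpy as np

def gamma_pass_rate(x, dose_ref, dose_eval, dd=0.03, dta=3.0):
    gammas = []
    for xi, di in zip(x, dose_ref):
        dist2 = ((x - xi) / dta) ** 2                 # spatial term (mm)
        dose2 = ((dose_eval - di) / (dd * di)) ** 2   # local dose-difference term
        gammas.append(np.sqrt(np.min(dist2 + dose2)))
    return 100.0 * np.mean(np.array(gammas) <= 1.0)

x = np.linspace(-50, 50, 201)                         # positions in mm (toy)
planned  = 100 * np.exp(-(x / 30.0) ** 2)             # "calculated" dose profile (toy)
measured = planned * (1 + 0.02 * np.sin(x / 10.0))    # "EPID-measured" profile (toy)
print(f"gamma passing rate: {gamma_pass_rate(x, planned, measured):.1f}%")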
Tao, Jinyuan; Gunter, Glenda; Tsai, Ming-Hsiu; Lim, Dan
2016-01-01
Recently, robust learning management systems and the availability of affordable laptops have made secure laptop-based testing a reality on many campuses. The undergraduate nursing program at the authors' university began to implement a secure laptop-based testing program in 2009, which allowed students to use their newly purchased laptops to take quizzes and tests securely in classrooms. After nearly 5 years of implementation, a formative evaluation, using a mixed method with both descriptive and correlational data elements, was conducted to seek constructive feedback from students to improve the program. Evaluation data show that, overall, students (n = 166) believed the secure laptop-based testing program gives them hands-on experience of taking examinations on the computer and prepares them for the computerized NCLEX-RN. Students, however, had substantial concerns about the laptop glitches and campus wireless network glitches they experienced during testing. At the same time, NCLEX-RN first-time passing rate data were analyzed using the χ2 test and revealed no significant association between the two testing methods (paper-and-pencil testing and secure laptop-based testing) and students' first-time NCLEX-RN passing rate. Based on the odds ratio, however, the odds of students passing the NCLEX-RN on the first attempt were 1.37 times higher if they were taught with the secure laptop-based testing method than with the traditional paper-and-pencil testing method in nursing school. It was recommended to the institution that better-quality laptops be provided to future students, that measures be taken to further stabilize the campus wireless Internet network, and that the Laptop Initiative Program be reevaluated.
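A brief sketch of the χ2 test and odds-ratio computation reported above for first-time pass rates by testing method; the 2x2 counts are invented for illustration and are not the study data.

import numpy as np
from scipy.stats import chi2_contingency

#                   pass  fail
table = np.array([[78,    8],     # secure laptop-based testing (toy counts)
                  [70,   10]])    # paper-and-pencil testing (toy counts)

chi2, p, dof, _ = chi2_contingency(table)
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, odds ratio = {odds_ratio:.2f}")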
Sutton, S C; Rinaldi, M T; Vukovinsky, K E
2001-01-01
This study was undertaken to determine whether the gravimetric method provided an accurate measure of water flux correction and to compare the gravimetric method with methods that employ nonabsorbed markers (eg, phenol red and 14C-PEG-3350). Phenol red, 14C-PEG-3350, and 4-[2-[[2-(6-amino-3-pyridinyl)-2-hydroxyethyl]amino]ethoxy]-, methyl ester, (R)-benzene acetic acid (Compound I) were co-perfused in situ through the jejunum of 9 anesthetized rats (single-pass intestinal perfusion [SPIP]). Water absorption was determined from the phenol red, 14C-PEG-3350, and gravimetric methods. The absorption rate constant (ka) for Compound I was calculated. Both phenol red and 14C-PEG-3350 were appreciably absorbed, underestimating the extent of water flux in the SPIP model. The average ± SD water flux (microg/h/cm) for the 3 methods were 68.9 ± 28.2 (gravimetric), 26.8 ± 49.2 (phenol red), and 34.9 ± 21.9 (14C-PEG-3350). The average ± SD ka for Compound I (uncorrected for water flux) was 0.024 ± 0.005 min(-1). For the corrected, gravimetric method, the average ± SD was 0.031 ± 0.001 min(-1). The gravimetric method for correcting water flux was as accurate as the 2 "nonabsorbed" marker methods.
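One common way to apply a gravimetric water-flux correction and compute a first-order absorption rate constant in single-pass perfusion is sketched below; the abstract does not give the authors' exact equations, so this form and every number in it are assumptions.

import math

Q = 0.2                           # perfusion rate, mL/min (assumed)
V = 1.2                           # perfused segment volume, mL (assumed)
c_in, c_out = 10.0, 7.5           # inlet/outlet drug concentrations (assumed units)
m_in, m_out = 2.00, 1.95          # perfusate mass in/out per interval, g (gravimetric, assumed)

c_out_corr = c_out * (m_out / m_in)           # correct outlet concentration for net water flux
ka = (Q / V) * math.log(c_in / c_out_corr)    # apparent first-order rate constant, 1/min
print(f"corrected C_out = {c_out_corr:.2f}, ka = {ka:.3f} min^-1")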
Diamond film growth from fullerene precursors
Gruen, Dieter M.; Liu, Shengzhong; Krauss, Alan R.; Pan, Xianzheng
1997-01-01
A method and system for manufacturing diamond film. The method involves forming a fullerene vapor, providing a noble gas stream and combining the gas with the fullerene vapor, passing the combined fullerene vapor and noble gas carrier stream into a chamber, forming a plasma in the chamber causing fragmentation of the fullerene and deposition of a diamond film on a substrate.
Initial Correction versus Negative Marking in Multiple Choice Examinations
ERIC Educational Resources Information Center
Van Hecke, Tanja
2015-01-01
Optimal assessment tools should measure the knowledge of students in a limited time in a correct and unbiased way. Multiple choice scoring is one method of automating the scoring. This article compares scoring methods from a probabilistic point of view by modelling the probability of passing: number-right scoring, the initial correction (IC) and…
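A hedged sketch of the kind of probabilistic comparison the abstract points to: the chance that a student who truly knows k of n four-option items reaches the pass mark under number-right scoring, with blind guessing on the remainder. The pass mark, item count, and guessing model are illustrative assumptions.

from math import comb

def p_pass_number_right(n, k, pass_mark, options=4):
    guess_p = 1.0 / options
    remaining = n - k
    need = max(0, pass_mark - k)               # correct guesses still required to pass
    return sum(comb(remaining, g) * guess_p**g * (1 - guess_p)**(remaining - g)
               for g in range(need, remaining + 1))

print(f"P(pass) = {p_pass_number_right(n=40, k=20, pass_mark=24):.3f}")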
Method for the removal of elemental mercury from a gas stream
Mendelsohn, Marshall H.; Huang, Hann-Sheng
1999-01-01
A method is provided to remove elemental mercury from a gas stream by reacting the gas stream with an oxidizing solution to convert the elemental mercury to soluble mercury compounds. Other constituents are also oxidized. The gas stream is then passed through a wet scrubber to remove the mercuric compounds and oxidized constituents.
A Comparison of Web-Based Standard Setting and Monitored Standard Setting.
ERIC Educational Resources Information Center
Harvey, Anne L.; Way, Walter D.
Standard setting, when carefully done, can be an expensive and time-consuming process. The modified Angoff method and the benchmark method, as utilized in this study, employ representative panels of judges to provide recommended passing scores to standard setting decision-makers. It has been considered preferable to have the judges meet in a…
Method for the removal of elemental mercury from a gas stream
Mendelsohn, M.H.; Huang, H.S.
1999-05-04
A method is provided to remove elemental mercury from a gas stream by reacting the gas stream with an oxidizing solution to convert the elemental mercury to soluble mercury compounds. Other constituents are also oxidized. The gas stream is then passed through a wet scrubber to remove the mercuric compounds and oxidized constituents. 7 figs.
ERIC Educational Resources Information Center
Dochy, Filip; Kyndt, Eva; Baeten, Marlies; Pottier, Sofie; Veestraeten, Marlies; Leuven, K. U.
2009-01-01
The aim of this study was to examine the effect of different standard setting methods on the size and composition of the borderline group, on the discrimination between different types of students and on the types of students passing with one method but failing with another. A total of 107 university students were classified into 4 different types…
Detection of contraband using microwave radiation
Toth, Richard P.; Loubriel, Guillermo M.; Bacon, Larry D.; Watson, Robert D.
2002-01-01
The present invention relates to a method and system for using microwave radiation to detect contraband hidden inside of a non-metallic container, such as a pneumatic vehicle tire. The method relies on the attenuation, retardation, time delay, or phase shift of microwave radiation as it passes through the container plus the contraband. The method is non-invasive, non-destructive, low power, and does not require physical contact with the container.
METHOD OF COATING SURFACES WITH BORON
Martin, G.R.
1949-10-11
A method of forming a thin coating of boron on metallic, glass, or other surfaces is described. The method comprises heating the article to be coated to a temperature of about 550°C in an evacuated chamber and passing trimethyl boron, triethyl boron, or tripropyl boron in the vapor phase and under reduced pressure into contact with the heated surface, causing boron to be deposited in a thin film.
Joint Doctrine for Operations in Nuclear, Biological, and Chemical (NBC) Environments
2000-07-11
groups may have or be able to acquire military, civilian, and dual-use technologies and methods that provide adequate reliability for selective...procedures, and methods . •• Patient decontamination reduces the threat of contamination-related injury to health service support (HSS) personnel and...passing warnings to workers and units throughout their sites. •• Because of the variety of delivery methods for NBC weapons and the limitations of
Passing and Catching in Rugby.
ERIC Educational Resources Information Center
Namudu, Mike M.
This booklet contains the fundamentals for rugby at the primary school level. It deals primarily with passing and catching the ball. It contains instructions on (1) holding the ball for passing, (2) passing the ball to the left--standing, (3) passing the ball to the left--running, (4) making a switch pass, (5) the scrum half's normal pass, (6) the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leung, R; Lee, V; Cheung, S
2016-06-15
Purpose: The increasing application of VMAT demands a more efficient workflow and QA solution. This study aims to investigate the feasibility of performing VMAT QA measurements on one linac for plans treated on other beam-matched Elekta Agility linacs. Methods: A single model was used to create 24 clinically approved VMAT plans (12 head-and-neck and 12 prostate, using 6 MV and 10 MV respectively) on Pinnacle v9.10 (Philips, Eindhoven, Netherlands). All head-and-neck plans were delivered on three beam-matched machines, while all prostate cases were delivered on two beam-matched 10 MV Agility machines. All plans were delivered onto a PTW Octavius 4D phantom with a 1500 detector array (PTW, Freiburg, Germany). Reconstructed volume doses were then compared with the Pinnacle reference plans in Verisoft 6.1 under 3%/3mm gamma criteria at local dose. Plans were considered clinically acceptable if >90% of the voxels passed the gamma criteria. Results: All measurements passed (3D gamma passing rate >90%). The mean difference in 3D gamma for the 12 head-and-neck cases is 1.2% with a standard deviation of 0.6%, while for the prostate cases the mean difference in 3D gamma is 0.9% with a standard deviation of 0.7%. The maximum difference in 3D gamma between beam-matched machines over all measurements is less than 2.5%. The differences in passing rates between different machines were statistically insignificant (p>0.05). Conclusion: The result suggests that there exists a 3D gamma threshold, in our case 92.5%, above which the VMAT QA performed on any one of the beam-matched machines will also pass on another. Therefore, VMAT QA efficiency may be increased and phantom set-up time saved by implementing such a method. A constant performance across all beam-matched machines must be maintained to make this QA approach feasible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, X; Wang, Y
Purpose: Due to limited commissioning time, we previously released our True beam non-FFF mode only for prostate treatment. Clinical demand now pushes us to release the non-FFF mode for SRT/SBRT treatment. When re-planning on the True beam SRT/SBRT cases previously treated on the iX machine, we found that the patient-specific QA pass rate was worse than the iX's, though the 2 Gy/fx prostate results had been as good. We hypothesize that the True beam DLG and MLC transmission values in the TPS, as measured during commissioning, could not yet provide accurate SRS/SBRT dosimetry. Hence this work investigates how the TPS DLG and transmission values affect Rapid Arc plans' dosimetric accuracy. Methods: We increased the DLG and transmission value of the True beam in the TPS such that their percentage differences against the measured values matched those of the iX. We re-calculated 2 SRT, 1 SBRT and 2 prostate plans, performed patient-specific QA on these new plans and compared the results to the previous ones. Results: With the DLG and transmission value set respectively 40% and 8% higher than the measured values, the patient-specific QA pass rate (at 3%/3mm) improved from 95.0 to 97.6% vs the previous iX's 97.8% in the SRT case. In the SBRT case, the pass rate improved from 75.2 to 93.9% vs the previous iX's 92.5%. In the prostate case, the pass rate improved from 99.3 to 100%. The maximum dose difference between plans before and after adjusting the DLG and transmission was approximately 1% of the prescription dose among all plans. Conclusion: The impact of adjusting the DLG and transmission value on dosimetry might be the same among all Rapid Arc plans, whether hypofractionated or not. The large variation observed in the patient-specific QA pass rate might be due to the data analysis method in the QA software being more sensitive to hypofractionated plans.
A 2D ion chamber array audit of wedged and asymmetric fields in an inhomogeneous lung phantom
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lye, Jessica; Dunn, Leon, E-mail: leon.dunn@arpansa.gov.au; Alves, Andrew
Purpose: The Australian Clinical Dosimetry Service (ACDS) has implemented a new method of a nonreference condition Level II type dosimetric audit of radiotherapy services to increase measurement accuracy and patient safety within Australia. The aim of this work is to describe the methodology, tolerances, and outcomes from the new audit. Methods: The ACDS Level II audit measures the dose delivered in 2D planes using an ionization chamber based array positioned at multiple depths. Measurements are made in rectilinear homogeneous and inhomogeneous phantoms composed of slabs of solid water and lung. Computer generated computed tomography data sets of the rectilinear phantoms are supplied to the facility prior to audit for planning of a range of cases including reference fields, asymmetric fields, and wedged fields. The audit assesses 3D planning with 6 MV photons with a static (zero degree) gantry. Scoring is performed using local dose differences between the planned and measured dose within 80% of the field width. The overall audit result is determined by the maximum dose difference over all scoring points, cases, and planes. Pass (Optimal Level) is defined as maximum dose difference ≤3.3%, Pass (Action Level) is ≤5.0%, and Fail (Out of Tolerance) is >5.0%. Results: At close of 2013, the ACDS had performed 24 Level II audits. 63% of the audits passed, 33% failed, and the remaining audit was not assessable. Of the 15 audits that passed, 3 were at Pass (Action Level). The high fail rate is largely due to a systemic issue with modeling asymmetric 60° wedges which caused a delivered overdose of 5%–8%. Conclusions: The ACDS has implemented a nonreference condition Level II type audit, based on ion chamber 2D array measurements in an inhomogeneous slab phantom. The powerful diagnostic ability of this audit has allowed the ACDS to rigorously test the treatment planning systems implemented in Australian radiotherapy facilities. Recommendations from audits have led to facilities modifying clinical practice and changing planning protocols.
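A minimal sketch of the scoring rule stated above, classifying an audit from its maximum local dose difference; the example values are invented.

def audit_result(max_dose_diff_percent):
    if max_dose_diff_percent <= 3.3:
        return "Pass (Optimal Level)"
    if max_dose_diff_percent <= 5.0:
        return "Pass (Action Level)"
    return "Fail (Out of Tolerance)"

for diff in (2.1, 4.4, 6.8):                   # example maximum dose differences, % (toy)
    print(f"{diff}% -> {audit_result(diff)}")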
Drake, Sean M.; Qureshi, Waqas; Morse, William; Baker-Genaw, Kimberly
2015-01-01
Aim The American Board of Internal Medicine (ABIM) exam's pass rate is considered a quality measure of a residency program, yet few interventions have shown benefit in reducing the failure rate. We developed a web-based Directed Reading (DR) program with an aim to increase medical knowledge and reduce ABIM exam failure rate. Methods Internal medicine residents at our academic medical center with In-Training Examination (ITE) scores ≤35th percentile from 2007 to 2013 were enrolled in DR. The program matches residents to reading assignments based on their own ITE-failed educational objectives and provides direct electronic feedback from their teaching physicians. ABIM exam pass rates were analyzed across various groups between 2002 and 2013 to examine the effect of the DR program on residents with ITE scores ≤35 percentile pre- (2002–2006) and post-intervention (2007–2013). A time commitment survey was also given to physicians and DR residents at the end of the study. Results Residents who never scored ≤35 percentile on ITE were the most likely to pass the ABIM exam on first attempt regardless of time period. For those who ever scored ≤35 percentile on ITE, 91.9% of residents who participated in DR passed the ABIM exam on first attempt vs 85.2% of their counterparts pre-intervention (p<0.001). This showed an improvement in ABIM exam pass rate for this subset of residents after introduction of the DR program. The time survey showed that faculty used an average of 40±18 min per week to participate in DR and residents required an average of 25 min to search/read about the objective and 20 min to write a response. Conclusions Although residents who ever scored ≤35 percentile on ITE were more likely to fail ABIM exam on first attempt, those who participated in the DR program were less likely to fail than the historical control counterparts. The web-based teaching method required little time commitment by faculty. PMID:26521767
Research notes : retrofitting culverts for fish.
DOT National Transportation Integrated Search
2005-01-01
Culverts are a well-established method to pass a roadway over a waterway. Standard design criteria exist for meeting the hydraulic requirements for moving the water through the culverts. However, the hydraulic conditions resulting from many culvert d...
PIV-DCNN: cascaded deep convolutional neural networks for particle image velocimetry
NASA Astrophysics Data System (ADS)
Lee, Yong; Yang, Hua; Yin, Zhouping
2017-12-01
Velocity estimation (extracting the displacement vector information) from the particle image pairs is of critical importance for particle image velocimetry. This problem is mostly transformed into finding the sub-pixel peak in a correlation map. To address the original displacement extraction problem, we propose a different evaluation scheme (PIV-DCNN) with four-level regression deep convolutional neural networks. At each level, the networks are trained to predict a vector from two input image patches. The low-level network is skilled at large displacement estimation and the high-level networks are devoted to improving the accuracy. Outlier replacement and symmetric window offset operation glue the well-functioning networks in a cascaded manner. Through comparison with the standard PIV methods (one-pass cross-correlation method, three-pass window deformation), the practicability of the proposed PIV-DCNN is verified by the application to a diversity of synthetic and experimental PIV images.
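For contrast with the learned estimator, a minimal sketch of the standard one-pass cross-correlation step it is compared against: correlate two interrogation windows via FFT and refine the integer peak with a three-point Gaussian fit. The synthetic windows and the integer test shift are assumptions.

import numpy as np

def displacement(win_a, win_b):
    spec = np.conj(np.fft.rfft2(win_a)) * np.fft.rfft2(win_b)
    corr = np.fft.fftshift(np.fft.irfft2(spec, s=win_a.shape))
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)

    def subpixel(c_m, c_0, c_p):               # three-point Gaussian peak fit
        lm, l0, lp = np.log(c_m), np.log(c_0), np.log(c_p)
        return (lm - lp) / (2 * lm - 4 * l0 + 2 * lp)

    dy = iy - win_a.shape[0] // 2 + subpixel(corr[iy - 1, ix], corr[iy, ix], corr[iy + 1, ix])
    dx = ix - win_a.shape[1] // 2 + subpixel(corr[iy, ix - 1], corr[iy, ix], corr[iy, ix + 1])
    return dx, dy

rng = np.random.default_rng(2)
frame_a = rng.random((32, 32))                          # toy particle image
frame_b = np.roll(frame_a, shift=(3, 5), axis=(0, 1))   # true displacement (dy, dx) = (3, 5)
print(displacement(frame_a, frame_b))                   # approximately (5.0, 3.0)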
Smart Optical Material Characterization System and Method
NASA Technical Reports Server (NTRS)
Choi, Sang Hyouk (Inventor); Park, Yeonjoon (Inventor)
2015-01-01
Disclosed is a system and method for characterizing optical materials, using steps and equipment for generating a coherent laser light, filtering the light to remove high order spatial components, collecting the filtered light and forming a parallel light beam, splitting the parallel beam into a first direction and a second direction wherein the parallel beam travelling in the second direction travels toward the material sample so that the parallel beam passes through the sample, applying various physical quantities to the sample, reflecting the beam travelling in the first direction to produce a first reflected beam, reflecting the beam that passes through the sample to produce a second reflected beam that travels back through the sample, combining the second reflected beam after it travels back through the sample with the first reflected beam, sensing the light beam produced by combining the first and second reflected beams, and processing the sensed beam to determine sample characteristics and properties.
Two part condenser for varying the rate of condensing and related method
Dobos, James G.
2007-12-11
A heat transfer apparatus, such as a condenser, is provided. The apparatus includes a first component with a first heat transfer element that has first component inlet and outlet ports through which a first fluid may pass. A second component is also included and likewise has a second heat transfer element with second component inlet and outlet ports to pass a second fluid. The first component has a body that can receive a third fluid for heat transfer with the first heat transfer element. The first and second components are releasably attachable with one another so that when attached both the first and second heat transfer elements effect heat transfer with the third fluid. Attachment and removal of the first and second components allows for the heat transfer rate of the apparatus to be varied. An associated method is also provided.
Diode-laser-pump module with integrated signal ports for pumping amplifying fibers and method
Savage-Leuchs, Matthias P [Woodinville, WA]
2009-05-26
Apparatus and method for collimating pump light of a first wavelength from laser diode(s) into a collimated beam within an enclosure having first and second optical ports, directing pump light from the collimated beam to the first port; and directing signal light inside the enclosure between the first and second port. The signal and pump wavelengths are different. The enclosure provides a pump block having a first port that emits pump light to a gain fiber outside the enclosure and that also passes signal light either into or out of the enclosure, and another port that passes signal light either out of or into the enclosure. Some embodiments use a dichroic mirror to direct pump light to the first port and direct signal light between the first and second ports. Some embodiments include a wavelength-conversion device to change the wavelength of at least some of the signal light.
Method and apparatus for measuring enrichment of UF6
Hill, Thomas Roy [Santa Fe, NM; Ianakiev, Kiril Dimitrov [Los Alamos, NM
2011-06-07
A system and method are disclosed for determining the enrichment of 235U in Uranium Hexafluoride (UF6) utilizing synthesized X-rays which are directed at a container test zone containing a sample of UF6. A detector placed behind the container test zone then detects and counts the X-rays which pass through the container and the UF6. In order to determine the portion of the attenuation due to the UF6 gas alone, this count rate may then be compared to a calibration count rate of X-rays passing through a calibration test zone which contains a vacuum, the calibration test zone having experienced substantially similar environmental conditions as the actual test zone. Alternatively, X-rays of two differing energy levels may be alternately directed at the container, where either the container or the UF6 has a high sensitivity to the difference in the energy levels, and the other has a low sensitivity.
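The count-rate comparison described above amounts to a Beer-Lambert transmission calculation; a hedged sketch follows, with all counts and the path length invented for illustration.

import math

counts_calibration = 120_000     # X-rays counted through the evacuated calibration test zone (toy)
counts_sample      =  95_000     # X-rays counted through the UF6-filled container (toy)
path_length_cm     = 10.0        # assumed path length through the gas

transmission = counts_sample / counts_calibration
mu_gas = -math.log(transmission) / path_length_cm    # attenuation attributable to the gas alone
print(f"gas transmission = {transmission:.3f}, mu = {mu_gas:.4f} cm^-1")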
Standard setting for OSCEs: trial of borderline approach.
Kilminster, Sue; Roberts, Trudie
2004-01-01
OSCE examinations were held in May and June 2002 for all third and fourth year and some fifth year medical students at the University of Leeds. There has been an arbitrary pass mark of 65% for these examinations. However, we recognise that it is important to adopt a systematic approach towards standard setting in all examinations, so we held a trial of the borderline approach to standard setting for the third and fifth year examinations. This paper reports our findings. The results for the year 3 OSCE demonstrated that the borderline approach to standard setting is feasible and offers a method to ensure that the pass standard is both justifiable and credible. It is efficient, requiring much less time than other methods, and has the advantage of using the judgements of expert clinicians about actual practice. In addition, it offers a way of empowering clinicians because it uses their expertise.
A Bayesian model averaging method for improving SMT phrase table
NASA Astrophysics Data System (ADS)
Duan, Nan
2013-03-01
Previous methods for improving translation quality by employing multiple SMT models are usually carried out as a second-pass decision procedure on hypotheses from multiple systems, using extra features instead of exploiting the features in existing models in more depth. In this paper, we propose translation model generalization (TMG), an approach that updates probability feature values for the translation model being used based on the model itself and a set of auxiliary models, aiming to alleviate the over-estimation problem and enhance translation quality in the first-pass decoding phase. We validate our approach for translation models based on auxiliary models built in two different ways. We also introduce novel probability variance features into the log-linear models for further improvements. We conclude that our approach can be developed independently and integrated into the current SMT pipeline directly. We demonstrate BLEU improvements on the NIST Chinese-to-English MT tasks for single-system decoding.
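A toy sketch of the general idea of smoothing a translation model's probability feature with values from auxiliary models before first-pass decoding; the phrase pairs, probabilities, and weights are invented, and the exact TMG weighting scheme is not reproduced.

main_model = {("la maison", "the house"): 0.60, ("maison", "house"): 0.95}
aux_models = [
    {("la maison", "the house"): 0.55, ("maison", "house"): 0.40},
    {("la maison", "the house"): 0.50, ("maison", "house"): 0.35},
]
weights = [0.6, 0.2, 0.2]                      # main model first; assumed weights

generalized = {}
for pair, p in main_model.items():
    probs = [p] + [aux.get(pair, p) for aux in aux_models]   # back off to own value if missing
    generalized[pair] = sum(w * q for w, q in zip(weights, probs))

for pair, p in generalized.items():
    print(pair, round(p, 3))                   # over-estimated pairs are moderated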
Improving Pharmacy Student Communication Outcomes Using Standardized Patients.
Gillette, Chris; Rudolph, Michael; Rockich-Winston, Nicole; Stanton, Robert; Anderson, H Glenn
2017-08-01
Objective. To examine whether standardized patient encounters led to an improvement in a student pharmacist-patient communication assessment compared to traditional active-learning activities within a classroom setting. Methods. A quasi-experimental study was conducted with second-year pharmacy students in a drug information and communication skills course. Student patient communication skills were assessed using a high-stakes communication assessment. Results. Two hundred and twenty students' data were included. Students were significantly more likely to have higher scores on the communication assessment when they had higher undergraduate GPAs, were female, and were taught using standardized patients. Similarly, students were significantly more likely to pass the assessment on the first attempt when they were female and when they were taught using standardized patients. Conclusion. Incorporating standardized patients within a communication course resulted in higher scores and higher first-time pass rates on a communication assessment than did other methods of active learning.
Electrochemical method applicable to treatment of wastewater from nitrotriazolone production.
Wallace, Lynne; Cronin, Michael P; Day, Anthony I; Buck, Damian P
2009-03-15
Laboratory studies show that electrochemical oxidation of acidic nitrotriazolone (NTO) solutions results in complete mineralization, with ammonium nitrate as the only solution product. Other products (carbon dioxide, carbon monoxide, and nitrous oxide) are eliminated as gases from the working electrode. No additional chemical loading is required for the process, and electricity is the only input. The process may therefore represent a cost-effective and environmentally friendly method of remediation for wastewater from NTO manufacture. Electrolyses were carried out at different applied voltages and at NTO concentrations of 0.01 and 0.05 mol/L, and the results indicate that a higher oxidation rate results in a greater charge passed per mole of NTO oxidized and increased production of nitrous oxide. Mechanisms are proposed on the basis of competing oxidative pathways that account for all products formed and the total charge passed during the reaction.
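The charge-per-mole bookkeeping mentioned above follows Faraday's law; the sketch below computes an apparent number of electrons transferred per NTO molecule from the total charge passed, with the charge, volume, and concentration invented for illustration.

F = 96485.0                        # Faraday constant, C/mol

volume_L = 0.10                    # electrolysed solution volume, L (assumed)
conc_mol_per_L = 0.01              # NTO concentration, mol/L (from the abstract's lower level)
charge_C = 2900.0                  # total charge passed, C (assumed)

moles_nto = volume_L * conc_mol_per_L
electrons_per_molecule = charge_C / (F * moles_nto)
print(f"apparent n = {electrons_per_molecule:.1f} electrons per NTO molecule")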
Apparatus for the liquefaction of a gas and methods relating to same
Turner, Terry D [Idaho Falls, ID; Wilding, Bruce M [Idaho Falls, ID; McKellar, Michael G [Idaho Falls, ID
2009-12-29
Apparatuses and methods are provided for producing liquefied gas, such as liquefied natural gas. In one embodiment, a liquefaction plant may be coupled to a source of unpurified natural gas, such as a natural gas pipeline at a pressure letdown station. A portion of the gas is drawn off and split into a process stream and a cooling stream. The cooling stream may be sequentially passed through a compressor and an expander. The process stream may also pass through a compressor. The compressed process stream is cooled, such as by the expanded cooling stream. The cooled, compressed process stream is expanded to liquefy the natural gas. A gas-liquid separator separates the vapor from the liquid natural gas. A portion of the liquid gas may be used for additional cooling. Gas produced within the system may be recompressed for reintroduction into a receiving line.
Khripunov, Sergey; Kobtsev, Sergey; Radnatarov, Daba
2016-01-20
To the best of our knowledge, this work presents for the first time a comparative efficiency analysis of various techniques for extra-cavity second harmonic generation (SHG) of continuous-wave single-frequency radiation in nonperiodically poled nonlinear crystals within a broad range of power levels. The efficiency of nonlinear radiation transformation at powers from 1 W to 10 kW was studied in three different configurations: with an external power-enhancement cavity, and without the cavity in the cases of a single or double radiation pass through a nonlinear crystal. It is demonstrated that at power levels exceeding 1 kW, the efficiencies of methods with and without external power-enhancement cavities become comparable, whereas at even higher powers SHG by a single or double pass through a nonlinear crystal becomes preferable because of its relatively high efficiency of nonlinear transformation and fairly simple implementation.