Lin, Yan; Ma, Changchun; Liu, Chengkang; Wang, Zhening; Yang, Jurong; Liu, Xinmu; Shen, Zhiwei; Wu, Renhua
2016-05-17
Colorectal cancer (CRC) is a growing cause of mortality in developing countries, warranting investigation into its earlier detection for optimal disease management. A metabolomics based approach provides potential for noninvasive identification of biomarkers of colorectal carcinogenesis, as well as dissection of molecular pathways of pathophysiological conditions. Here, proton nuclear magnetic resonance spectroscopy (1HNMR) -based metabolomic approach was used to profile fecal metabolites of 68 CRC patients (stage I/II=20; stage III=25 and stage IV=23) and 32 healthy controls (HC). Pattern recognition through principal component analysis (PCA) and orthogonal partial least squares-discriminant analysis (OPLS-DA) was applied on 1H-NMR processed data for dimension reduction. OPLS-DA revealed that each stage of CRC could be clearly distinguished from HC based on their metabolomic profiles. Successive analyses identified distinct disturbances to fecal metabolites of CRC patients at various stages, compared with those in cancer free controls, including reduced levels of acetate, butyrate, propionate, glucose, glutamine, and elevated quantities of succinate, proline, alanine, dimethylglycine, valine, glutamate, leucine, isoleucine and lactate. These altered fecal metabolites potentially involved in the disruption of normal bacterial ecology, malabsorption of nutrients, increased glycolysis and glutaminolysis. Our findings revealed that the fecal metabolic profiles of healthy controls can be distinguished from CRC patients, even in the early stage (stage I/II), highlighting the potential utility of NMR-based fecal metabolomics fingerprinting as predictors of earlier diagnosis in CRC patients.
Lin, Yan; Ma, Changchun; Liu, Chengkang; Wang, Zhening; Yang, Jurong; Liu, Xinmu; Shen, Zhiwei; Wu, Renhua
2016-01-01
Colorectal cancer (CRC) is a growing cause of mortality in developing countries, warranting investigation into its earlier detection for optimal disease management. A metabolomics based approach provides potential for noninvasive identification of biomarkers of colorectal carcinogenesis, as well as dissection of molecular pathways of pathophysiological conditions. Here, proton nuclear magnetic resonance spectroscopy (1HNMR) -based metabolomic approach was used to profile fecal metabolites of 68 CRC patients (stage I/II=20; stage III=25 and stage IV=23) and 32 healthy controls (HC). Pattern recognition through principal component analysis (PCA) and orthogonal partial least squares-discriminant analysis (OPLS-DA) was applied on 1H-NMR processed data for dimension reduction. OPLS-DA revealed that each stage of CRC could be clearly distinguished from HC based on their metabolomic profiles. Successive analyses identified distinct disturbances to fecal metabolites of CRC patients at various stages, compared with those in cancer free controls, including reduced levels of acetate, butyrate, propionate, glucose, glutamine, and elevated quantities of succinate, proline, alanine, dimethylglycine, valine, glutamate, leucine, isoleucine and lactate. These altered fecal metabolites potentially involved in the disruption of normal bacterial ecology, malabsorption of nutrients, increased glycolysis and glutaminolysis. Our findings revealed that the fecal metabolic profiles of healthy controls can be distinguished from CRC patients, even in the early stage (stage I/II), highlighting the potential utility of NMR-based fecal metabolomics fingerprinting as predictors of earlier diagnosis in CRC patients. PMID:27107423
Design a Fuzzy Rule-based Expert System to Aid Earlier Diagnosis of Gastric Cancer.
Safdari, Reza; Arpanahi, Hadi Kazemi; Langarizadeh, Mostafa; Ghazisaiedi, Marjan; Dargahi, Hossein; Zendehdel, Kazem
2018-01-01
Screening and health check-up programs are most important sanitary priorities, that should be undertaken to control dangerous diseases such as gastric cancer that affected by different factors. More than 50% of gastric cancer diagnoses are made during the advanced stage. Currently, there is no systematic approach for early diagnosis of gastric cancer. to develop a fuzzy expert system that can identify gastric cancer risk levels in individuals. This system was implemented in MATLAB software, Mamdani inference technique applied to simulate reasoning of experts in the field, a total of 67 fuzzy rules extracted as a rule-base based on medical expert's opinion. 50 case scenarios were used to evaluate the system, the information of case reports is given to the system to find risk level of each case report then obtained results were compared with expert's diagnosis. Results revealed that sensitivity was 92.1% and the specificity was 83.1%. The results show that is possible to develop a system that can identify High risk individuals for gastric cancer. The system can lead to earlier diagnosis, this may facilitate early treatment and reduce gastric cancer mortality rate.
Treweek, Shaun; Francis, Jill J; Bonetti, Debbie; Barnett, Karen; Eccles, Martin P; Hudson, Jemma; Jones, Claire; Pitts, Nigel B; Ricketts, Ian W; Sullivan, Frank; Weal, Mark; MacLennan, Graeme
2016-12-01
Intervention Modeling Experiments (IMEs) are a way of developing and testing behavior change interventions before a trial. We aimed to test this methodology in a Web-based IME that replicated the trial component of an earlier, paper-based IME. Three-arm, Web-based randomized evaluation of two interventions (persuasive communication and action plan) and a "no intervention" comparator. The interventions were designed to reduce the number of antibiotic prescriptions in the management of uncomplicated upper respiratory tract infection. General practitioners (GPs) were invited to complete an online questionnaire and eight clinical scenarios where an antibiotic might be considered. One hundred twenty-nine GPs completed the questionnaire. GPs receiving the persuasive communication did not prescribe an antibiotic in 0.70 more scenarios (95% confidence interval [CI] = 0.17-1.24) than those in the control arm. For the action plan, GPs did not prescribe an antibiotic in 0.63 (95% CI = 0.11-1.15) more scenarios than those in the control arm. Unlike the earlier IME, behavioral intention was unaffected by the interventions; this may be due to a smaller sample size than intended. A Web-based IME largely replicated the findings of an earlier paper-based study, providing some grounds for confidence in the IME methodology. Copyright © 2016 Elsevier Inc. All rights reserved.
Angeletti, C; Pezzotti, P; Antinori, A; Mammone, A; Navarra, A; Orchi, N; Lorenzini, P; Mecozzi, A; Ammassari, A; Murachelli, S; Ippolito, G; Girardi, E
2014-03-01
Combination antiretroviral therapy (cART) has become the main driver of total costs of caring for persons living with HIV (PLHIV). The present study estimated the short/medium-term cost trends in response to the recent evolution of national guidelines and regional therapeutic protocols for cART in Italy. We developed a deterministic mathematical model that was calibrated using epidemic data for Lazio, a region located in central Italy with about six million inhabitants. In the Base Case Scenario, the estimated number of PLHIV in the Lazio region increased over the period 2012-2016 from 14 414 to 17 179. Over the same period, the average projected annual cost for treating the HIV-infected population was €147.0 million. An earlier cART initiation resulted in a rise of 2.3% in the average estimated annual cost, whereas an increase from 27% to 50% in the proportion of naïve subjects starting cART with a nonnucleoside reverse transcriptase inhibitor (NNRTI)-based regimen resulted in a reduction of 0.3%. Simplification strategies based on NNRTIs co-formulated in a single tablet regimen and protease inhibitor/ritonavir-boosted monotherapy produced an overall reduction in average annual costs of 1.5%. A further average saving of 3.3% resulted from the introduction of generic antiretroviral drugs. In the medium term, cost saving interventions could finance the increase in costs resulting from the inertial growth in the number of patients requiring treatment and from the earlier treatment initiation recommended in recent guidelines. © 2013 British HIV Association.
Nouér, Simone A; Nucci, Marcio; Kumar, Naveen Sanath; Grazziutti, Monica; Barlogie, Bart; Anaissie, Elias
2011-10-01
Current criteria for assessing treatment response of invasive aspergillosis (IA) rely on nonspecific subjective parameters. We hypothesized that an Aspergillus-specific response definition based on the kinetics of serum Aspergillus galactomannan index (GMI) would provide earlier and more objective response assessment. We compared the 6-week European Organization for Research and Treatment of Cancer/Mycoses Study Group (EORTC/MSG) response criteria with GMI-based response among 115 cancer patients with IA. Success according to GMI required survival with repeatedly negative GMI for ≥2 weeks. Time to response and agreement between the 2 definitions were the study endpoints. Success according to EORTC/MSG and GMI criteria was observed in 73 patients (63%) and 83 patients (72%), respectively. The GMI-based response was determined at a median of 21 days after treatment initiation (range, 15-41 days), 3 weeks before the EORTC/MSG time point, in 72 (87%) of 83 responders. Agreement between definitions was shown in all 32 nonresponders and in 73 of the 83 responders (91% overall), with an excellent κ correlation coefficient of 0.819. Among 10 patients with discordant response (EORTC/MSG failure, GMI success), 1 is alive without IA 3 years after diagnosis; for the other, aspergillosis could not be detected at autopsy. The presence of other life-threatening complications in the remaining 8 patients indicates that IA had resolved. The Aspergillus-specific GMI-based criteria compare favorably to current response definitions for IA and significantly shorten time to response assessment. These criteria rely on a simple, reproducible, objective, and Aspergillus-specific test and should serve as the primary endpoint in trials of IA.
Cohen, Joshua D.; Javed, Ammar A.; Thoburn, Christopher; Wong, Fay; Tie, Jeanne; Gibbs, Peter; Schmidt, C. Max; Yip-Schneider, Michele T.; Allen, Peter J.; Schattner, Mark; Brand, Randall E.; Singhi, Aatur D.; Petersen, Gloria M.; Hong, Seung-Mo; Kim, Song Cheol; Falconi, Massimo; Doglioni, Claudio; Weiss, Matthew J.; Ahuja, Nita; He, Jin; Makary, Martin A.; Maitra, Anirban; Hanash, Samir M.; Dal Molin, Marco; Wang, Yuxuan; Li, Lu; Ptak, Janine; Dobbyn, Lisa; Schaefer, Joy; Silliman, Natalie; Popoli, Maria; Goggins, Michael G.; Hruban, Ralph H.; Wolfgang, Christopher L.; Klein, Alison P.; Tomasetti, Cristian; Papadopoulos, Nickolas; Kinzler, Kenneth W.; Vogelstein, Bert; Lennon, Anne Marie
2017-01-01
The earlier diagnosis of cancer is one of the keys to reducing cancer deaths in the future. Here we describe our efforts to develop a noninvasive blood test for the detection of pancreatic ductal adenocarcinoma. We combined blood tests for KRAS gene mutations with carefully thresholded protein biomarkers to determine whether the combination of these markers was superior to any single marker. The cohort tested included 221 patients with resectable pancreatic ductal adenocarcinomas and 182 control patients without known cancer. KRAS mutations were detected in the plasma of 66 patients (30%), and every mutation found in the plasma was identical to that subsequently found in the patient’s primary tumor (100% concordance). The use of KRAS in conjunction with four thresholded protein biomarkers increased the sensitivity to 64%. Only one of the 182 plasma samples from the control cohort was positive for any of the DNA or protein biomarkers (99.5% specificity). This combinatorial approach may prove useful for the earlier detection of many cancer types. PMID:28874546
Cohen, Joshua D; Javed, Ammar A; Thoburn, Christopher; Wong, Fay; Tie, Jeanne; Gibbs, Peter; Schmidt, C Max; Yip-Schneider, Michele T; Allen, Peter J; Schattner, Mark; Brand, Randall E; Singhi, Aatur D; Petersen, Gloria M; Hong, Seung-Mo; Kim, Song Cheol; Falconi, Massimo; Doglioni, Claudio; Weiss, Matthew J; Ahuja, Nita; He, Jin; Makary, Martin A; Maitra, Anirban; Hanash, Samir M; Dal Molin, Marco; Wang, Yuxuan; Li, Lu; Ptak, Janine; Dobbyn, Lisa; Schaefer, Joy; Silliman, Natalie; Popoli, Maria; Goggins, Michael G; Hruban, Ralph H; Wolfgang, Christopher L; Klein, Alison P; Tomasetti, Cristian; Papadopoulos, Nickolas; Kinzler, Kenneth W; Vogelstein, Bert; Lennon, Anne Marie
2017-09-19
The earlier diagnosis of cancer is one of the keys to reducing cancer deaths in the future. Here we describe our efforts to develop a noninvasive blood test for the detection of pancreatic ductal adenocarcinoma. We combined blood tests for KRAS gene mutations with carefully thresholded protein biomarkers to determine whether the combination of these markers was superior to any single marker. The cohort tested included 221 patients with resectable pancreatic ductal adenocarcinomas and 182 control patients without known cancer. KRAS mutations were detected in the plasma of 66 patients (30%), and every mutation found in the plasma was identical to that subsequently found in the patient's primary tumor (100% concordance). The use of KRAS in conjunction with four thresholded protein biomarkers increased the sensitivity to 64%. Only one of the 182 plasma samples from the control cohort was positive for any of the DNA or protein biomarkers (99.5% specificity). This combinatorial approach may prove useful for the earlier detection of many cancer types.
Data base to compare calculations and observations
Tichler, J.L.
Meteorological and climatological data bases were compared with known tritium release points and diffusion calculations to determine if calculated concentrations could replace measure concentrations at the monitoring stations. Daily tritium concentrations were monitored at 8 stations and 16 possible receptors. Automated data retrieval strategies are listed. (PSB)
Daly, Walter J.
2004-01-01
Thirteenth century medical science, like medieval scholarship in general, was directed at reconciliation of Greek philosophy/science with prevailing medieval theology and philosophy. Peter of Spain [later Pope John XXI] was the leading medical scholar of his time. Peter wrote a long book on the soul. Imbedded in it was a chapter on the motion of the heart. Peter's De Motu was based on his own medical experience and Galen's De Usu Partium and De Usu Respirationis and De Usu Pulsuum. This earlier De Motu defines a point on the continuum of intellectual development leading to us and into the future. Thirteenth century scholarship relied on past authority to a degree that continues to puzzle and beg explanation. Images Fig. 1 PMID:17060956
Tamaru, Yoshiki; Naito, Yasuo; Nishikawa, Takashi
2017-11-01
Elderly people are less able to manipulate objects skilfully than young adults. Although previous studies have examined age-related deterioration of hand movements with a focus on the phase after grasping objects, the changes in the reaching phase have not been studied thus far. We aimed to examine whether changes in hand shape patterns during the reaching phase of grasping movements differ between young adults and the elderly. Ten healthy elderly adults and 10 healthy young adults were examined using the Simple Test for Evaluating Hand Functions and kinetic analysis of hand pre-shaping reach-to-grasp tasks. The results were then compared between the two groups. For kinetic analysis, we measured the time of peak tangential velocity of the wrist and the inter-fingertip distance (the distance between the tips of the thumb and index finger) at different time points. The results showed that the elderly group's performance on the Simple Test for Evaluating Hand Functions was significantly lower than that of the young adult group, irrespective of whether the dominant or non-dominant hand was used, indicating deterioration of hand movement in the elderly. The peak tangential velocity of the wrist in either hand appeared significantly earlier in the elderly group than in the young adult group. The elderly group also showed larger inter-fingertip distances with arch-like fingertip trajectories compared to the young adult group for all object sizes. To perform accurate prehension, elderly people have an earlier peak tangential velocity point than young adults. This allows for a longer adjustment time for reaching and grasping movements and for reducing errors in object prehension by opening the hand and fingers wider. Elderly individuals gradually modify their strategy based on previous successes and failures during daily living to compensate for their decline in dexterity and operational capabilities. © 2017 Japanese Psychogeriatric Society.
Caputo, Maria Luce; Muschietti, Sandro; Burkart, Roman; Benvenuti, Claudio; Conte, Giulio; Regoli, François; Mauri, Romano; Klersy, Catherine; Moccetti, Tiziano; Auricchio, Angelo
2017-05-01
We compared the time to initiation of cardiopulmonary resuscitation (CPR) by lay responders and/or first responders alerted either via Short Message Service (SMS) or by using a mobile application-based alert system (APP). The Ticino Registry of Cardiac Arrest collects all data about out-of-hospital cardiac arrests (OHCAs) occurring in the Canton of Ticino. At the time of a bystander's call, the EMS dispatcher sends one ambulance and alerts the first-responders network made up of police officers or fire brigade equipped with an automatic external defibrillator, the so called "traditional" first responders, and - if the scene was considered safe - lay responders as well. We evaluated the time from call to arrival of traditional first responders and/or lay responders when alerted either via SMS or the new developed mobile APP. Over the study period 593 OHCAs have occurred. Notification to the first responders network was sent via SMS in 198 cases and via mobile APP in 134 cases. Median time to first responder/lay responder arrival on scene was significantly reduced by the APP-based system (3.5 [2.8-5.2]) compared to the SMS-based system (5.6 [4.2-8.5] min, p 0.0001). The proportion of lay responders arriving first on the scene significantly increased (70% vs. 15%, p<0.01) with the APP. Earlier arrival of a first responder or of a lay responder determined a higher survival rate. The mobile APP system is highly efficient in the recruitment of first responders, significantly reducing the time to the initiation of CPR thus increasing survival rates. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Goodwin, Shikha J.; Blackman, Rachael K.; Sakellaridi, Sofia
2012-01-01
Human cognition is characterized by flexibility, the ability to select not only which action but which cognitive process to engage to best achieve the current behavioral objective. The ability to tailor information processing in the brain to rules, goals, or context is typically referred to as executive control, and although there is consensus that prefrontal cortex is importantly involved, at present we have an incomplete understanding of how computational flexibility is implemented at the level of prefrontal neurons and networks. To better understand the neural mechanisms of computational flexibility, we simultaneously recorded the electrical activity of groups of single neurons within prefrontal and posterior parietal cortex of monkeys performing a task that required executive control of spatial cognitive processing. In this task, monkeys applied different spatial categorization rules to reassign the same set of visual stimuli to alternative categories on a trial-by-trial basis. We found that single neurons were activated to represent spatially defined categories in a manner that was rule dependent, providing a physiological signature of a cognitive process that was implemented under executive control. We found also that neural signals coding rule-dependent categories were distributed between the parietal and prefrontal cortex—however, not equally. Rule-dependent category signals were stronger, more powerfully modulated by the rule, and earlier to emerge in prefrontal cortex relative to parietal cortex. This suggests that prefrontal cortex may initiate the switch in neural representation at a network level that is important for computational flexibility. PMID:22399773
SPREADSHEET BASED SCALING CALCULATIONS AND MEMBRANE PERFORMANCE
Many membrane element manufacturers provide a computer program to aid buyers in the use of their elements. However, to date there are few examples of fully integrated public domain software available for calculating reverse osmosis and nanofiltration system performance. The Total...
[Biometric bases: basic concepts of probability calculation].
Dinya, E
1998-04-26
The author gives or outline of the basic concepts of probability theory. The bases of the event algebra, definition of the probability, the classical probability model and the random variable are presented.
DFT calculation of pKa’s for dimethoxypyrimidinylsalicylic based herbicides
NASA Astrophysics Data System (ADS)
Delgado, Eduardo J.
2009-03-01
Dimethoxypyrimidinylsalicylic derived compounds show potent herbicidal activity as a result of the inhibition of acetohydroxyacid synthase, the first common enzyme in the biosynthetic pathway of the branched-chain aminoacids (valine, leucine and isoleucine) in plants, bacteria and fungi. Despite its practical importance, this family of compounds have been poorly characterized from a physico-chemical point of view. Thus for instance, their pK a's have not been reported earlier neither experimentally nor theoretically. In this study, the acid-dissociation constants of 39 dimethoxypyrimidinylsalicylic derived herbicides are calculated by DFT methods at B3LYP/6-31G(d,p) level of theory. The calculated values are validated by two checking tests based on the Hammett equation.
40 CFR 1066.610 - Mass-based and molar-based exhaust emission calculations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Mass-based and molar-based exhaust... (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Calculations § 1066.610 Mass-based and molar-based exhaust emission calculations. (a) Calculate your total mass of emissions over a test cycle as...
40 CFR 1066.610 - Mass-based and molar-based exhaust emission calculations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Mass-based and molar-based exhaust... (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Calculations § 1066.610 Mass-based and molar-based exhaust emission calculations. (a) Calculate your total mass of emissions over a test cycle as...
Toward Explaining Earlier Retirement after 1970.
ERIC Educational Resources Information Center
Ippolito, Richard A.
1990-01-01
Rule changes in the social security system and pension plans suggest that labor force participation rates for men aged 55 to 64 fell by 20 percent from 1970 through 1986 because of the increase in social security benefits and a change in private pension rules encouraging earlier retirement. (Author/JOW)
Software-Based Visual Loan Calculator For Banking Industry
NASA Astrophysics Data System (ADS)
Isizoh, A. N.; Anazia, A. E.; Okide, S. O. 3; Onyeyili, T. I.; Okwaraoka, C. A. P.
2012-03-01
industry is very necessary in modern day banking system using many design techniques for security reasons. This paper thus presents the software-based design and implementation of a Visual Loan calculator for banking industry using Visual Basic .Net (VB.Net). The fundamental approach to this is to develop a Graphical User Interface (GUI) using VB.Net operating tools, and then developing a working program which calculates the interest of any loan obtained. The VB.Net programming was done, implemented and the software proved satisfactory.
Nomeir, Amin A; Pramanik, Birendra N; Heimark, Larry; Bennett, Frank; Veals, John; Bartner, Peter; Hilbert, Maryjane; Saksena, Anil; McNamara, Paul; Girijavallabhan, Viyyoor; Ganguly, Ashit K; Lovey, Raymond; Pike, Russell; Wang, Haiyan; Liu, Yi-Tsung; Kumari, Pramila; Korfmacher, Walter; Lin, Chin-Chung; Cacciapuoti, Anthony; Loebenberg, David; Hare, Roberta; Miller, George; Pickett, Cecil
2008-04-01
Posaconazole (SCH 56592) is a novel triazole antifungal drug that is marketed in Europe and the United States under the trade name 'Noxafil' for prophylaxis against invasive fungal infections. SCH 56592 was discovered as a possible active metabolite of SCH 51048, an earlier lead. Initial studies have shown that serum concentrations determined by a microbiological assay were higher than those determined by HPLC from animals dosed with SCH 51048. Subsequently, several animals species were dosed with (3)H-SCH 51048 and the serum was analyzed for total radioactivity, SCH 51048 concentration and antifungal activity. The antifungal activity was higher than that expected based on SCH 51048 serum concentrations, confirming the presence of active metabolite(s). Metabolite profiling of serum samples at selected time intervals pinpointed the peak that was suspected to be the active metabolite. Consequently, (3)H-SCH 51048 was administered to a large group of mice, the serum was harvested and the metabolite was isolated by extraction and semipreparative HPLC. LC-MS/MS analysis suggested that the active metabolite is a secondary alcohol with the hydroxyl group in the aliphatic side chain of SCH 51048. All corresponding monohydroxylated diastereomeric mixtures were synthesized and characterized. The HPLC retention time and LC-MS/MS spectra of the diastereomeric secondary alcohols of SCH 51048 were similar to those of the isolated active metabolite. Finally, all corresponding individual monohydroxylated diasteriomers were synthesized and evaluated for in vitro and in vivo antifungal potencies, as well as pharmacokinetics. SCH 56592 emerged as the candidate with the best overall profile.
Yen, Amy Ming-Fang; Boucher, Barbara J; Chiu, Sherry Yueh-Hsia; Fann, Jean Ching-Yuan; Chen, Sam Li-Sheng; Huang, Kuo-Chin; Chen, Hsiu-Hsi
2016-08-02
Transgenerational effects of paternal Areca catechu nut chewing on offspring metabolic syndrome (MetS) risk in humans, on obesity and diabetes mellitus experimentally, and of paternal smoking on offspring obesity, are reported, likely attributable to genetic and epigenetic effects previously reported in betel-associated disease. We aimed to determine the effects of paternal smoking, and betel chewing, on the risks of early MetS in human offspring. The 13 179 parent-child trios identified from 238 364 Taiwanese aged ≥20 years screened at 2 community-based integrated screening sessions were tested for the effects of paternal smoking, areca nut chewing, and their duration prefatherhood on age of detecting offspring MetS at screen by using a Cox proportional hazards regression model. Offspring MetS risks increased with prefatherhood paternal areca nutusage (adjusted hazard ratio, 1.77; 95% confidence interval [CI], 1.23-2.53) versus nonchewing fathers (adjusted hazard ratio, 3.28; 95% CI, 1.67-6.43) with >10 years paternal betel chewing, 1.62 (95% CI, 0.88-2.96) for 5 to 9 years, and 1.42 (95% CI, 0.80-2.54) for <5 years betel usage prefatherhood (Ptrend=0.0002), with increased risk (adjusted hazard ratio, 1.95; 95% CI, 1.26-3.04) for paternal areca nut usage from 20 to 29 years of age, versus from >30 years of age (adjusted hazard ratio,1.61; 95% CI, 0.22-11.69). MetS offspring risk for paternal smoking increased dosewise (Ptrend<0.0001) with earlier age of onset (Ptrend=0.0009), independently. Longer duration of paternal betel quid chewing and smoking, prefatherhood, independently predicted early occurrence of incident MetS in offspring, corroborating previously reported transgenerational effects of these habits, and supporting the need for habit-cessation program provision. © 2016 American Heart Association, Inc.
Earlier snowmelt and warming lead to earlier but not necessarily more plant growth.
Livensperger, Carolyn; Steltzer, Heidi; Darrouzet-Nardi, Anthony; Sullivan, Patrick F; Wallenstein, Matthew; Weintraub, Michael N
2016-01-01
Climate change over the past ∼50 years has resulted in earlier occurrence of plant life-cycle events for many species. Across temperate, boreal and polar latitudes, earlier seasonal warming is considered the key mechanism leading to earlier leaf expansion and growth. Yet, in seasonally snow-covered ecosystems, the timing of spring plant growth may also be cued by snowmelt, which may occur earlier in a warmer climate. Multiple environmental cues protect plants from growing too early, but to understand how climate change will alter the timing and magnitude of plant growth, experiments need to independently manipulate temperature and snowmelt. Here, we demonstrate that altered seasonality through experimental warming and earlier snowmelt led to earlier plant growth, but the aboveground production response varied among plant functional groups. Earlier snowmelt without warming led to early leaf emergence, but often slowed the rate of leaf expansion and had limited effects on aboveground production. Experimental warming alone had small and inconsistent effects on aboveground phenology, while the effect of the combined treatment resembled that of early snowmelt alone. Experimental warming led to greater aboveground production among the graminoids, limited changes among deciduous shrubs and decreased production in one of the dominant evergreen shrubs. As a result, we predict that early onset of the growing season may favour early growing plant species, even those that do not shift the timing of leaf expansion. Published by Oxford University Press on behalf of the Annals of Botany Company.
Pcetk: A pDynamo-based Toolkit for Protonation State Calculations in Proteins.
Feliks, Mikolaj; Field, Martin J
2015-10-26
Pcetk (a pDynamo-based continuum electrostatic toolkit) is an open-source, object-oriented toolkit for the calculation of proton binding energetics in proteins. The toolkit is a module of the pDynamo software library, combining the versatility of the Python scripting language and the efficiency of the compiled languages, C and Cython. In the toolkit, we have connected pDynamo to the external Poisson-Boltzmann solver, extended-MEAD. Our goal was to provide a modern and extensible environment for the calculation of protonation states, electrostatic energies, titration curves, and other electrostatic-dependent properties of proteins. Pcetk is freely available under the CeCILL license, which is compatible with the GNU General Public License. The toolkit can be found on the Web at the address http://github.com/mfx9/pcetk. The calculation of protonation states in proteins requires a knowledge of pKa values of protonatable groups in aqueous solution. However, for some groups, such as protonatable ligands bound to protein, the pKa aq values are often difficult to obtain from experiment. As a complement to Pcetk, we revisit an earlier computational method for the estimation of pKa aq values that has an accuracy of ± 0.5 pKa-units or better. Finally, we verify the Pcetk module and the method for estimating pKa aq values with different model cases.
Sensor Based Engine Life Calculation: A Probabilistic Perspective
NASA Technical Reports Server (NTRS)
Guo, Ten-Huei; Chen, Philip
2003-01-01
It is generally known that an engine component will accumulate damage (life usage) during its lifetime of use in a harsh operating environment. The commonly used cycle count for engine component usage monitoring has an inherent range of uncertainty which can be overly costly or potentially less safe from an operational standpoint. With the advance of computer technology, engine operation modeling, and the understanding of damage accumulation physics, it is possible (and desirable) to use the available sensor information to make a more accurate assessment of engine component usage. This paper describes a probabilistic approach to quantify the effects of engine operating parameter uncertainties on the thermomechanical fatigue (TMF) life of a selected engine part. A closed-loop engine simulation with a TMF life model is used to calculate the life consumption of different mission cycles. A Monte Carlo simulation approach is used to generate the statistical life usage profile for different operating assumptions. The probabilities of failure of different operating conditions are compared to illustrate the importance of the engine component life calculation using sensor information. The results of this study clearly show that a sensor-based life cycle calculation can greatly reduce the risk of component failure as well as extend on-wing component life by avoiding unnecessary maintenance actions.
Validation of GPU based TomoTherapy dose calculation engine.
Chen, Quan; Lu, Weiguo; Chen, Yu; Chen, Mingli; Henderson, Douglas; Sterpin, Edmond
2012-04-01
The graphic processing unit (GPU) based TomoTherapy convolution/superposition(C/S) dose engine (GPU dose engine) achieves a dramatic performance improvement over the traditional CPU-cluster based TomoTherapy dose engine (CPU dose engine). Besides the architecture difference between the GPU and CPU, there are several algorithm changes from the CPU dose engine to the GPU dose engine. These changes made the GPU dose slightly different from the CPU-cluster dose. In order for the commercial release of the GPU dose engine, its accuracy has to be validated. Thirty eight TomoTherapy phantom plans and 19 patient plans were calculated with both dose engines to evaluate the equivalency between the two dose engines. Gamma indices (Γ) were used for the equivalency evaluation. The GPU dose was further verified with the absolute point dose measurement with ion chamber and film measurements for phantom plans. Monte Carlo calculation was used as a reference for both dose engines in the accuracy evaluation in heterogeneous phantom and actual patients. The GPU dose engine showed excellent agreement with the current CPU dose engine. The majority of cases had over 99.99% of voxels with Γ(1%, 1 mm) < 1. The worst case observed in the phantom had 0.22% voxels violating the criterion. In patient cases, the worst percentage of voxels violating the criterion was 0.57%. For absolute point dose verification, all cases agreed with measurement to within ±3% with average error magnitude within 1%. All cases passed the acceptance criterion that more than 95% of the pixels have Γ(3%, 3 mm) < 1 in film measurement, and the average passing pixel percentage is 98.5%-99%. The GPU dose engine also showed similar degree of accuracy in heterogeneous media as the current TomoTherapy dose engine. It is verified and validated that the ultrafast TomoTherapy GPU dose engine can safely replace the existing TomoTherapy cluster based dose engine without degradation in dose accuracy.
Rapid Parallel Calculation of shell Element Based On GPU
NASA Astrophysics Data System (ADS)
Wanga, Jian Hua; Lia, Guang Yao; Lib, Sheng; Li, Guang Yao
2010-06-01
Long computing time bottlenecked the application of finite element. In this paper, an effective method to speed up the FEM calculation by using the existing modern graphic processing unit and programmable colored rendering tool was put forward, which devised the representation of unit information in accordance with the features of GPU, converted all the unit calculation into film rendering process, solved the simulation work of all the unit calculation of the internal force, and overcame the shortcomings of lowly parallel level appeared ever before when it run in a single computer. Studies shown that this method could improve efficiency and shorten calculating hours greatly. The results of emulation calculation about the elasticity problem of large number cells in the sheet metal proved that using the GPU parallel simulation calculation was faster than using the CPU's. It is useful and efficient to solve the project problems in this way.
Calculation and Study of Graphene Conductivity Based on Terahertz Spectroscopy
NASA Astrophysics Data System (ADS)
Feng, Xiaodong; Hu, Min; Zhou, Jun; Liu, Shenggang
2017-07-01
Based on terahertz time-domain spectroscopy system and two-dimensional scanning control system, terahertz transmission and reflection intensity mapping images on a graphene film are obtained, respectively. Then, graphene conductivity mapping images in the frequency range 0.5 to 2.5 THz are acquired according to the calculation formula. The conductivity of graphene at some typical regions is fitted by Drude-Smith formula to quantitatively compare the transmission and reflection measurements. The results show that terahertz reflection spectroscopy has a higher signal-to-noise ratio with less interference of impurities on the back of substrates. The effect of a red laser excitation on the graphene conductivity by terahertz time-domain transmission spectroscopy is also studied. The results show that the graphene conductivity in the excitation region is enhanced while that in the adjacent area is weakened which indicates carriers transport in graphene under laser excitation. This paper can make great contribution to the study on graphene electrical and optical properties in the terahertz regime and help design graphene terahertz devices.
Ice flood velocity calculating approach based on single view metrology
NASA Astrophysics Data System (ADS)
Wu, X.; Xu, L.
2017-02-01
Yellow River is the river in which the ice flood occurs most frequently in China, hence, the Ice flood forecasting has great significance for the river flood prevention work. In various ice flood forecast models, the flow velocity is one of the most important parameters. In spite of the great significance of the flow velocity, its acquisition heavily relies on manual observation or deriving from empirical formula. In recent years, with the high development of video surveillance technology and wireless transmission network, the Yellow River Conservancy Commission set up the ice situation monitoring system, in which live videos can be transmitted to the monitoring center through 3G mobile networks. In this paper, an approach to get the ice velocity based on single view metrology and motion tracking technique using monitoring videos as input data is proposed. First of all, River way can be approximated as a plane. On this condition, we analyze the geometry relevance between the object side and the image side. Besides, we present the principle to measure length in object side from image. Secondly, we use LK optical flow which support pyramid data to track the ice in motion. Combining the result of camera calibration and single view metrology, we propose a flow to calculate the real velocity of ice flood. At last we realize a prototype system by programming and use it to test the reliability and rationality of the whole solution.
Accreting Binary Populations in the Earlier Universe
NASA Technical Reports Server (NTRS)
Hornschemeier, Ann
2010-01-01
It is now understood that X-ray binaries dominate the hard X-ray emission from normal star-forming galaxies. Thanks to the deepest (2-4 Ms) Chandra surveys, such galaxies are now being studied in X-rays out to z approximates 4. Interesting X-ray stacking results (based on 30+ galaxies per redshift bin) suggest that the mean rest-frame 2-10 keV luminosity from z=3-4 Lyman break galaxies (LBGs), is comparable to the most powerful starburst galaxies in the local Universe. This result possibly indicates a similar production mechanism for accreting binaries over large cosmological timescales. To understand and constrain better the production of X-ray binaries in high-redshift LBGs, we have utilized XMM-Newton observations of a small sample of z approximates 0.1 GALEX-selected Ultraviolet-Luminous Galaxies (UVLGs); local analogs to high-redshift LBGs. Our observations enable us to study the X-ray emission from LBG-like galaxies on an individual basis, thus allowing us to constrain object-to-object variances in this population. We supplement these results with X-ray stacking constraints using the new 3.2 Ms Chandra Deep Field-South (completed spring 2010) and LBG candidates selected from HST, Swift UVOT, and ground-based data. These measurements provide new X-ray constraints that sample well the entire z=0-4 baseline
Independent calculation-based verification of IMRT plans using a 3D dose-calculation engine.
Arumugam, Sankar; Xing, Aitang; Goozee, Gary; Holloway, Lois
2013-01-01
Independent monitor unit verification of intensity-modulated radiation therapy (IMRT) plans requires detailed 3-dimensional (3D) dose verification. The aim of this study was to investigate using a 3D dose engine in a second commercial treatment planning system (TPS) for this task, facilitated by in-house software. Our department has XiO and Pinnacle TPSs, both with IMRT planning capability and modeled for an Elekta-Synergy 6MV photon beam. These systems allow the transfer of computed tomography (CT) data and RT structures between them but do not allow IMRT plans to be transferred. To provide this connectivity, an in-house computer programme was developed to convert radiation therapy prescription (RTP) files as generated by many planning systems into either XiO or Pinnacle IMRT file formats. Utilization of the technique and software was assessed by transferring 14 IMRT plans from XiO and Pinnacle onto the other system and performing 3D dose verification. The accuracy of the conversion process was checked by comparing the 3D dose matrices and dose volume histograms (DVHs) of structures for the recalculated plan on the same system. The developed software successfully transferred IMRT plans generated by 1 planning system into the other. Comparison of planning target volume (TV) DVHs for the original and recalculated plans showed good agreement; a maximum difference of 2% in mean dose, - 2.5% in D95, and 2.9% in V95 was observed. Similarly, a DVH comparison of organs at risk showed a maximum difference of +7.7% between the original and recalculated plans for structures in both high- and medium-dose regions. However, for structures in low-dose regions (less than 15% of prescription dose) a difference in mean dose up to +21.1% was observed between XiO and Pinnacle calculations. A dose matrix comparison of original and recalculated plans in XiO and Pinnacle TPSs was performed using gamma analysis with 3%/3mm criteria. The mean and standard deviation of pixels passing gamma
Physics based calculation of the fine structure constant
Lestone, John Paul
2009-01-01
We assume that the coupling between particles and photons is defined by a surface area and a temperature, and that the square of the temperature is the inverse of the surface area ({Dirac_h}=c= 1). By making assumptions regarding stimulated emission and effects associated with the finite length of a string that forms the particle surface, the fine structure constant is calculated to be {approx}1/137.04. The corresponding calculated fundamental unit of charge is 1.6021 x 10{sup -19} C.
Ruiz, B C; Tucker, W K; Kirby, R R
1975-01-01
With a desk-top, programmable calculator, it is now possible to do complex, previously time-consuming computations in the blood-gas laboratory. The authors have developed a program with the necessary algorithms for temperature correction of blood gases and calculation of acid-base variables and intrapulmonary shunt. It was necessary to develop formulas for the Po2 temperature-correction coefficient, the oxyhemoglobin-dissociation curve for adults (withe necessary adjustments for fetal blood), and changes in water vapor pressure due to variation in body temperature. Using this program in conjuction with a Monroe 1860-21 statistical programmable calculator, it is possible to temperature-correct pH,Pco2, and Po2. The machine will compute alveolar-arterial oxygen tension gradient, oxygen saturation (So2), oxygen content (Co2), actual HCO minus 3 and a modified base excess. If arterial blood and mixed venous blood are obtained, the calculator will print out intrapulmonary shunt data (Qs/Qt) and arteriovenous oxygen differences (a minus vDo2). There also is a formula to compute P50 if pH,Pco2,Po2, and measured So2 from two samples of tonometered blood (one above 50 per cent and one below 50 per cent saturation) are put into the calculator.
Space resection model calculation based on Random Sample Consensus algorithm
NASA Astrophysics Data System (ADS)
Liu, Xinzhu; Kang, Zhizhong
2016-03-01
Resection has been one of the most important content in photogrammetry. It aims at the position and attitude information of camera at the shooting point. However in some cases, the observed values for calculating are with gross errors. This paper presents a robust algorithm that using RANSAC method with DLT model can effectually avoiding the difficulties to determine initial values when using co-linear equation. The results also show that our strategies can exclude crude handicap and lead to an accurate and efficient way to gain elements of exterior orientation.
Freeway travel speed calculation model based on ETC transaction data.
Weng, Jiancheng; Yuan, Rongliang; Wang, Ru; Wang, Chang
2014-01-01
Real-time traffic flow operation condition of freeway gradually becomes the critical information for the freeway users and managers. In fact, electronic toll collection (ETC) transaction data effectively records operational information of vehicles on freeway, which provides a new method to estimate the travel speed of freeway. First, the paper analyzed the structure of ETC transaction data and presented the data preprocess procedure. Then, a dual-level travel speed calculation model was established under different levels of sample sizes. In order to ensure a sufficient sample size, ETC data of different enter-leave toll plazas pairs which contain more than one road segment were used to calculate the travel speed of every road segment. The reduction coefficient α and reliable weight θ for sample vehicle speed were introduced in the model. Finally, the model was verified by the special designed field experiments which were conducted on several freeways in Beijing at different time periods. The experiments results demonstrated that the average relative error was about 6.5% which means that the freeway travel speed could be estimated by the proposed model accurately. The proposed model is helpful to promote the level of the freeway operation monitoring and the freeway management, as well as to provide useful information for the freeway travelers.
Many-body calculations with deuteron based single-particle bases and their associated natural orbits
NASA Astrophysics Data System (ADS)
Puddu, G.
2018-06-01
We use the recently introduced single-particle states obtained from localized deuteron wave-functions as a basis for nuclear many-body calculations. We show that energies can be substantially lowered if the natural orbits (NOs) obtained from this basis are used. We use this modified basis for {}10{{B}}, {}16{{O}} and {}24{{Mg}} employing the bare NNLOopt nucleon–nucleon interaction. The lowering of the energies increases with the mass. Although in principle NOs require a full scale preliminary many-body calculation, we found that an approximate preliminary many-body calculation, with a marginal increase in the computational cost, is sufficient. The use of natural orbits based on an harmonic oscillator basis leads to a much smaller lowering of the energies for a comparable computational cost.
Formation flying benefits based on vortex lattice calculations
NASA Technical Reports Server (NTRS)
Maskew, B.
1977-01-01
A quadrilateral vortex-lattice method was applied to a formation of three wings to calculate force and moment data for use in estimating potential benefits of flying aircraft in formation on extended range missions, and of anticipating the control problems which may exist. The investigation led to two types of formation having virtually the same overall benefits for the formation as a whole, i.e., a V or echelon formation and a double row formation (with two staggered rows of aircraft). These formations have unequal savings on aircraft within the formation, but this allows large longitudinal spacings between aircraft which is preferable to the small spacing required in formations having equal benefits for all aircraft. A reasonable trade-off between a practical formation size and range benefit seems to lie at about three to five aircraft with corresponding maximum potential range increases of about 46 percent to 67 percent. At this time it is not known what fraction of this potential range increase is achievable in practice.
QED Based Calculation of the Fine Structure Constant
Lestone, John Paul
2016-10-13
Quantum electrodynamics is complex and its associated mathematics can appear overwhelming for those not trained in this field. Here, semi-classical approaches are used to obtain a more intuitive feel for what causes electrostatics, and the anomalous magnetic moment of the electron. These intuitive arguments lead to a possible answer to the question of the nature of charge. Virtual photons, with a reduced wavelength of λ, are assumed to interact with isolated electrons with a cross section of πλ 2. This interaction is assumed to generate time-reversed virtual photons that are capable of seeking out and interacting with other electrons. Thismore » exchange of virtual photons between particles is assumed to generate and define the strength of electromagnetism. With the inclusion of near-field effects the model presented here gives a fine structure constant of ~1/137 and an anomalous magnetic moment of the electron of ~0.00116. These calculations support the possibility that near-field corrections are the key to understanding the numerical value of the dimensionless fine structure constant.« less
Coupled-cluster based basis sets for valence correlation calculations
Claudino, Daniel; Bartlett, Rodney J., E-mail: bartlett@qtp.ufl.edu; Gargano, Ricardo
Novel basis sets are generated that target the description of valence correlation in atoms H through Ar. The new contraction coefficients are obtained according to the Atomic Natural Orbital (ANO) procedure from CCSD(T) (coupled-cluster singles and doubles with perturbative triples correction) density matrices starting from the primitive functions of Dunning et al. [J. Chem. Phys. 90, 1007 (1989); ibid. 98, 1358 (1993); ibid. 100, 2975 (1993)] (correlation consistent polarized valence X-tuple zeta, cc-pVXZ). The exponents of the primitive Gaussian functions are subject to uniform scaling in order to ensure satisfaction of the virial theorem for the corresponding atoms. These newmore » sets, named ANO-VT-XZ (Atomic Natural Orbital Virial Theorem X-tuple Zeta), have the same number of contracted functions as their cc-pVXZ counterparts in each subshell. The performance of these basis sets is assessed by the evaluation of the contraction errors in four distinct computations: correlation energies in atoms, probing the density in different regions of space via 〈r{sup n}〉 (−3 ≤ n ≤ 3) in atoms, correlation energies in diatomic molecules, and the quality of fitting potential energy curves as measured by spectroscopic constants. All energy calculations with ANO-VT-QZ have contraction errors within “chemical accuracy” of 1 kcal/mol, which is not true for cc-pVQZ, suggesting some improvement compared to the correlation consistent series of Dunning and co-workers.« less
UAV-based NDVI calculation over grassland: An alternative approach
NASA Astrophysics Data System (ADS)
Mejia-Aguilar, Abraham; Tomelleri, Enrico; Asam, Sarah; Zebisch, Marc
2016-04-01
The Normalised Difference Vegetation Index (NDVI) is one of the most widely used indicators for monitoring and assessing vegetation in remote sensing. The index relies on the reflectance difference between the near infrared (NIR) and red light and is thus able to track variations of structural, phenological, and biophysical parameters for seasonal and long-term monitoring. Conventionally, NDVI is inferred from space-borne spectroradiometers, such as MODIS, with moderate resolution up to 250 m ground resolution. In recent years, a new generation of miniaturized radiometers and integrated hyperspectral sensors with high resolution became available. Such small and light instruments are particularly adequate to be mounted on airborne unmanned aerial vehicles (UAV) used for monitoring services reaching ground sampling resolution in the order of centimetres. Nevertheless, such miniaturized radiometers and hyperspectral sensors are still very expensive and require high upfront capital costs. Therefore, we propose an alternative, mainly cheaper method to calculate NDVI using a camera constellation consisting of two conventional consumer-grade cameras: (i) a Ricoh GR modified camera that acquires the NIR spectrum by removing the internal infrared filter. A mounted optical filter additionally obstructs all wavelengths below 700 nm. (ii) A Ricoh GR in RGB configuration using two optical filters for blocking wavelengths below 600 nm as well as NIR and ultraviolet (UV) light. To assess the merit of the proposed method, we carry out two comparisons: First, reflectance maps generated by the consumer-grade camera constellation are compared to reflectance maps produced with a hyperspectral camera (Rikola). All imaging data and reflectance maps are processed using the PIX4D software. In the second test, the NDVI at specific points of interest (POI) generated by the consumer-grade camera constellation is compared to NDVI values obtained by ground spectral measurements using a
Calculation of thermomechanical fatigue life based on isothermal behavior
NASA Technical Reports Server (NTRS)
Halford, Gary R.; Saltsman, James F.
1987-01-01
The isothermal and thermomechanical fatigue (TMF) crack initiation response of a hypothetical material was analyzed. Expected thermomechanical behavior was evaluated numerically based on simple, isothermal, cyclic stress-strain - time characteristics and on strainrange versus cyclic life relations that have been assigned to the material. The attempt was made to establish basic minimum requirements for the development of a physically accurate TMF life-prediction model. A worthy method must be able to deal with the simplest of conditions: that is, those for which thermal cycling, per se, introduces no damage mechanisms other than those found in isothermal behavior. Under these assumed conditions, the TMF life should be obtained uniquely from known isothermal behavior. The ramifications of making more complex assumptions will be dealt with in future studies. Although analyses are only in their early stages, considerable insight has been gained in understanding the characteristics of several existing high-temperature life-prediction methods. The present work indicates that the most viable damage parameter is based on the inelastic strainrange.
Reducing Older Driver Motor Vehicle Collisions via Earlier Cataract Surgery
Mennemeyer, Stephen T.; Owsley, Cynthia; McGwin, Gerald
2013-01-01
Older adults who undergo cataract extraction have roughly half the rate of motor vehicle collision (MVC) involvement per mile driven compared to cataract patients who do not elect cataract surgery. Currently in the U.S., most insurers do not allow payment for cataract surgery based upon the findings of a vision exam unless accompanied by an individual’s complaint of visual difficulties that seriously interfere with driving or other daily activities and individuals themselves may be slow or reluctant to complain and seek relief. As a consequence, surgery tends to occur after significant vision problems have emerged. We hypothesize that a proactive policy encouraging cataract surgery earlier for a lesser level of complaint would significantly reduce MVCs among older drivers. We used a Monte Carlo model to simulate the MVC experience of the U.S. population from age 60 to 89 under alternative protocols for the timing of cataract surgery which we call “Current Practice” (CP) and “Earlier Surgery” (ES). Our base model finds, from a societal perspective with undiscounted 2010 dollars, that switching to ES from CP reduces by about 21% the average number of MVCs, fatalities, and MVC cost per person. The net effect on total cost – all MVC costs plus cataract surgery expenditures -- is a reduction of about 16%. Quality Adjusted Life Years would increase by about 5%. From the perspective of payers for healthcare, the switch would increase cataract surgery expenditure for ages 65+ by about 8% and for ages 60 to 64 by about 47% but these expenditures are substantially offset after age 65 by reductions in the medical and emergency services component of MVC cost. Similar results occur with discounting at 3% and with various sensitivity analyses. We conclude that a policy of ES would significantly reduce MVCs and their associated consequences. PMID:23369786
Simulation and analysis of main steam control system based on heat transfer calculation
NASA Astrophysics Data System (ADS)
Huang, Zhenqun; Li, Ruyan; Feng, Zhongbao; Wang, Songhan; Li, Wenbo; Cheng, Jiwei; Jin, Yingai
2018-05-01
In this paper, after thermal power plant 300MW boiler was studied, mat lab was used to write calculation program about heat transfer process between the main steam and boiler flue gas and amount of water was calculated to ensure the main steam temperature keeping in target temperature. Then heat transfer calculation program was introduced into Simulink simulation platform based on control system multiple models switching and heat transfer calculation. The results show that multiple models switching control system based on heat transfer calculation not only overcome the large inertia of main stream temperature, a large hysteresis characteristic of main stream temperature, but also adapted to the boiler load changing.
The MiAge Calculator: a DNA methylation-based mitotic age calculator of human tissue types.
Youn, Ahrim; Wang, Shuang
2018-01-01
Cell division is important in human aging and cancer. The estimation of the number of cell divisions (mitotic age) of a given tissue type in individuals is of great interest as it allows not only the study of biological aging (using a new molecular aging target) but also the stratification of prospective cancer risk. Here, we introduce the MiAge Calculator, a mitotic age calculator based on a novel statistical framework, the MiAge model. MiAge is designed to quantitatively estimate mitotic age (total number of lifetime cell divisions) of a tissue using the stochastic replication errors accumulated in the epigenetic inheritance process during cell divisions. With the MiAge model, the MiAge Calculator was built using the training data of DNA methylation measures of 4,020 tumor and adjacent normal tissue samples from eight TCGA cancer types and was tested using the testing data of DNA methylation measures of 2,221 tumor and adjacent normal tissue samples of five other TCGA cancer types. We showed that within each of the thirteen cancer types studied, the estimated mitotic age is universally accelerated in tumor tissues compared to adjacent normal tissues. Across the thirteen cancer types, we showed that worse cancer survivals are associated with more accelerated mitotic age in tumor tissues. Importantly, we demonstrated the utility of mitotic age by showing that the integration of mitotic age and clinical information leads to improved survival prediction in six out of the thirteen cancer types studied. The MiAge Calculator is available at http://www.columbia.edu/∼sw2206/softwares.htm .
Glass viscosity calculation based on a global statistical modelling approach
Fluegel, Alex
2007-02-01
A global statistical glass viscosity model was developed for predicting the complete viscosity curve, based on more than 2200 composition-property data of silicate glasses from the scientific literature, including soda-lime-silica container and float glasses, TV panel glasses, borosilicate fiber wool and E type glasses, low expansion borosilicate glasses, glasses for nuclear waste vitrification, lead crystal glasses, binary alkali silicates, and various further compositions from over half a century. It is shown that within a measurement series from a specific laboratory the reported viscosity values are often over-estimated at higher temperatures due to alkali and boron oxide evaporation during the measurementmore » and glass preparation, including data by Lakatos et al. (1972) and the recently published High temperature glass melt property database for process modeling by Seward et al. (2005). Similarly, in the glass transition range many experimental data of borosilicate glasses are reported too high due to phase separation effects. The developed global model corrects those errors. The model standard error was 9-17°C, with R^2 = 0.985-0.989. The prediction 95% confidence interval for glass in mass production largely depends on the glass composition of interest, the composition uncertainty, and the viscosity level. New insights in the mixed-alkali effect are provided.« less
Atomic structure data based on average-atom model for opacity calculations in astrophysical plasmas
NASA Astrophysics Data System (ADS)
Trzhaskovskaya, M. B.; Nikulin, V. K.
2018-03-01
Influence of the plasmas parameters on the electron structure of ions in astrophysical plasmas is studied on the basis of the average-atom model in the local thermodynamic equilibrium approximation. The relativistic Dirac-Slater method is used for the electron density estimation. The emphasis is on the investigation of an impact of the plasmas temperature and density on the ionization stages required for calculations of the plasmas opacities. The level population distributions and level energy spectra are calculated and analyzed for all ions with 6 ≤ Z ≤ 32 occurring in astrophysical plasmas. The plasma temperature range 2 - 200 eV and the density range 2 - 100 mg/cm3 are considered. The validity of the method used is supported by good agreement between our values of ionization stages for a number of ions, from oxygen up to uranium, and results obtained earlier by various methods among which are more complicated procedures.
Environment-based pin-power reconstruction method for homogeneous core calculations
Leroyer, H.; Brosselard, C.; Girardi, E.
2012-07-01
Core calculation schemes are usually based on a classical two-step approach associated with assembly and core calculations. During the first step, infinite lattice assemblies calculations relying on a fundamental mode approach are used to generate cross-sections libraries for PWRs core calculations. This fundamental mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOXmore » assemblies, computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies showing burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are much better calculated with the environment-based calculation scheme when compared to the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method on every cluster configuration studied. This study shows that taking into account the environment in transport calculations can significantly improve the pin-power reconstruction so far as it is consistent with the core loading pattern. (authors)« less
Scientific Knowledge Suppresses but Does Not Supplant Earlier Intuitions
ERIC Educational Resources Information Center
Shtulman, Andrew; Valcarcel, Joshua
2012-01-01
When students learn scientific theories that conflict with their earlier, naive theories, what happens to the earlier theories? Are they overwritten or merely suppressed? We investigated this question by devising and implementing a novel speeded-reasoning task. Adults with many years of science education verified two types of statements as quickly…
Duan, Yong; Wu, Chun; Chowdhury, Shibasish; Lee, Mathew C; Xiong, Guoming; Zhang, Wei; Yang, Rong; Cieplak, Piotr; Luo, Ray; Lee, Taisung; Caldwell, James; Wang, Junmei; Kollman, Peter
2003-12-01
Molecular mechanics models have been applied extensively to study the dynamics of proteins and nucleic acids. Here we report the development of a third-generation point-charge all-atom force field for proteins. Following the earlier approach of Cornell et al., the charge set was obtained by fitting to the electrostatic potentials of dipeptides calculated using B3LYP/cc-pVTZ//HF/6-31G** quantum mechanical methods. The main-chain torsion parameters were obtained by fitting to the energy profiles of Ace-Ala-Nme and Ace-Gly-Nme di-peptides calculated using MP2/cc-pVTZ//HF/6-31G** quantum mechanical methods. All other parameters were taken from the existing AMBER data base. The major departure from previous force fields is that all quantum mechanical calculations were done in the condensed phase with continuum solvent models and an effective dielectric constant of epsilon = 4. We anticipate that this force field parameter set will address certain critical short comings of previous force fields in condensed-phase simulations of proteins. Initial tests on peptides demonstrated a high-degree of similarity between the calculated and the statistically measured Ramanchandran maps for both Ace-Gly-Nme and Ace-Ala-Nme di-peptides. Some highlights of our results include (1) well-preserved balance between the extended and helical region distributions, and (2) favorable type-II poly-proline helical region in agreement with recent experiments. Backward compatibility between the new and Cornell et al. charge sets, as judged by overall agreement between dipole moments, allows a smooth transition to the new force field in the area of ligand-binding calculations. Test simulations on a large set of proteins are also discussed. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 1999-2012, 2003
Earlier Age at Menopause, Work and Tobacco Smoke Exposure
Fleming, Lora E; Levis, Silvina; LeBlanc, William G; Dietz, Noella A; Arheart, Kristopher L; Wilkinson, James D; Clark, John; Serdar, Berrin; Davila, Evelyn P; Lee, David J
2009-01-01
Objective Earlier age at menopause onset has been associated with increased all cause, cardiovascular, and cancer mortality risks. Risk of earlier age at menopause associated with primary and secondary tobacco smoke exposure was assessed. Design Cross-sectional study using a nationally representative sample of US women. Methods 7596 women participants (representing an estimated 79 million US women) from the National Health and Nutrition Examination Survey III were asked: time since last menstrual period, occupation, and tobacco use (including home and workplace secondhand smoke (SHS) exposure). Blood cotinine and follicle-stimulating hormone (FSH) levels were assessed. Logistic regressions for the odds of earlier age at menopause, stratified on race/ethnicity in women 25-50 years and adjusted for survey design, were controlled for age, BMI, education, tobacco smoke exposure, and occupation. Results Among 5029 US women ≥ 25 years with complete data, earlier age at menopause was found among all smokers, and among service and manufacturing industry sector workers. Among women age 25-50 years, there was an increased risk of earlier age at menopause with both primary smoking and with SHS exposure, particularly among Black women. Conclusions Primary tobacco use and SHS exposure were associated with an increased odds of earlier age at menopause in a representative sample of US women. Earlier age at menopause was found for some women worker groups with greater potential occupational SHS exposure. Thus, control of SHS exposures in the workplace may decrease the risk of mortality and morbidity associated with earlier age at menopause in US women workers. PMID:18626414
Moulton, Haley; Tosteson, Tor D; Zhao, Wenyan; Pearson, Loretta; Mycek, Kristina; Scherer, Emily; Weinstein, James N; Pearson, Adam; Abdu, William; Schwarz, Susan; Kelly, Michael; McGuire, Kevin; Milam, Alden; Lurie, Jon D
2018-06-05
Prospective evaluation of an informational web-based calculator for communicating estimates of personalized treatment outcomes. To evaluate the usability, effectiveness in communicating benefits and risks, and impact on decision quality of a calculator tool for patients with intervertebral disc herniations, spinal stenosis, and degenerative spondylolisthesis who are deciding between surgical and non-surgical treatments. The decision to have back surgery is preference-sensitive and warrants shared decision-making. However, more patient-specific, individualized tools for presenting clinical evidence on treatment outcomes are needed. Using Spine Patient Outcomes Research Trial (SPORT) data, prediction models were designed and integrated into a web-based calculator tool: http://spinesurgerycalc.dartmouth.edu/calc/. Consumer Reports subscribers with back-related pain were invited to use the calculator via email, and patient participants were recruited to use the calculator in a prospective manner following an initial appointment at participating spine centers. Participants completed questionnaires before and after using the calculator. We randomly assigned previously validated questions that tested knowledge about the treatment options to be asked either before or after viewing the calculator. 1,256 Consumer Reports subscribers and 68 patient participants completed the calculator and questionnaires. Knowledge scores were higher in the post-calculator group compared to the pre-calculator group, indicating that calculator usage successfully informed users. Decisional conflict was lower when measured following calculator use, suggesting the calculator was beneficial in the decision-making process. Participants generally found the tool helpful and easy to use. While the calculator is not a comprehensive decision aid, it does focus on communicating individualized risks and benefits for treatment options. Moreover, it appears to be helpful in achieving the goals of more
The case for earlier cochlear implantation in postlingually deaf adults.
Dowell, Richard C
2016-01-01
This paper aimed to estimate the difference in speech perception outcomes that may occur due to timing of cochlear implantation in relation to the progression of hearing loss. Data from a large population-based sample of adults with acquired hearing loss using cochlear implants (CIs) was used to estimate the effects of duration of hearing loss, age, and pre-implant auditory skills on outcomes for a hypothetical standard patient. A total of 310 adults with acquired severe/profound bilateral hearing loss who received a CI in Melbourne, Australia between 1994 and 2006 provided the speech perception data and demographic information to derive regression equations for estimating CI outcomes. For a hypothetical CI candidate with progressive sensorineural hearing loss, the estimates of speech perception scores following cochlear implantation are significantly better if implantation occurs relatively soon after onset of severe hearing loss and before the loss of all functional auditory skills. Improved CI outcomes and quality of life benefit may be achieved for adults with progressive severe hearing loss if they are implanted earlier in the progression of the pathology.
Creative Uses for Calculator-based Laboratory (CBL) Technology in Chemistry.
ERIC Educational Resources Information Center
Sales, Cynthia L.; Ragan, Nicole M.; Murphy, Maureen Kendrick
1999-01-01
Reviews three projects that use a graphing calculator linked to a calculator-based laboratory device as a portable data-collection system for students in chemistry classes. Projects include Isolation, Purification and Quantification of Buckminsterfullerene from Woodstove Ashes; Determination of the Activation Energy Associated with the…
Kupczewska-Dobecka, Małgorzata; Jakubowski, Marek; Czerczak, Sławomir
2010-09-01
Our objectives included calculating the permeability coefficient and dermal penetration rates (flux value) for 112 chemicals with occupational exposure limits (OELs) according to the LFER (linear free-energy relationship) model developed using published methods. We also attempted to assign skin notations based on each chemical's molecular structure. There are many studies available where formulae for coefficients of permeability from saturated aqueous solutions (K(p)) have been related to physicochemical characteristics of chemicals. The LFER model is based on the solvation equation, which contains five main descriptors predicted from chemical structure: solute excess molar refractivity, dipolarity/polarisability, summation hydrogen bond acidity and basicity, and the McGowan characteristic volume. Descriptor values, available for about 5000 compounds in the Pharma Algorithms Database were used to calculate permeability coefficients. Dermal penetration rate was estimated as a ratio of permeability coefficient and concentration of chemical in saturated aqueous solution. Finally, estimated dermal penetration rates were used to assign the skin notation to chemicals. Defined critical fluxes defined from the literature were recommended as reference values for skin notation. The application of Abraham descriptors predicted from chemical structure and LFER analysis in calculation of permeability coefficients and flux values for chemicals with OELs was successful. Comparison of calculated K(p) values with data obtained earlier from other models showed that LFER predictions were comparable to those obtained by some previously published models, but the differences were much more significant for others. It seems reasonable to conclude that skin should not be characterised as a simple lipophilic barrier alone. Both lipophilic and polar pathways of permeation exist across the stratum corneum. It is feasible to predict skin notation on the basis of the LFER and other published
NASA Astrophysics Data System (ADS)
Kehlenbeck, Matthias; Breitner, Michael H.
Business users define calculated facts based on the dimensions and facts contained in a data warehouse. These business calculation definitions contain necessary knowledge regarding quantitative relations for deep analyses and for the production of meaningful reports. The business calculation definitions are implementation and widely organization independent. But no automated procedures facilitating their exchange across organization and implementation boundaries exist. Separately each organization currently has to map its own business calculations to analysis and reporting tools. This paper presents an innovative approach based on standard Semantic Web technologies. This approach facilitates the exchange of business calculation definitions and allows for their automatic linking to specific data warehouses through semantic reasoning. A novel standard proxy server which enables the immediate application of exchanged definitions is introduced. Benefits of the approach are shown in a comprehensive case study.
Size Reduction of Hamiltonian Matrix for Large-Scale Energy Band Calculations Using Plane Wave Bases
NASA Astrophysics Data System (ADS)
Morifuji, Masato
2018-01-01
We present a method of reducing the size of a Hamiltonian matrix used in calculations of electronic states. In the electronic states calculations using plane wave basis functions, a large number of plane waves are often required to obtain precise results. Even using state-of-the-art techniques, the Hamiltonian matrix often becomes very large. The large computational time and memory necessary for diagonalization limit the widespread use of band calculations. We show a procedure of deriving a reduced Hamiltonian constructed using a small number of low-energy bases by renormalizing high-energy bases. We demonstrate numerically that the significant speedup of eigenstates evaluation is achieved without losing accuracy.
View northeast, wharf A, portion AA, details showing earlier piers ...
View northeast, wharf A, portion AA, details showing earlier piers and braces sloping toward water, reused charred plates for existing decking - U.S. Coast Guard Sandy Hook Station, Western Docking Structure, West of intersection of Canfield Road & Hartshorne Drive, Highlands, Monmouth County, NJ
Prescription stimulant use is associated with earlier onset of psychosis.
Moran, Lauren V; Masters, Grace A; Pingali, Samira; Cohen, Bruce M; Liebson, Elizabeth; Rajarethinam, R P; Ongur, Dost
2015-12-01
A childhood history of attention deficit hyperactivity disorder (ADHD) is common in psychotic disorders, yet prescription stimulants may interact adversely with the physiology of these disorders. Specifically, exposure to stimulants leads to long-term increases in dopamine release. We therefore hypothesized that individuals with psychotic disorders previously exposed to prescription stimulants will have an earlier onset of psychosis. Age of onset of psychosis (AOP) was compared in individuals with and without prior exposure to prescription stimulants while controlling for potential confounding factors. In a sample of 205 patients recruited from an inpatient psychiatric unit, 40% (n = 82) reported use of stimulants prior to the onset of psychosis. Most participants were prescribed stimulants during childhood or adolescence for a diagnosis of ADHD. AOP was significantly earlier in those exposed to stimulants (20.5 vs. 24.6 years stimulants vs. no stimulants, p < 0.001). After controlling for gender, IQ, educational attainment, lifetime history of a cannabis use disorder or other drugs of abuse, and family history of a first-degree relative with psychosis, the association between stimulant exposure and earlier AOP remained significant. There was a significant gender × stimulant interaction with a greater reduction in AOP for females, whereas the smaller effect of stimulant use on AOP in males did not reach statistical significance. In conclusion, individuals with psychotic disorders exposed to prescription stimulants had an earlier onset of psychosis, and this relationship did not appear to be mediated by IQ or cannabis. Copyright © 2015 Elsevier Ltd. All rights reserved.
Research promises earlier warning for grapevine canker diseases
USDA-ARS?s Scientific Manuscript database
When it comes to detecting and treating vineyards for grapevine canker diseases (also called trunk diseases), like Botryosphaeria dieback (Bot canker), Esca, Eutypa dieback and Phomopsis dieback, the earlier the better, says plant pathologist Kendra Baumgartner, with the USDA’s Agricultural Research...
Does Speech Emerge from Earlier Appearing Oral Motor Behaviors?.
ERIC Educational Resources Information Center
Moore, Christopher A.; Ruark, Jacki L.
1996-01-01
This study of the oral motor behaviors of seven toddlers (age 15 months) may be interpreted to indicate that: (1) mandibular coordination follows a developmental continuum from earlier emerging behaviors, such as chewing and sucking, through babbling, to speech, or (2) unique task demands give rise to distinct mandibular coordinative constraints…
Comprehensive methods for earlier detection and monitoring of forest decline
Jennifer Pontius; Richard Hallett
2014-01-01
Forested ecosystems are threatened by invasive pests, pathogens, and unusual climatic events brought about by climate change. Earlier detection of incipient forest health problems and a quantitatively rigorous assessment method is increasingly important. Here, we describe a method that is adaptable across tree species and stress agents and practical for use in the...
Routh, Jonathan C.; Gong, Edward M.; Cannon, Glenn M.; Yu, Richard N.; Gargollo, Patricio C.; Nelson, Caleb P.
2010-01-01
Purpose An increasing number of parents and practitioners use the Internet for health related purposes, and an increasing number of models are available on the Internet for predicting spontaneous resolution rates for children with vesi-coureteral reflux. We sought to determine whether currently available Internet based calculators for vesicoureteral reflux resolution produce systematically different results. Materials and Methods Following a systematic Internet search we identified 3 Internet based calculators of spontaneous resolution rates for children with vesicoureteral reflux, of which 2 were academic affiliated and 1 was industry affiliated. We generated a random cohort of 100 hypothetical patients with a wide range of clinical characteristics and entered the data on each patient into each calculator. We then compared the results from the calculators in terms of mean predicted resolution probability and number of cases deemed likely to resolve at various cutoff probabilities. Results Mean predicted resolution probabilities were 41% and 36% (range 31% to 41%) for the 2 academic affiliated calculators and 33% for the industry affiliated calculator (p = 0.02). For some patients the calculators produced markedly different probabilities of spontaneous resolution, in some instances ranging from 24% to 89% for the same patient. At thresholds greater than 5%, 10% and 25% probability of spontaneous resolution the calculators differed significantly regarding whether cases would resolve (all p < 0.0001). Conclusions Predicted probabilities of spontaneous resolution of vesicoureteral reflux differ significantly among Internet based calculators. For certain patients, particularly those with a lower probability of spontaneous resolution, these differences can significantly influence clinical decision making. PMID:20172550
Cottle, Daniel; Mousdale, Stephen; Waqar-Uddin, Haroon; Tully, Redmond; Taylor, Benjamin
2016-02-01
Transferring the theoretical aspect of continuous renal replacement therapy to the bedside and delivering a given "dose" can be difficult. In research, the "dose" of renal replacement therapy is given as effluent flow rate in ml kg -1 h -1 . Unfortunately, most machines require other information when they are initiating therapy, including blood flow rate, pre-blood pump flow rate, dialysate flow rate, etc. This can lead to confusion, resulting in patients receiving inappropriate doses of renal replacement therapy. Our aim was to design an excel calculator which would personalise patient's treatment, deliver an effective, evidence-based dose of renal replacement therapy without large variations in practice and prolong filter life. Our calculator prescribes a haemodialfiltration dose of 25 ml kg -1 h -1 whilst limiting the filtration fraction to 15%. We compared the episodes of renal replacement therapy received by a historical group of patients, by retrieving their data stored on the haemofiltration machines, to a group where the calculator was used. In the second group, the data were gathered prospectively. The median delivered dose reduced from 41.0 ml kg -1 h -1 to 26.8 ml kg -1 h -1 with reduced variability that was significantly closer to the aim of 25 ml kg -1 .h -1 ( p < 0.0001). The median treatment time increased from 8.5 h to 22.2 h ( p = 0.00001). Our calculator significantly reduces variation in prescriptions of continuous veno-venous haemodiafiltration and provides an evidence-based dose. It is easy to use and provides personal care for patients whilst optimizing continuous veno-venous haemodiafiltration delivery and treatment times.
ERIC Educational Resources Information Center
Hagedorn, Linda Serra
1998-01-01
A study explored two distinct methods of calculating a precise measure of gender-based wage differentials among college faculty. The first estimation considered wage differences using a formula based on human capital; the second included compensation for past discriminatory practices. Both measures were used to predict three specific aspects of…
Independent Monte-Carlo dose calculation for MLC based CyberKnife radiotherapy
NASA Astrophysics Data System (ADS)
Mackeprang, P.-H.; Vuong, D.; Volken, W.; Henzen, D.; Schmidhalter, D.; Malthaner, M.; Mueller, S.; Frei, D.; Stampanoni, M. F. M.; Dal Pra, A.; Aebersold, D. M.; Fix, M. K.; Manser, P.
2018-01-01
This work aims to develop, implement and validate a Monte Carlo (MC)-based independent dose calculation (IDC) framework to perform patient-specific quality assurance (QA) for multi-leaf collimator (MLC)-based CyberKnife® (Accuray Inc., Sunnyvale, CA) treatment plans. The IDC framework uses an XML-format treatment plan as exported from the treatment planning system (TPS) and DICOM format patient CT data, an MC beam model using phase spaces, CyberKnife MLC beam modifier transport using the EGS++ class library, a beam sampling and coordinate transformation engine and dose scoring using DOSXYZnrc. The framework is validated against dose profiles and depth dose curves of single beams with varying field sizes in a water tank in units of cGy/Monitor Unit and against a 2D dose distribution of a full prostate treatment plan measured with Gafchromic EBT3 (Ashland Advanced Materials, Bridgewater, NJ) film in a homogeneous water-equivalent slab phantom. The film measurement is compared to IDC results by gamma analysis using 2% (global)/2 mm criteria. Further, the dose distribution of the clinical treatment plan in the patient CT is compared to TPS calculation by gamma analysis using the same criteria. Dose profiles from IDC calculation in a homogeneous water phantom agree within 2.3% of the global max dose or 1 mm distance to agreement to measurements for all except the smallest field size. Comparing the film measurement to calculated dose, 99.9% of all voxels pass gamma analysis, comparing dose calculated by the IDC framework to TPS calculated dose for the clinical prostate plan shows 99.0% passing rate. IDC calculated dose is found to be up to 5.6% lower than dose calculated by the TPS in this case near metal fiducial markers. An MC-based modular IDC framework was successfully developed, implemented and validated against measurements and is now available to perform patient-specific QA by IDC.
Density functional theory calculations of III-N based semiconductors with mBJLDA
NASA Astrophysics Data System (ADS)
Gürel, Hikmet Hakan; Akıncı, Özden; Ünlü, Hilmi
2017-02-01
In this work, we present first principles calculations based on a full potential linear augmented plane-wave method (FP-LAPW) to calculate structural and electronic properties of III-V based nitrides such as GaN, AlN, InN in a zinc-blende cubic structure. First principles calculation using the local density approximation (LDA) and generalized gradient approximation (GGA) underestimate the band gap. We proposed a new potential called modified Becke-Johnson local density approximation (MBJLDA) that combines modified Becke-Johnson exchange potential and the LDA correlation potential to get better band gap results compared to experiment. We compared various exchange-correlation potentials (LSDA, GGA, HSE, and MBJLDA) to determine band gaps and structural properties of semiconductors. We show that using MBJLDA density potential gives a better agreement with experimental data for band gaps III-V nitrides based semiconductors.
Travelling for earlier surgical treatment: the patient's view.
Stewart, M; Donaldson, L J
1991-01-01
As part of the northern region's programme within the national waiting list initiative, schemes have been funded to test the feasibility and acceptability of offering patients the opportunity to travel further afield in order to receive earlier treatment. A total of 484 patients experiencing a long wait for routine surgical operations in the northern region were offered the opportunity to receive earlier treatment outside their local health district; 74% of the patients accepted the offer. The initiative was well received by the participating patients and the majority stated that if the need arose on a future occasion they would prefer to travel for treatment rather than have to wait for lengthy periods for treatment at their local hospital. These findings, interpreted in the light of the National Health Service reforms introduced in April 1991, suggest that for some types of care, patients would welcome greater flexibility in the placing of contracts, not merely reinforcement of historical patterns of referral. PMID:1823553
The effects of calculator-based laboratories on standardized test scores
NASA Astrophysics Data System (ADS)
Stevens, Charlotte Bethany Rains
Nationwide, the goal of providing a productive science and math education to our youth in today's educational institutions is centering itself around the technology being utilized in these classrooms. In this age of digital technology, educational software and calculator-based laboratories (CBL) have become significant devices in the teaching of science and math for many states across the United States. Among the technology, the Texas Instruments graphing calculator and Vernier Labpro interface, are among some of the calculator-based laboratories becoming increasingly popular among middle and high school science and math teachers in many school districts across this country. In Tennessee, however, it is reported that this type of technology is not regularly utilized at the student level in most high school science classrooms, especially in the area of Physical Science (Vernier, 2006). This research explored the effect of calculator based laboratory instruction on standardized test scores. The purpose of this study was to determine the effect of traditional teaching methods versus graphing calculator teaching methods on the state mandated End-of-Course (EOC) Physical Science exam based on ability, gender, and ethnicity. The sample included 187 total tenth and eleventh grade physical science students, 101 of which belonged to a control group and 87 of which belonged to the experimental group. Physical Science End-of-Course scores obtained from the Tennessee Department of Education during the spring of 2005 and the spring of 2006 were used to examine the hypotheses. The findings of this research study suggested the type of teaching method, traditional or calculator based, did not have an effect on standardized test scores. However, the students' ability level, as demonstrated on the End-of-Course test, had a significant effect on End-of-Course test scores. This study focused on a limited population of high school physical science students in the middle Tennessee
Applying Activity Based Costing (ABC) Method to Calculate Cost Price in Hospital and Remedy Services
Rajabi, A; Dabiri, A
2012-01-01
Background Activity Based Costing (ABC) is one of the new methods began appearing as a costing methodology in the 1990’s. It calculates cost price by determining the usage of resources. In this study, ABC method was used for calculating cost price of remedial services in hospitals. Methods: To apply ABC method, Shahid Faghihi Hospital was selected. First, hospital units were divided into three main departments: administrative, diagnostic, and hospitalized. Second, activity centers were defined by the activity analysis method. Third, costs of administrative activity centers were allocated into diagnostic and operational departments based on the cost driver. Finally, with regard to the usage of cost objectives from services of activity centers, the cost price of medical services was calculated. Results: The cost price from ABC method significantly differs from tariff method. In addition, high amount of indirect costs in the hospital indicates that capacities of resources are not used properly. Conclusion: Cost price of remedial services with tariff method is not properly calculated when compared with ABC method. ABC calculates cost price by applying suitable mechanisms but tariff method is based on the fixed price. In addition, ABC represents useful information about the amount and combination of cost price services. PMID:23113171
Colorectal cancer occurs earlier in those exposed to tobacco smoke: implications for screening
Mahoney, Martin C.; Cummings, K. Michael; Michalek, Arthur M.; Reid, Mary E.; Moysich, Kirsten B.; Hyland, Andrew
2011-01-01
Background Colorectal cancer (CRC) is the third most common cancer in the USA. While various lifestyle factors have been shown to alter the risk for colorectal cancer, recommendations for the early detection of CRC are based only on age and family history. Methods This case-only study examined the age at diagnosis of colorectal cancer in subjects exposed to tobacco smoke. Subjects included all patients who attended RPCI between 1957 and 1997, diagnosed with colorectal cancer, and completed an epidemiologic questionnaire. Adjusted linear regression models were calculated for the various smoking exposures. Results Of the 3,540 cases of colorectal cancer, current smokers demonstrated the youngest age of CRC onset (never: 64.2 vs. current: 57.4, P < 0.001) compared to never smokers, followed by recent former smokers. Among never smokers, individuals with past second-hand smoke exposure were diagnosed at a significantly younger age compared to the unexposed. Conclusion This study found that individuals with heavy, long-term tobacco smoke exposure were significantly younger at the time of CRC diagnosis compared to lifelong never smokers. The implication of this finding is that screening for colorectal cancer, which is recommended to begin at age 50 years for persons at average risk should be initiated 5–10 years earlier for persons with a significant lifetime history of exposure to tobacco smoke. PMID:18264728
Meirovitch, Hagai
2010-01-01
The commonly used simulation techniques, Metropolis Monte Carlo (MC) and molecular dynamics (MD) are of a dynamical type which enables one to sample system configurations i correctly with the Boltzmann probability, P(i)(B), while the value of P(i)(B) is not provided directly; therefore, it is difficult to obtain the absolute entropy, S approximately -ln P(i)(B), and the Helmholtz free energy, F. With a different simulation approach developed in polymer physics, a chain is grown step-by-step with transition probabilities (TPs), and thus their product is the value of the construction probability; therefore, the entropy is known. Because all exact simulation methods are equivalent, i.e. they lead to the same averages and fluctuations of physical properties, one can treat an MC or MD sample as if its members have rather been generated step-by-step. Thus, each configuration i of the sample can be reconstructed (from nothing) by calculating the TPs with which it could have been constructed. This idea applies also to bulk systems such as fluids or magnets. This approach has led earlier to the "local states" (LS) and the "hypothetical scanning" (HS) methods, which are approximate in nature. A recent development is the hypothetical scanning Monte Carlo (HSMC) (or molecular dynamics, HSMD) method which is based on stochastic TPs where all interactions are taken into account. In this respect, HSMC(D) can be viewed as exact and the only approximation involved is due to insufficient MC(MD) sampling for calculating the TPs. The validity of HSMC has been established by applying it first to liquid argon, TIP3P water, self-avoiding walks (SAW), and polyglycine models, where the results for F were found to agree with those obtained by other methods. Subsequently, HSMD was applied to mobile loops of the enzymes porcine pancreatic alpha-amylase and acetylcholinesterase in explicit water, where the difference in F between the bound and free states of the loop was calculated. Currently
Calculation of thermal expansion coefficient of glasses based on topological constraint theory
NASA Astrophysics Data System (ADS)
Zeng, Huidan; Ye, Feng; Li, Xiang; Wang, Ling; Yang, Bin; Chen, Jianding; Zhang, Xianghua; Sun, Luyi
2016-10-01
In this work, the thermal expansion behavior and the structure configuration evolution of glasses were studied. Degree of freedom based on the topological constraint theory is correlated with configuration evolution; considering the chemical composition and the configuration change, the analytical equation for calculating the thermal expansion coefficient of glasses from degree of freedom was derived. The thermal expansion of typical silicate and chalcogenide glasses was examined by calculating their thermal expansion coefficients (TEC) using the approach stated above. The results showed that this approach was energetically favorable for glass materials and revealed the corresponding underlying essence from viewpoint of configuration entropy. This work establishes a configuration-based methodology to calculate the thermal expansion coefficient of glasses that, lack periodic order.
a New Method for Calculating Fractal Dimensions of Porous Media Based on Pore Size Distribution
NASA Astrophysics Data System (ADS)
Xia, Yuxuan; Cai, Jianchao; Wei, Wei; Hu, Xiangyun; Wang, Xin; Ge, Xinmin
Fractal theory has been widely used in petrophysical properties of porous rocks over several decades and determination of fractal dimensions is always the focus of researches and applications by means of fractal-based methods. In this work, a new method for calculating pore space fractal dimension and tortuosity fractal dimension of porous media is derived based on fractal capillary model assumption. The presented work establishes relationship between fractal dimensions and pore size distribution, which can be directly used to calculate the fractal dimensions. The published pore size distribution data for eight sandstone samples are used to calculate the fractal dimensions and simultaneously compared with prediction results from analytical expression. In addition, the proposed fractal dimension method is also tested through Micro-CT images of three sandstone cores, and are compared with fractal dimensions by box-counting algorithm. The test results also prove a self-similar fractal range in sandstone when excluding smaller pores.
NASA Astrophysics Data System (ADS)
Meng, ZhuXuan; Fan, Hu; Peng, Ke; Zhang, WeiHua; Yang, HuiXin
2016-12-01
This article presents a rapid and accurate aeroheating calculation method for hypersonic vehicles. The main innovation is combining accurate of numerical method with efficient of engineering method, which makes aeroheating simulation more precise and faster. Based on the Prandtl boundary layer theory, the entire flow field is divided into inviscid and viscid flow at the outer edge of the boundary layer. The parameters at the outer edge of the boundary layer are numerically calculated from assuming inviscid flow. The thermodynamic parameters of constant-volume specific heat, constant-pressure specific heat and the specific heat ratio are calculated, the streamlines on the vehicle surface are derived and the heat flux is then obtained. The results of the double cone show that at the 0° and 10° angle of attack, the method of aeroheating calculation based on inviscid outer edge of boundary layer parameters reproduces the experimental data better than the engineering method. Also the proposed simulation results of the flight vehicle reproduce the viscid numerical results well. Hence, this method provides a promising way to overcome the high cost of numerical calculation and improves the precision.
Medication calculation: the potential role of digital game-based learning in nurse education.
Foss, Brynjar; Mordt Ba, Petter; Oftedal, Bjørg F; Løkken, Atle
2013-12-01
Medication dose calculation is one of several medication-related activities that are conducted by nurses daily. However, medication calculation skills appear to be an area of global concern, possibly because of low numeracy skills, test anxiety, low self-confidence, and low self-efficacy among student nurses. Various didactic strategies have been developed for student nurses who still lack basic mathematical competence. However, we suggest that the critical nature of these skills demands the investigation of alternative and/or supplementary didactic approaches to improve medication calculation skills and to reduce failure rates. Digital game-based learning is a possible solution because of the following reasons. First, mathematical drills may improve medication calculation skills. Second, games are known to be useful during nursing education. Finally, mathematical drill games appear to improve the attitudes of students toward mathematics. The aim of this article was to discuss common challenges of medication calculation skills in nurse education, and we highlight the potential role of digital game-based learning in this area.
Earlier Violent Television Exposure and Later Drug Dependence
Brook, David W.; Katten, Naomi S.; Ning, Yuming; Brook, Judith S.
2013-01-01
This research examined the longitudinal pathways from earlier violent television exposure to later drug dependence. African American and Puerto Rican adolescents were interviewed during three points in time (N = 463). Violent television exposure in late adolescence predicted violent television exposure in young adulthood, which in turn was related to tobacco/marijuana use, nicotine dependence, and later drug dependence. Some policy and clinical implications suggest: a) regulating the times when violent television is broadcast; b) creating developmentally targeted prevention/treatment programs; and c) recognizing that watching violent television may serve as a cue regarding increased susceptibility to nicotine and drug dependence. PMID:18612881
NASA Astrophysics Data System (ADS)
Marchant, T. E.; Joshi, K. D.; Moore, C. J.
2018-03-01
Radiotherapy dose calculations based on cone-beam CT (CBCT) images can be inaccurate due to unreliable Hounsfield units (HU) in the CBCT. Deformable image registration of planning CT images to CBCT, and direct correction of CBCT image values are two methods proposed to allow heterogeneity corrected dose calculations based on CBCT. In this paper we compare the accuracy and robustness of these two approaches. CBCT images for 44 patients were used including pelvis, lung and head & neck sites. CBCT HU were corrected using a ‘shading correction’ algorithm and via deformable registration of planning CT to CBCT using either Elastix or Niftyreg. Radiotherapy dose distributions were re-calculated with heterogeneity correction based on the corrected CBCT and several relevant dose metrics for target and OAR volumes were calculated. Accuracy of CBCT based dose metrics was determined using an ‘override ratio’ method where the ratio of the dose metric to that calculated on a bulk-density assigned version of the same image is assumed to be constant for each patient, allowing comparison to the patient’s planning CT as a gold standard. Similar performance is achieved by shading corrected CBCT and both deformable registration algorithms, with mean and standard deviation of dose metric error less than 1% for all sites studied. For lung images, use of deformed CT leads to slightly larger standard deviation of dose metric error than shading corrected CBCT with more dose metric errors greater than 2% observed (7% versus 1%).
Calculation of Hugoniot properties for shocked nitromethane based on the improved Tsien's EOS
NASA Astrophysics Data System (ADS)
Zhao, Bo; Cui, Ji-Ping; Fan, Jing
2010-06-01
We have calculated the Hugoniot properties of shocked nitromethane based on the improved Tsien’s equation of state (EOS) that optimized by “exact” numerical molecular dynamic data at high temperatures and pressures. Comparison of the calculated results of the improved Tsien’s EOS with the existed experimental data and the direct simulations show that the behavior of the improved Tsien’s EOS is very good in many aspects. Because of its simple analytical form, the improved Tsien’s EOS can be prospectively used to study the condensed explosive detonation coupling with chemical reaction.
A parallel orbital-updating based plane-wave basis method for electronic structure calculations
NASA Astrophysics Data System (ADS)
Pan, Yan; Dai, Xiaoying; de Gironcoli, Stefano; Gong, Xin-Gao; Rignanese, Gian-Marco; Zhou, Aihui
2017-11-01
Motivated by the recently proposed parallel orbital-updating approach in real space method [1], we propose a parallel orbital-updating based plane-wave basis method for electronic structure calculations, for solving the corresponding eigenvalue problems. In addition, we propose two new modified parallel orbital-updating methods. Compared to the traditional plane-wave methods, our methods allow for two-level parallelization, which is particularly interesting for large scale parallelization. Numerical experiments show that these new methods are more reliable and efficient for large scale calculations on modern supercomputers.
Discharging patients earlier in the day: a concept worth evaluating.
Kravet, Steven J; Levine, Rachel B; Rubin, Haya R; Wright, Scott M
2007-01-01
Patient discharges from the hospital often occur late in the day and are frequently clustered after 4 PM. When inpatients leave earlier in the day, quality is improved because new admissions awaiting beds are able to leave the emergency department sooner and emergency department waiting room backlog is reduced. Nursing staff, whose work patterns traditionally result in high activity of discharge and admission between 5 PM and 8 PM, benefit by spreading out their work across a longer part of the day. Discharging patients earlier in the day also has the potential to increase patient satisfaction. Despite multiple stakeholders in the discharge planning process, physicians play the most important role. Getting physician buy-in requires an ability to teach physicians about the concept of early-in-the-day discharges and their impact on the process. We defined a new physician-centered discharge planning process and introduced it to an internal medicine team with an identical control team as a comparison. Discharge time of day was analyzed for 1 month. Mean time of day of discharge was 13:39 for the intervention group versus 15:45 for the control group (P<.001). If reproduced successfully, this process could improve quality at an important transition point in patient care.
Changes toward earlier streamflow timing across western North America
Stewart, I.T.; Cayan, D.R.; Dettinger, M.D.
2005-01-01
The highly variable timing of streamflow in snowmelt-dominated basins across western North America is an important consequence, and indicator, of climate fluctuations. Changes in the timing of snowmelt-derived streamflow from 1948 to 2002 were investigated in a network of 302 western North America gauges by examining the center of mass for flow, spring pulse onset dates, and seasonal fractional flows through trend and principal component analyses. Statistical analysis of the streamflow timing measures with Pacific climate indicators identified local and key large-scale processes that govern the regionally coherent parts of the changes and their relative importance. Widespread and regionally coherent trends toward earlier onsets of springtime snowmelt and streamflow have taken place across most of western North America, affecting an area that is much larger than previously recognized. These timing changes have resulted in increasing fractions of annual flow occurring earlier in the water year by 1-4 weeks. The immediate (or proximal) forcings for the spatially coherent parts of the year-to-year fluctuations and longer-term trends of streamflow timing have been higher winter and spring temperatures. Although these temperature changes are partly controlled by the decadal-scale Pacific climate mode [Pacific decadal oscillation (PDO)], a separate and significant part of the variance is associated with a springtime warming trend that spans the PDO phases. ?? 2005 American Meteorological Society.
Earlier vegetation green-up has reduced spring dust storms
Fan, Bihang; Guo, Li; Li, Ning; Chen, Jin; Lin, Henry; Zhang, Xiaoyang; Shen, Miaogen; Rao, Yuhan; Wang, Cong; Ma, Lei
2014-01-01
The observed decline of spring dust storms in Northeast Asia since the 1950s has been attributed to surface wind stilling. However, spring vegetation growth could also restrain dust storms through accumulating aboveground biomass and increasing surface roughness. To investigate the impacts of vegetation spring growth on dust storms, we examine the relationships between recorded spring dust storm outbreaks and satellite-derived vegetation green-up date in Inner Mongolia, Northern China from 1982 to 2008. We find a significant dampening effect of advanced vegetation growth on spring dust storms (r = 0.49, p = 0.01), with a one-day earlier green-up date corresponding to a decrease in annual spring dust storm outbreaks by 3%. Moreover, the higher correlation (r = 0.55, p < 0.01) between green-up date and dust storm outbreak ratio (the ratio of dust storm outbreaks to times of strong wind events) indicates that such effect is independent of changes in surface wind. Spatially, a negative correlation is detected between areas with advanced green-up dates and regional annual spring dust storms (r = −0.49, p = 0.01). This new insight is valuable for understanding dust storms dynamics under the changing climate. Our findings suggest that dust storms in Inner Mongolia will be further mitigated by the projected earlier vegetation green-up in the warming world. PMID:25343265
Earlier vegetation green-up has reduced spring dust storms.
Fan, Bihang; Guo, Li; Li, Ning; Chen, Jin; Lin, Henry; Zhang, Xiaoyang; Shen, Miaogen; Rao, Yuhan; Wang, Cong; Ma, Lei
2014-10-24
The observed decline of spring dust storms in Northeast Asia since the 1950s has been attributed to surface wind stilling. However, spring vegetation growth could also restrain dust storms through accumulating aboveground biomass and increasing surface roughness. To investigate the impacts of vegetation spring growth on dust storms, we examine the relationships between recorded spring dust storm outbreaks and satellite-derived vegetation green-up date in Inner Mongolia, Northern China from 1982 to 2008. We find a significant dampening effect of advanced vegetation growth on spring dust storms (r = 0.49, p = 0.01), with a one-day earlier green-up date corresponding to a decrease in annual spring dust storm outbreaks by 3%. Moreover, the higher correlation (r = 0.55, p < 0.01) between green-up date and dust storm outbreak ratio (the ratio of dust storm outbreaks to times of strong wind events) indicates that such effect is independent of changes in surface wind. Spatially, a negative correlation is detected between areas with advanced green-up dates and regional annual spring dust storms (r = -0.49, p = 0.01). This new insight is valuable for understanding dust storms dynamics under the changing climate. Our findings suggest that dust storms in Inner Mongolia will be further mitigated by the projected earlier vegetation green-up in the warming world.
A web-based normative calculator for the uniform data set (UDS) neuropsychological test battery.
Shirk, Steven D; Mitchell, Meghan B; Shaughnessy, Lynn W; Sherman, Janet C; Locascio, Joseph J; Weintraub, Sandra; Atri, Alireza
2011-11-11
With the recent publication of new criteria for the diagnosis of preclinical Alzheimer's disease (AD), there is a need for neuropsychological tools that take premorbid functioning into account in order to detect subtle cognitive decline. Using demographic adjustments is one method for increasing the sensitivity of commonly used measures. We sought to provide a useful online z-score calculator that yields estimates of percentile ranges and adjusts individual performance based on sex, age and/or education for each of the neuropsychological tests of the National Alzheimer's Coordinating Center Uniform Data Set (NACC, UDS). In addition, we aimed to provide an easily accessible method of creating norms for other clinical researchers for their own, unique data sets. Data from 3,268 clinically cognitively-normal older UDS subjects from a cohort reported by Weintraub and colleagues (2009) were included. For all neuropsychological tests, z-scores were estimated by subtracting the raw score from the predicted mean and then dividing this difference score by the root mean squared error term (RMSE) for a given linear regression model. For each neuropsychological test, an estimated z-score was calculated for any raw score based on five different models that adjust for the demographic predictors of SEX, AGE and EDUCATION, either concurrently, individually or without covariates. The interactive online calculator allows the entry of a raw score and provides five corresponding estimated z-scores based on predictions from each corresponding linear regression model. The calculator produces percentile ranks and graphical output. An interactive, regression-based, normative score online calculator was created to serve as an additional resource for UDS clinical researchers, especially in guiding interpretation of individual performances that appear to fall in borderline realms and may be of particular utility for operationalizing subtle cognitive impairment present according to the newly
NASA Astrophysics Data System (ADS)
He, Yuping
2015-03-01
We present calculations of the thermal transport coefficients of Si-based clathrates and solar perovskites, as obtained from ab initio calculations and models, where all input parameters derived from first principles. We elucidated the physical mechanisms responsible for the measured low thermal conductivity in Si-based clatherates and predicted their electronic properties and mobilities, which were later confirmed experimentally. We also predicted that by appropriately tuning the carrier concentration, the thermoelectric figure of merit of Sn and Pb based perovskites may reach values ranging between 1 and 2, which could possibly be further increased by optimizing the lattice thermal conductivity through engineering perovskite superlattices. Work done in collaboration with Prof. G. Galli, and supported by DOE/BES Grant No. DE-FG0206ER46262.
TrackEtching - A Java based code for etched track profile calculations in SSNTDs
NASA Astrophysics Data System (ADS)
Muraleedhara Varier, K.; Sankar, V.; Gangadathan, M. P.
2017-09-01
A java code incorporating a user friendly GUI has been developed to calculate the parameters of chemically etched track profiles of ion-irradiated solid state nuclear track detectors. Huygen's construction of wavefronts based on secondary wavelets has been used to numerically calculate the etched track profile as a function of the etching time. Provision for normal incidence and oblique incidence on the detector surface has been incorporated. Results in typical cases are presented and compared with experimental data. Different expressions for the variation of track etch rate as a function of the ion energy have been utilized. The best set of values of the parameters in the expressions can be obtained by comparing with available experimental data. Critical angle for track development can also be calculated using the present code.
Calculation of the surface tension of liquid Ga-based alloys
NASA Astrophysics Data System (ADS)
Dogan, Ali; Arslan, Hüseyin
2018-05-01
As known, Eyring and his collaborators have applied the structure theory to the properties of binary liquid mixtures. In this work, the Eyring model has been extended to calculate the surface tension of liquid Ga-Bi, Ga-Sn and Ga-In binary alloys. It was found that the addition of Sn, In and Bi into Ga leads to significant decrease in the surface tension of the three Ga-based alloy systems, especially for that of Ga-Bi alloys. The calculated surface tension values of these alloys exhibit negative deviation from the corresponding ideal mixing isotherms. Moreover, a comparison between the calculated results and corresponding literature data indicates a good agreement.
The Triangle Technique: a new evidence-based educational tool for pediatric medication calculations.
Sredl, Darlene
2006-01-01
Many nursing student verbalize an aversion to mathematical concepts and experience math anxiety whenever a mathematical problem is confronted. Since nurses confront mathematical problems on a daily basis, they must learn to feel comfortable with their ability to perform these calculations correctly. The Triangle Technique, a new educational tool available to nurse educators, incorporates evidence-based concepts within a graphic model using visual, auditory, and kinesthetic learning styles to demonstrate pediatric medication calculations of normal therapeutic ranges. The theoretical framework for the technique is presented, as is a pilot study examining the efficacy of the educational tool. Statistically significant results obtained by Pearson's product-moment correlation indicate that students are better able to calculate accurate pediatric therapeutic dosage ranges after participation in the educational intervention of learning the Triangle Technique.
Improvements of the Ray-Tracing Based Method Calculating Hypocentral Loci for Earthquake Location
NASA Astrophysics Data System (ADS)
Zhao, A. H.
2014-12-01
Hypocentral loci are very useful to reliable and visual earthquake location. However, they can hardly be analytically expressed when the velocity model is complex. One of methods numerically calculating them is based on a minimum traveltime tree algorithm for tracing rays: a focal locus is represented in terms of ray paths in its residual field from the minimum point (namely initial point) to low residual points (referred as reference points of the focal locus). The method has no restrictions on the complexity of the velocity model but still lacks the ability of correctly dealing with multi-segment loci. Additionally, it is rather laborious to set calculation parameters for obtaining loci with satisfying completeness and fineness. In this study, we improve the ray-tracing based numerical method to overcome its advantages. (1) Reference points of a hypocentral locus are selected from nodes of the model cells that it goes through, by means of a so-called peeling method. (2) The calculation domain of a hypocentral locus is defined as such a low residual area that its connected regions each include one segment of the locus and hence all the focal locus segments are respectively calculated with the minimum traveltime tree algorithm for tracing rays by repeatedly assigning the minimum residual reference point among those that have not been traced as an initial point. (3) Short ray paths without branching are removed to make the calculated locus finer. Numerical tests show that the improved method becomes capable of efficiently calculating complete and fine hypocentral loci of earthquakes in a complex model.
Baxa, Michael C.; Haddadian, Esmael J.; Jumper, John M.; Freed, Karl F.; Sosnick, Tobin R.
2014-01-01
The loss of conformational entropy is a major contribution in the thermodynamics of protein folding. However, accurate determination of the quantity has proven challenging. We calculate this loss using molecular dynamic simulations of both the native protein and a realistic denatured state ensemble. For ubiquitin, the total change in entropy is TΔSTotal = 1.4 kcal⋅mol−1 per residue at 300 K with only 20% from the loss of side-chain entropy. Our analysis exhibits mixed agreement with prior studies because of the use of more accurate ensembles and contributions from correlated motions. Buried side chains lose only a factor of 1.4 in the number of conformations available per rotamer upon folding (ΩU/ΩN). The entropy loss for helical and sheet residues differs due to the smaller motions of helical residues (TΔShelix−sheet = 0.5 kcal⋅mol−1), a property not fully reflected in the amide N-H and carbonyl C=O bond NMR order parameters. The results have implications for the thermodynamics of folding and binding, including estimates of solvent ordering and microscopic entropies obtained from NMR. PMID:25313044
Trend of earlier spring in central Europe continued
NASA Astrophysics Data System (ADS)
Ungersböck, Markus; Jurkovic, Anita; Koch, Elisabeth; Lipa, Wolfgang; Scheifinger, Helfried; Zach-Hermann, Susanne
2013-04-01
Modern phenology is the study of the timing of recurring biological events in the animal and plant world, the causes of their timing with regard to biotic and abiotic forces, and the interrelation among phases of the same or different species. The relationship between phenology and climate explains the importance of plant phenology for Climate Change studies. Plants require light, water, oxygen mineral nutrients and suitable temperature to grow. In temperate zones the seasonal life cycle of plants is primarily controlled by temperature and day length. Higher spring air temperatures are resulting in an earlier onset of the phenological spring in temperate and cool climate. On the other hand changes in phenology due to climate change do have impact on the climate system itself. Vegetation is a dynamic factor in the earth - climate system and has positive and negative feedback mechanisms to the biogeochemical and biogeophysical fluxes to the atmosphere Since the mid of the 1980s spring springs earlier in Europe and autumn is shifting back to the end of the year resulting in a longer vegetation period. The advancement of spring can be clearly attributed to temperature increase in the months prior to leaf unfolding and flowering, the timing of autumn is more complex and cannot easily be attributed to one or some few parameters. To demonstrate that the observed advancement of spring since the mid of 1980s is pro-longed in 2001 to 2010 and the delay of fall and the lengthening of the growing season is confirmed in the last decade we picked out several indicator plants from the PEP725 database www.pep725.eu. The PEP725 database collects data from different European network operators and thus offers a unique compilation of phenological observations; the database is regularly updated. The data follow the same classification scheme, the so called BBCH coding system so they can be compared. Lilac Syringa vulgaris, birch Betula pendula, beech Fagus and horse chestnut Aesculus
NASA Astrophysics Data System (ADS)
Hu, Liang; Zhao, Nannan; Gao, Zhijian; Mao, Kai; Chen, Wenyu; Fu, Xin
2018-05-01
Determination of the distribution of a generated acoustic field is valuable for studying ultrasonic transducers, including providing the guidance for transducer design and the basis for analyzing their performance, etc. A method calculating the acoustic field based on laser-measured vibration velocities on the ultrasonic transducer surface is proposed in this paper. Without knowing the inner structure of the transducer, the acoustic field outside it can be calculated by solving the governing partial differential equation (PDE) of the field based on the specified boundary conditions (BCs). In our study, the BC on the transducer surface, i.e. the distribution of the vibration velocity on the surface, is accurately determined by laser scanning measurement of discrete points and follows a data fitting computation. In addition, to ensure the calculation accuracy for the whole field even in an inhomogeneous medium, a finite element method is used to solve the governing PDE based on the mixed BCs, including the discretely measured velocity data and other specified BCs. The method is firstly validated on numerical piezoelectric transducer models. The acoustic pressure distributions generated by a transducer operating in an homogeneous and inhomogeneous medium, respectively, are both calculated by the proposed method and compared with the results from other existing methods. Then, the method is further experimentally validated with two actual ultrasonic transducers used for flow measurement in our lab. The amplitude change of the output voltage signal from the receiver transducer due to changing the relative position of the two transducers is calculated by the proposed method and compared with the experimental data. This method can also provide the basis for complex multi-physical coupling computations where the effect of the acoustic field should be taken into account.
NASA Astrophysics Data System (ADS)
Fang, G. J.; Bao, H.
2017-12-01
The widely used method of calculating electric distances is sensitivity method. The sensitivity matrix is the result of linearization and based on the hypothesis that the active power and reactive power are decoupled, so it is inaccurate. In addition, it calculates the ratio of two partial derivatives as the relationship of two dependent variables, so there is no physical meaning. This paper presents a new method for calculating electrical distance, namely transmission impedance method. It forms power supply paths based on power flow tracing, then establishes generalized branches to calculate transmission impedances. In this paper, the target of power flow tracing is S instead of Q. Q itself has no direction and the grid delivers complex power so that S contains more electrical information than Q. By describing the power transmission relationship of the branch and drawing block diagrams in both forward and reverse directions, it can be found that the numerators of feedback parts of two block diagrams are all the transmission impedances. To ensure the distance is scalar, the absolute value of transmission impedance is defined as electrical distance. Dividing network according to the electric distances and comparing with the results of sensitivity method, it proves that the transmission impedance method can adapt to the dynamic change of system better and reach a reasonable subarea division scheme.
Development of a web-based CT dose calculator: WAZA-ARI.
Ban, N; Takahashi, F; Sato, K; Endo, A; Ono, K; Hasegawa, T; Yoshitake, T; Katsunuma, Y; Kai, M
2011-09-01
A web-based computed tomography (CT) dose calculation system (WAZA-ARI) is being developed based on the modern techniques for the radiation transport simulation and for software implementation. Dose coefficients were calculated in a voxel-type Japanese adult male phantom (JM phantom), using the Particle and Heavy Ion Transport code System. In the Monte Carlo simulation, the phantom was irradiated with a 5-mm-thick, fan-shaped photon beam rotating in a plane normal to the body axis. The dose coefficients were integrated into the system, which runs as Java servlets within Apache Tomcat. Output of WAZA-ARI for GE LightSpeed 16 was compared with the dose values calculated similarly using MIRD and ICRP Adult Male phantoms. There are some differences due to the phantom configuration, demonstrating the significance of the dose calculation with appropriate phantoms. While the dose coefficients are currently available only for limited CT scanner models and scanning options, WAZA-ARI will be a useful tool in clinical practice when development is finalised.
GPU-based ultra-fast dose calculation using a finite size pencil beam model.
Gu, Xuejun; Choi, Dongju; Men, Chunhua; Pan, Hubert; Majumdar, Amitava; Jiang, Steve B
2009-10-21
Online adaptive radiation therapy (ART) is an attractive concept that promises the ability to deliver an optimal treatment in response to the inter-fraction variability in patient anatomy. However, it has yet to be realized due to technical limitations. Fast dose deposit coefficient calculation is a critical component of the online planning process that is required for plan optimization of intensity-modulated radiation therapy (IMRT). Computer graphics processing units (GPUs) are well suited to provide the requisite fast performance for the data-parallel nature of dose calculation. In this work, we develop a dose calculation engine based on a finite-size pencil beam (FSPB) algorithm and a GPU parallel computing framework. The developed framework can accommodate any FSPB model. We test our implementation in the case of a water phantom and the case of a prostate cancer patient with varying beamlet and voxel sizes. All testing scenarios achieved speedup ranging from 200 to 400 times when using a NVIDIA Tesla C1060 card in comparison with a 2.27 GHz Intel Xeon CPU. The computational time for calculating dose deposition coefficients for a nine-field prostate IMRT plan with this new framework is less than 1 s. This indicates that the GPU-based FSPB algorithm is well suited for online re-planning for adaptive radiotherapy.
NASA Astrophysics Data System (ADS)
Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi
2018-05-01
The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by an adopted algorithm for a VC model, 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity, and 3) propagation error (PE), which is caused by the variables' error. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. Especially, how to quantify the uncertainty of VCs is proposed by a confidence interval based on truncation error (TE). In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated by a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte-Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich uncertainty modelling and analysis-related theories of geographic information science.
Ertl, P
1998-02-01
Easy to use, interactive, and platform-independent WWW-based tools are ideal for development of chemical applications. By using the newly emerging Web technologies such as Java applets and sophisticated scripting, it is possible to deliver powerful molecular processing capabilities directly to the desk of synthetic organic chemists. In Novartis Crop Protection in Basel, a Web-based molecular modelling system has been in use since 1995. In this article two new modules of this system are presented: a program for interactive calculation of important hydrophobic, electronic, and steric properties of organic substituents, and a module for substituent similarity searches enabling the identification of bioisosteric functional groups. Various possible applications of calculated substituent parameters are also discussed, including automatic design of molecules with the desired properties and creation of targeted virtual combinatorial libraries.
1993-06-03
personal communication , Institute of Marine Research, Bergen, Norway (1990). fishery sense although they could be major contributors to 7"Report of the...volume scattering strength data with model calculations based or, Program Element No. quasisynoptically collected fishery data Pjfect No. 6. Author(s...and 5000 Hz in the Norwegian Sea in August 1988 and west of Great Britain in April 1989. Coincidentally, extensive fishery surveys were conducted at
Research on trust calculation of wireless sensor networks based on time segmentation
NASA Astrophysics Data System (ADS)
Su, Yaoxin; Gao, Xiufeng; Qiao, Wenxin
2017-05-01
Because the wireless sensor network is different from the traditional network characteristics, it is easy to accept the intrusion from the compromise node. The trust mechanism is the most effective way to defend against internal attacks. Aiming at the shortcomings of the existing trust mechanism, a method of calculating the trust of wireless sensor networks based on time segmentation is proposed. It improves the security of the network and extends the life of the network
NASA Astrophysics Data System (ADS)
Olson, John R.
This is a quasi-experimental study of 261 first year high school students that analyzes gains made through the use of calculator based rangers attached to calculators. The study has qualitative components but is based on quantitative tests. Biechner's TUG-K test was used for the pretest, posttest, and post-posttest. The population was divided into one group that predicted the results before using the CBRs and another that did not predict first but completed the same activities. The data for the groups was further disaggregated into learning style groups (based on Kolb's Learning Styles Inventory), type of class (advanced vs. general physics), and gender. Four instructors used the labs developed by the author for this study and created significant differences between the groups by instructor based on interviews, participant observation and one way ANOVA. No significant differences were found between learning styles based on MANOVA. No significant differences were found between predict and nonpredict groups for the one way ANOVAs or MANOVA, however, some differences do exist as measured by a survey and participant observation. Significant differences do exist between gender and type of class (advanced/general) based on one way ANOVA and MANOVA. The males outscored the females on all tests and the advanced physics scored higher than the general physics on all tests. The advanced physics scoring higher was expected but the difference between genders was not.
An AIS-based approach to calculate atmospheric emissions from the UK fishing fleet
NASA Astrophysics Data System (ADS)
Coello, Jonathan; Williams, Ian; Hudson, Dominic A.; Kemp, Simon
2015-08-01
The fishing industry is heavily reliant on the use of fossil fuel and emits large quantities of greenhouse gases and other atmospheric pollutants. Methods used to calculate fishing vessel emissions inventories have traditionally utilised estimates of fuel efficiency per unit of catch. These methods have weaknesses because they do not easily allow temporal and geographical allocation of emissions. A large proportion of fishing and other small commercial vessels are also omitted from global shipping emissions inventories such as the International Maritime Organisation's Greenhouse Gas Studies. This paper demonstrates an activity-based methodology for the production of temporally- and spatially-resolved emissions inventories using data produced by Automatic Identification Systems (AIS). The methodology addresses the issue of how to use AIS data for fleets where not all vessels use AIS technology and how to assign engine load when vessels are towing trawling or dredging gear. The results of this are compared to a fuel-based methodology using publicly available European Commission fisheries data on fuel efficiency and annual catch. The results show relatively good agreement between the two methodologies, with an estimate of 295.7 kilotons of fuel used and 914.4 kilotons of carbon dioxide emitted between May 2012 and May 2013 using the activity-based methodology. Different methods of calculating speed using AIS data are also compared. The results indicate that using the speed data contained directly in the AIS data is preferable to calculating speed from the distance and time interval between consecutive AIS data points.
Carlsson Tedgren, A; Persson, M; Nilsson, J
Purpose: To retrospectively re-calculate dose distributions for selected head and neck cancer patients, earlier treated with HDR 192Ir brachytherapy, using Monte Carlo (MC) simulations and compare results to distributions from the planning system derived using TG43 formalism. To study differences between dose to medium (as obtained with the MC code) and dose to water in medium as obtained through (1) ratios of stopping powers and (2) ratios of mass energy absorption coefficients between water and medium. Methods: The MC code Algebra was used to calculate dose distributions according to earlier actual treatment plans using anonymized plan data and CT imagesmore » in DICOM format. Ratios of stopping power and mass energy absorption coefficients for water with various media obtained from 192-Ir spectra were used in toggling between dose to water and dose to media. Results: Differences between initial planned TG43 dose distributions and the doses to media calculated by MC are insignificant in the target volume. Differences are moderate (within 4–5 % at distances of 3–4 cm) but increase with distance and are most notable in bone and at the patient surface. Differences between dose to water and dose to medium are within 1-2% when using mass energy absorption coefficients to toggle between the two quantities but increase to above 10% for bone using stopping power ratios. Conclusion: MC predicts target doses for head and neck cancer patients in close agreement with TG43. MC yields improved dose estimations outside the target where a larger fraction of dose is from scattered photons. It is important with awareness and a clear reporting of absorbed dose values in using model based algorithms. Differences in bone media can exceed 10% depending on how dose to water in medium is defined.« less
Quantification of confounding factors in MRI-based dose calculations as applied to prostate IMRT
NASA Astrophysics Data System (ADS)
Maspero, Matteo; Seevinck, Peter R.; Schubert, Gerald; Hoesl, Michaela A. U.; van Asselen, Bram; Viergever, Max A.; Lagendijk, Jan J. W.; Meijer, Gert J.; van den Berg, Cornelis A. T.
2017-02-01
Magnetic resonance (MR)-only radiotherapy treatment planning requires pseudo-CT (pCT) images to enable MR-based dose calculations. To verify the accuracy of MR-based dose calculations, institutions interested in introducing MR-only planning will have to compare pCT-based and computer tomography (CT)-based dose calculations. However, interpreting such comparison studies may be challenging, since potential differences arise from a range of confounding factors which are not necessarily specific to MR-only planning. Therefore, the aim of this study is to identify and quantify the contribution of factors confounding dosimetric accuracy estimation in comparison studies between CT and pCT. The following factors were distinguished: set-up and positioning differences between imaging sessions, MR-related geometric inaccuracy, pCT generation, use of specific calibration curves to convert pCT into electron density information, and registration errors. The study comprised fourteen prostate cancer patients who underwent CT/MRI-based treatment planning. To enable pCT generation, a commercial solution (MRCAT, Philips Healthcare, Vantaa, Finland) was adopted. IMRT plans were calculated on CT (gold standard) and pCTs. Dose difference maps in a high dose region (CTV) and in the body volume were evaluated, and the contribution to dose errors of possible confounding factors was individually quantified. We found that the largest confounding factor leading to dose difference was the use of different calibration curves to convert pCT and CT into electron density (0.7%). The second largest factor was the pCT generation which resulted in pCT stratified into a fixed number of tissue classes (0.16%). Inter-scan differences due to patient repositioning, MR-related geometric inaccuracy, and registration errors did not significantly contribute to dose differences (0.01%). The proposed approach successfully identified and quantified the factors confounding accurate MRI-based dose calculation in
None, None
2015-09-28
Coulomb interaction between charged particles inside a bunch is one of the most importance collective effects in beam dynamics, becoming even more significant as the energy of the particle beam is lowered to accommodate analytical and low-Z material imaging purposes such as in the time resolved Ultrafast Electron Microscope (UEM) development currently underway at Michigan State University. In addition, space charge effects are the key limiting factor in the development of ultrafast atomic resolution electron imaging and diffraction technologies and are also correlated with an irreversible growth in rms beam emittance due to fluctuating components of the nonlinear electron dynamics.more » In the short pulse regime used in the UEM, space charge effects also lead to virtual cathode formation in which the negative charge of the electrons emitted at earlier times, combined with the attractive surface field, hinders further emission of particles and causes a degradation of the pulse properties. Space charge and virtual cathode effects and their remediation are core issues for the development of the next generation of high-brightness UEMs. Since the analytical models are only applicable for special cases, numerical simulations, in addition to experiments, are usually necessary to accurately understand the space charge effect. In this paper we will introduce a grid-free differential algebra based multiple level fast multipole algorithm, which calculates the 3D space charge field for n charged particles in arbitrary distribution with an efficiency of O(n), and the implementation of the algorithm to a simulation code for space charge dominated photoemission processes.« less
Reisner, Jon Michael; D'Angelo, Gennaro; Koo, Eunmo; ...
2018-02-13
In this paper, we present a multi-scale study examining the impact of a regional exchange of nuclear weapons on global climate. Our models investigate multiple phases of the effects of nuclear weapons usage, including growth and rise of the nuclear fireball, ignition and spread of the induced firestorm, and comprehensive Earth system modeling of the oceans, land, ice, and atmosphere. This study follows from the scenario originally envisioned by Robock et al. (2007a), based on the analysis of Toon et al. (2007), which assumes a regional exchange between India and Pakistan of fifty 15-kiloton weapons detonated by each side. Wemore » expand this scenario by modeling the processes that lead to production of black carbon, in order to refine the black carbon forcing estimates of these previous studies. When the Earth system model is initiated with 5 × 10 9 kg of black carbon in the upper troposphere (approximately 9 to 13 km), the impact on climate variables such as global temperature and precipitation in our simulations is similar to that predicted by previously published work. However, while our thorough simulations of the firestorm produce about 3.7 × 10 9 kg of black carbon, we find that the vast majority of the black carbon never reaches an altitude above weather systems (approximately 12 km). Therefore, our Earth system model simulations conducted with model-informed atmospheric distributions of black carbon produce significantly lower global climatic impacts than assessed in prior studies, as the carbon at lower altitudes is more quickly removed from the atmosphere. In addition, our model ensembles indicate that statistically significant effects on global surface temperatures are limited to the first 5 years and are much smaller in magnitude than those shown in earlier works. None of the simulations produced a nuclear winter effect. We find that the effects on global surface temperatures are not uniform and are concentrated primarily around the highest arctic
NASA Astrophysics Data System (ADS)
Reisner, Jon; D'Angelo, Gennaro; Koo, Eunmo; Even, Wesley; Hecht, Matthew; Hunke, Elizabeth; Comeau, Darin; Bos, Randall; Cooley, James
2018-03-01
We present a multiscale study examining the impact of a regional exchange of nuclear weapons on global climate. Our models investigate multiple phases of the effects of nuclear weapons usage, including growth and rise of the nuclear fireball, ignition and spread of the induced firestorm, and comprehensive Earth system modeling of the oceans, land, ice, and atmosphere. This study follows from the scenario originally envisioned by Robock, Oman, Stenchikov, et al. (2007, https://doi.org/10.5194/acp-7-2003-2007), based on the analysis of Toon et al. (2007, https://doi.org/10.5194/acp-7-1973-2007), which assumes a regional exchange between India and Pakistan of fifty 15 kt weapons detonated by each side. We expand this scenario by modeling the processes that lead to production of black carbon, in order to refine the black carbon forcing estimates of these previous studies. When the Earth system model is initiated with 5 × 109 kg of black carbon in the upper troposphere (approximately from 9 to 13 km), the impact on climate variables such as global temperature and precipitation in our simulations is similar to that predicted by previously published work. However, while our thorough simulations of the firestorm produce about 3.7 × 109 kg of black carbon, we find that the vast majority of the black carbon never reaches an altitude above weather systems (approximately 12 km). Therefore, our Earth system model simulations conducted with model-informed atmospheric distributions of black carbon produce significantly lower global climatic impacts than assessed in prior studies, as the carbon at lower altitudes is more quickly removed from the atmosphere. In addition, our model ensembles indicate that statistically significant effects on global surface temperatures are limited to the first 5 years and are much smaller in magnitude than those shown in earlier works. None of the simulations produced a nuclear winter effect. We find that the effects on global surface temperatures
Reisner, Jon Michael; D'Angelo, Gennaro; Koo, Eunmo
In this paper, we present a multi-scale study examining the impact of a regional exchange of nuclear weapons on global climate. Our models investigate multiple phases of the effects of nuclear weapons usage, including growth and rise of the nuclear fireball, ignition and spread of the induced firestorm, and comprehensive Earth system modeling of the oceans, land, ice, and atmosphere. This study follows from the scenario originally envisioned by Robock et al. (2007a), based on the analysis of Toon et al. (2007), which assumes a regional exchange between India and Pakistan of fifty 15-kiloton weapons detonated by each side. Wemore » expand this scenario by modeling the processes that lead to production of black carbon, in order to refine the black carbon forcing estimates of these previous studies. When the Earth system model is initiated with 5 × 10 9 kg of black carbon in the upper troposphere (approximately 9 to 13 km), the impact on climate variables such as global temperature and precipitation in our simulations is similar to that predicted by previously published work. However, while our thorough simulations of the firestorm produce about 3.7 × 10 9 kg of black carbon, we find that the vast majority of the black carbon never reaches an altitude above weather systems (approximately 12 km). Therefore, our Earth system model simulations conducted with model-informed atmospheric distributions of black carbon produce significantly lower global climatic impacts than assessed in prior studies, as the carbon at lower altitudes is more quickly removed from the atmosphere. In addition, our model ensembles indicate that statistically significant effects on global surface temperatures are limited to the first 5 years and are much smaller in magnitude than those shown in earlier works. None of the simulations produced a nuclear winter effect. We find that the effects on global surface temperatures are not uniform and are concentrated primarily around the highest arctic
A Web-based interface to calculate phonotactic probability for words and nonwords in English
VITEVITCH, MICHAEL S.; LUCE, PAUL A.
2008-01-01
Phonotactic probability refers to the frequency with which phonological segments and sequences of phonological segments occur in words in a given language. We describe one method of estimating phonotactic probabilities based on words in American English. These estimates of phonotactic probability have been used in a number of previous studies and are now being made available to other researchers via a Web-based interface. Instructions for using the interface, as well as details regarding how the measures were derived, are provided in the present article. The Phonotactic Probability Calculator can be accessed at http://www.people.ku.edu/~mvitevit/PhonoProbHome.html. PMID:15641436
NASA Astrophysics Data System (ADS)
Shevenell, Lisa
1999-03-01
Values of evapotranspiration are required for a variety of water planning activities in arid and semi-arid climates, yet data requirements are often large, and it is costly to obtain this information. This work presents a method where a few, readily available data (temperature, elevation) are required to estimate potential evapotranspiration (PET). A method using measured temperature and the calculated ratio of total to vertical radiation (after the work of Behnke and Maxey, 1969) to estimate monthly PET was applied for the months of April-October and compared with pan evaporation measurements. The test area used in this work was in Nevada, which has 124 weather stations that record sufficient amounts of temperature data. The calculated PET values were found to be well correlated (R2=0·940-0·983, slopes near 1·0) with mean monthly pan evaporation measurements at eight weather stations.In order to extrapolate these calculated PET values to areas without temperature measurements and to sites at differing elevations, the state was divided into five regions based on latitude, and linear regressions of PET versus elevation were calculated for each of these regions. These extrapolated PET values generally compare well with the pan evaporation measurements (R2=0·926-0·988, slopes near 1·0). The estimated values are generally somewhat lower than the pan measurements, in part because the effects of wind are not explicitly considered in the calculations, and near-freezing temperatures result in a calculated PET of zero at higher elevations in the spring months. The calculated PET values for April-October are 84-100% of the measured pan evaporation values. Using digital elevation models in a geographical information system, calculated values were adjusted for slope and aspect, and the data were used to construct a series of maps of monthly PET. The resultant maps show a realistic distribution of regional variations in PET throughout Nevada which inversely mimics
Park, Peter C.; Schreibmann, Eduard; Roper, Justin
2015-03-15
Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR.more » Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.« less
Miliordos, Evangelos; Xantheas, Sotiris S.
We propose a general procedure for the numerical calculation of the harmonic vibrational frequencies that is based on internal coordinates and Wilson’s GF methodology via double differentiation of the energy. The internal coordinates are defined as the geometrical parameters of a Z-matrix structure, thus avoiding issues related to their redundancy. Linear arrangements of atoms are described using a dummy atom of infinite mass. The procedure has been automated in FORTRAN90 and its main advantage lies in the nontrivial reduction of the number of single-point energy calculations needed for the construction of the Hessian matrix when compared to the corresponding numbermore » using double differentiation in Cartesian coordinates. For molecules of C 1 symmetry the computational savings in the energy calculations amount to 36N – 30, where N is the number of atoms, with additional savings when symmetry is present. Typical applications for small and medium size molecules in their minimum and transition state geometries as well as hydrogen bonded clusters (water dimer and trimer) are presented. Finally, in all cases the frequencies based on internal coordinates differ on average by <1 cm –1 from those obtained from Cartesian coordinates.« less
NASA Astrophysics Data System (ADS)
Jian, Le; Cao, Wang; Jintao, Yang; Yinge, Wang
2018-04-01
This paper describes the design of a dynamic voltage restorer (DVR) that can simultaneously protect several sensitive loads from voltage sags in a region of an MV distribution network. A novel reference voltage calculation method based on zero-sequence voltage optimisation is proposed for this DVR to optimise cost-effectiveness in compensation of voltage sags with different characteristics in an ungrounded neutral system. Based on a detailed analysis of the characteristics of voltage sags caused by different types of faults and the effect of the wiring mode of the transformer on these characteristics, the optimisation target of the reference voltage calculation is presented with several constraints. The reference voltages under all types of voltage sags are calculated by optimising the zero-sequence component, which can reduce the degree of swell in the phase-to-ground voltage after compensation to the maximum extent and can improve the symmetry degree of the output voltages of the DVR, thereby effectively increasing the compensation ability. The validity and effectiveness of the proposed method are verified by simulation and experimental results.
Fernández-Fernández, Mario; Rodríguez-González, Pablo; García Alonso, J Ignacio
2016-10-01
We have developed a novel, rapid and easy calculation procedure for Mass Isotopomer Distribution Analysis based on multiple linear regression which allows the simultaneous calculation of the precursor pool enrichment and the fraction of newly synthesized labelled proteins (fractional synthesis) using linear algebra. To test this approach, we used the peptide RGGGLK as a model tryptic peptide containing three subunits of glycine. We selected glycine labelled in two 13 C atoms ( 13 C 2 -glycine) as labelled amino acid to demonstrate that spectral overlap is not a problem in the proposed methodology. The developed methodology was tested first in vitro by changing the precursor pool enrichment from 10 to 40% of 13 C 2 -glycine. Secondly, a simulated in vivo synthesis of proteins was designed by combining the natural abundance RGGGLK peptide and 10 or 20% 13 C 2 -glycine at 1 : 1, 1 : 3 and 3 : 1 ratios. Precursor pool enrichments and fractional synthesis values were calculated with satisfactory precision and accuracy using a simple spreadsheet. This novel approach can provide a relatively rapid and easy means to measure protein turnover based on stable isotope tracers. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
New approach to CT pixel-based photon dose calculations in heterogeneous media
Wong, J.W.; Henkelman, R.M.
The effects of small cavities on dose in water and the dose in a homogeneous nonunit density medium illustrate that inhomogeneities do not act independently in photon dose perturbation, and serve as two constraints which should be satisfied by approximate methods of computed tomography (CT) pixel-based dose calculations. Current methods at best satisfy only one of the two constraints and show inadequacies in some intermediate geometries. We have developed an approximate method that satisfies both these constraints and treats much of the synergistic effect of multiple inhomogeneities correctly. The method calculates primary and first-scatter doses by first-order ray tracing withmore » the first-scatter contribution augmented by a component of second scatter that behaves like first scatter. Multiple-scatter dose perturbation values extracted from small cavity experiments are used in a function which approximates the small residual multiple-scatter dose. For a wide range of geometries tested, our method agrees very well with measurements. The average deviation is less than 2% with a maximum of 3%. In comparison, calculations based on existing methods can have errors larger than 10%.« less
Detection, attribution, and sensitivity of trends toward earlier streamflow in the Sierra Nevada
Maurer, E.P.; Stewart, I.T.; Bonfils, Celine; Duffy, P.B.; Cayan, D.
2007-01-01
Observed changes in the timing of snowmelt dominated streamflow in the western United States are often linked to anthropogenic or other external causes. We assess whether observed streamflow timing changes can be statistically attributed to external forcing, or whether they still lie within the bounds of natural (internal) variability for four large Sierra Nevada (CA) basins, at inflow points to major reservoirs. Streamflow timing is measured by "center timing" (CT), the day when half the annual flow has passed a given point. We use a physically based hydrology model driven by meteorological input from a global climate model to quantify the natural variability in CT trends. Estimated 50-year trends in CT due to natural climate variability often exceed estimated actual CT trends from 1950 to 1999. Thus, although observed trends in CT to date may be statistically significant, they cannot yet be statistically attributed to external influences on climate. We estimate that projected CT changes at the four major reservoir inflows will, with 90% confidence, exceed those from natural variability within 1-4 decades or 4-8 decades, depending on rates of future greenhouse gas emissions. To identify areas most likely to exhibit CT changes in response to rising temperatures, we calculate changes in CT under temperature increases from 1 to 5??. We find that areas with average winter temperatures between -2??C and -4??C are most likely to respond with significant CT shifts. Correspondingly, elevations from 2000 to 2800 in are most sensitive to temperature increases, with CT changes exceeding 45 days (earlier) relative to 1961-1990. Copyright 2007 by the American Geophysical Union.
Morrison, Hali; Menon, Geetha; Sloboda, Ron
Purpose: To investigate the accuracy of model-based dose calculations using a collapsed-cone algorithm for COMS eye plaques loaded with I-125 seeds. Methods: The Nucletron SelectSeed 130.002 I-125 seed and the 12 mm COMS eye plaque were incorporated into a research version of the Oncentra® Brachy v4.5 treatment planning system which uses the Advanced Collapsed-cone Engine (ACE) algorithm. Comparisons of TG-43 and high-accuracy ACE doses were performed for a single seed in a 30×30×30 cm{sup 3} water box, as well as with one seed in the central slot of the 12 mm COMS eye plaque. The doses along the plaque centralmore » axis (CAX) were used to calculate the carrier correction factor, T(r), and were compared to tabulated and MCNP6 simulated doses for both the SelectSeed and IsoAid IAI-125A seeds. Results: The ACE calculated dose for the single seed in water was on average within 0.62 ± 2.2% of the TG-43 dose, with the largest differences occurring near the end-welds. The ratio of ACE to TG-43 calculated doses along the CAX (T(r)) of the 12 mm COMS plaque for the SelectSeed was on average within 3.0% of previously tabulated data, and within 2.9% of the MCNP6 simulated values. The IsoAid and SelectSeed T(r) values agreed within 0.3%. Conclusions: Initial comparisons show good agreement between ACE and MC doses for a single seed in a 12 mm COMS eye plaque; more complicated scenarios are being investigated to determine the accuracy of this calculation method.« less
Target Water Consumption Calculation for Human Water Management based on Water Balance
NASA Astrophysics Data System (ADS)
Sang, X.; Zhai, Z.; Ye, Y.; Zhai, J.
2016-12-01
Degradation of the regional ecological environment has become increasingly serious due to the rapid increase of water usage. Critical to water consumption management is a good approach to control the growth of water usage. Through the identification and analysis of water consumption for various sectors in the hydrosocial cycle, the method for calculating the regional target water consumption also is derived based on water balance theory. Analysis shows that during 1980 - 2004 in Tianjin City, there were 22 years in which the actual water consumption of Tianjin exceeded its target water consumption, with an average excess of 66 million m3 annually. Moreover, calculations show that the maximum human target water consumption water supply is 1.91 billion m3/a. If water consumption is controlled according to the target, the sustainable development of water resource, economic and social growth, and ecological environment in this region can be expected to be achieved.
A note on sample size calculation for mean comparisons based on noncentral t-statistics.
Chow, Shein-Chung; Shao, Jun; Wang, Hansheng
2002-11-01
One-sample and two-sample t-tests are commonly used in analyzing data from clinical trials in comparing mean responses from two drug products. During the planning stage of a clinical study, a crucial step is the sample size calculation, i.e., the determination of the number of subjects (patients) needed to achieve a desired power (e.g., 80%) for detecting a clinically meaningful difference in the mean drug responses. Based on noncentral t-distributions, we derive some sample size calculation formulas for testing equality, testing therapeutic noninferiority/superiority, and testing therapeutic equivalence, under the popular one-sample design, two-sample parallel design, and two-sample crossover design. Useful tables are constructed and some examples are given for illustration.
A Brief User's Guide to the Excel ® -Based DF Calculator
Jubin, Robert T.
2016-06-01
To understand the importance of capturing penetrating forms of iodine as well as the other volatile radionuclides, a calculation tool was developed in the form of an Excel ® spreadsheet to estimate the overall plant decontamination factor (DF). The tool requires the user to estimate splits of the volatile radionuclides within the major portions of the reprocessing plant, speciation of iodine and individual DFs for each off-gas stream within the Used Nuclear Fuel reprocessing plant. The Impact to the overall plant DF for each volatile radionuclide is then calculated by the tool based on the specific user choices. The Excelmore » ® spreadsheet tracks both elemental and penetrating forms of iodine separately and allows changes in the speciation of iodine at each processing step. It also tracks 3H, 14C and 85Kr. This document provides a basic user's guide to the manipulation of this tool.« less
Activity-based costing: a practical model for cost calculation in radiotherapy.
Lievens, Yolande; van den Bogaert, Walter; Kesteloot, Katrien
2003-10-01
The activity-based costing method was used to compute radiotherapy costs. This report describes the model developed, the calculated costs, and possible applications for the Leuven radiotherapy department. Activity-based costing is an advanced cost calculation technique that allocates resource costs to products based on activity consumption. In the Leuven model, a complex allocation principle with a large diversity of cost drivers was avoided by introducing an extra allocation step between activity groups and activities. A straightforward principle of time consumption, weighed by some factors of treatment complexity, was used. The model was developed in an iterative way, progressively defining the constituting components (costs, activities, products, and cost drivers). Radiotherapy costs are predominantly determined by personnel and equipment cost. Treatment-related activities consume the greatest proportion of the resource costs, with treatment delivery the most important component. This translates into products that have a prolonged total or daily treatment time being the most costly. The model was also used to illustrate the impact of changes in resource costs and in practice patterns. The presented activity-based costing model is a practical tool to evaluate the actual cost structure of a radiotherapy department and to evaluate possible resource or practice changes.
Absorbed fractions in a voxel-based phantom calculated with the MCNP-4B code.
Yoriyaz, H; dos Santos, A; Stabin, M G; Cabezas, R
2000-07-01
A new approach for calculating internal dose estimates was developed through the use of a more realistic computational model of the human body. The present technique shows the capability to build a patient-specific phantom with tomography data (a voxel-based phantom) for the simulation of radiation transport and energy deposition using Monte Carlo methods such as in the MCNP-4B code. MCNP-4B absorbed fractions for photons in the mathematical phantom of Snyder et al. agreed well with reference values. Results obtained through radiation transport simulation in the voxel-based phantom, in general, agreed well with reference values. Considerable discrepancies, however, were found in some cases due to two major causes: differences in the organ masses between the phantoms and the occurrence of organ overlap in the voxel-based phantom, which is not considered in the mathematical phantom.
Hemodynamic parameters change earlier than tissue oxygen tension in hemorrhage.
Pestel, Gunther J; Fukui, Kimiko; Kimberger, Oliver; Hager, Helmut; Kurz, Andrea; Hiltebrand, Luzius B
2010-05-15
Untreated hypovolemia results in impaired outcome. This study tests our hypothesis whether general hemodynamic parameters detect acute blood loss earlier than monitoring parameters of regional tissue beds. Eight pigs (23-25 kg) were anesthetized and mechanically ventilated. A pulmonary artery catheter and an arterial catheter were inserted. Tissue oxygen tension was measured with Clark-type electrodes in the jejunal and colonic wall, in the liver, and subcutaneously. Jejunal microcirculation was assessed by laser Doppler flowmetry (LDF). Intravascular volume was optimized using difference in pulse pressure (dPP) to keep dPP below 13%. Sixty minutes after preparation, baseline measurements were taken. At first, 5% of total blood volume was withdrawn, followed by another 5% increment, and then in 10% increments until death. After withdrawal of 5% of estimated blood volume, dPP increased from 6.1% +/- 3.0% to 20.8% +/- 2.7% (P < 0.01). Mean arterial pressure (MAP), mean pulmonary artery pressure (PAP) and pulmonary artery occlusion pressure (PAOP) decreased with a blood loss of 10% (P < 0.01). Cardiac output (CO) changed after a blood loss of 20% (P < 0.05). Tissue oxygen tension in central organs, and blood flow in the jejunal muscularis decreased (P < 0.05) after a blood loss of 20%. Tissue oxygen tension in the skin, and jejunal mucosa blood flow decreased (P < 0.05) after a blood loss of 40% and 50%, respectively. In this hemorrhagic pig model systemic hemodynamic parameters were more sensitive to detect acute hypovolemia than tissue oxygen tension measurements or jejunal LDF measurements. Acute blood loss was detected first by dPP. Copyright (c) 2010 Elsevier Inc. All rights reserved.
Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation.
Ziegenhein, Peter; Pirner, Sven; Ph Kamerling, Cornelis; Oelfke, Uwe
2015-08-07
Monte-Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Its clinical application, however, still is limited by the long runtimes conventional implementations of MC algorithms require to deliver sufficiently accurate results on high resolution imaging data. In order to overcome this obstacle we developed the software-package PhiMC, which is capable of computing precise dose distributions in a sub-minute time-frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement to DPM. The multi-core implementation of PhiMC scales well between different computer architectures and achieves a speed-up of up to 37[Formula: see text] compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25[Formula: see text] and 1.95[Formula: see text] faster than a well-known GPU implementation of the same simulation method on a NVIDIA Tesla C2050. Since CPUs work on several hundreds of GB RAM the typical GPU memory limitation does not apply for our implementation and high resolution clinical plans can be calculated.
Study of cosmic ray interaction model based on atmospheric muons for the neutrino flux calculation
Sanuki, T.; Honda, M.; Kajita, T.
2007-02-15
We have studied the hadronic interaction for the calculation of the atmospheric neutrino flux by summarizing the accurately measured atmospheric muon flux data and comparing with simulations. We find the atmospheric muon and neutrino fluxes respond to errors in the {pi}-production of the hadronic interaction similarly, and compare the atmospheric muon flux calculated using the HKKM04 [M. Honda, T. Kajita, K. Kasahara, and S. Midorikawa, Phys. Rev. D 70, 043008 (2004).] code with experimental measurements. The {mu}{sup +}+{mu}{sup -} data show good agreement in the 1{approx}30 GeV/c range, but a large disagreement above 30 GeV/c. The {mu}{sup +}/{mu}{sup -} ratiomore » shows sizable differences at lower and higher momenta for opposite directions. As the disagreements are considered to be due to assumptions in the hadronic interaction model, we try to improve it phenomenologically based on the quark parton model. The improved interaction model reproduces the observed muon flux data well. The calculation of the atmospheric neutrino flux will be reported in the following paper [M. Honda et al., Phys. Rev. D 75, 043006 (2007).].« less
A vantage from space can detect earlier drought onset: an approach using relative humidity.
Farahmand, Alireza; AghaKouchak, Amir; Teixeira, Joao
2015-02-25
Each year, droughts cause significant economic and agricultural losses across the world. The early warning and onset detection of drought is of particular importance for effective agriculture and water resource management. Previous studies show that the Standard Precipitation Index (SPI), a measure of precipitation deficit, detects drought onset earlier than other indicators. Here we show that satellite-based near surface air relative humidity data can further improve drought onset detection and early warning. This paper introduces the Standardized Relative Humidity Index (SRHI) based on the NASA Atmospheric Infrared Sounder (AIRS) observations. The results indicate that the SRHI typically detects the drought onset earlier than the SPI. While the AIRS mission was not originally designed for drought monitoring, we show that its relative humidity data offers a new and unique avenue for drought monitoring and early warning. We conclude that the early warning aspects of SRHI may have merit for integration into current drought monitoring systems.
A Vantage from Space Can Detect Earlier Drought Onset: An Approach Using Relative Humidity
Farahmand, Alireza; AghaKouchak, Amir; Teixeira, Joao
2015-01-01
Each year, droughts cause significant economic and agricultural losses across the world. The early warning and onset detection of drought is of particular importance for effective agriculture and water resource management. Previous studies show that the Standard Precipitation Index (SPI), a measure of precipitation deficit, detects drought onset earlier than other indicators. Here we show that satellite-based near surface air relative humidity data can further improve drought onset detection and early warning. This paper introduces the Standardized Relative Humidity Index (SRHI) based on the NASA Atmospheric Infrared Sounder (AIRS) observations. The results indicate that the SRHI typically detects the drought onset earlier than the SPI. While the AIRS mission was not originally designed for drought monitoring, we show that its relative humidity data offers a new and unique avenue for drought monitoring and early warning. We conclude that the early warning aspects of SRHI may have merit for integration into current drought monitoring systems. PMID:25711500
Facilitating earlier transfer of care from acute stroke services into the community.
Robinson, Jennifer
This article outlines an initiative to reduce length of stay for stroke patients within an acute hospital and to facilitate earlier transfer of care. Existing care provision was remodelled and expanded to deliver stroke care to patients within a community bed-based intermediate care facility or intermediate care at home. This new model of care has improved the delivery of rehabilitation through alternative and innovative ways of addressing service delivery that meet the needs of the patients.
NASA Astrophysics Data System (ADS)
Feng, Yefeng; Wu, Qin; Hu, Jianbing; Xu, Zhichao; Peng, Cheng; Xia, Zexu
2018-03-01
Interface induced polarization has a significant impact on permittivity of 0–3 type polymer composites with Si based semi-conducting fillers. Polarity of Si based filler, polarity of polymer matrix and grain size of filler are closely connected with induced polarization and permittivity of composites. However, unlike 2–2 type composites, the real permittivity of Si based fillers in 0–3 type composites could be not directly measured. Therefore, achieving the theoretical permittivity of fillers in 0–3 composites through effective medium approximation (EMA) models should be very necessary. In this work, the real permittivity results of Si based semi-conducting fillers in ten different 0–3 polymer composite systems were calculated by linear fitting of simplified EMA models, based on particularity of reported parameters in those composites. The results further confirmed the proposed interface induced polarization. The results further verified significant influences of filler polarity, polymer polarity and filler size on induced polarization and permittivity of composites as well. High self-consistency was gained between present modelling and prior measuring. This work might offer a facile and effective route to achieve the difficultly measured dielectric performances of discrete filler phase in some special polymer based composite systems.
Strong ion calculator--a practical bedside application of modern quantitative acid-base physiology.
Lloyd, P
2004-12-01
To review acid-base balance by considering the physical effects of ions in solution and describe the use of a calculator to derive the strong ion difference and Atot and strong ion gap. A review of articles reporting on the use of strong ion difference and Atot in the interpretation of acid base balance. Tremendous progress has been made in the last decade in our understanding of acid-base physiology. We now have a quantitative understanding of the mechanisms underlying the acidity of an aqueous solution. We can now predict the acidity given information about the concentration of the various ion-forming species within it. We can predict changes in acid-base status caused by disturbance of these factors, and finally, we can detect unmeasured anions with greater sensitivity than was previously possible with the anion gap, using either arterial or venous blood sampling. Acid-base interpretation has ceased to be an intuitive and arcane art. Much of it is now an exact computation that can be automated and incorporated into an online hospital laboratory information system. All diseases and all therapies can affect a patient's acid-base status only through the final common pathway of one or more of the three independent factors. With Constable's equations we can now accurately predict the acidity of plasma. When there is a discrepancy between the observed and predicted acidity we can deduce the net concentration of unmeasured ions to account for the difference.
SU-E-T-226: Correction of a Standard Model-Based Dose Calculator Using Measurement Data
Chen, M; Jiang, S; Lu, W
Purpose: To propose a hybrid method that combines advantages of the model-based and measurement-based method for independent dose calculation. Modeled-based dose calculation, such as collapsed-cone-convolution/superposition (CCCS) or the Monte-Carlo method, models dose deposition in the patient body accurately; however, due to lack of detail knowledge about the linear accelerator (LINAC) head, commissioning for an arbitrary machine is tedious and challenging in case of hardware changes. On the contrary, the measurement-based method characterizes the beam property accurately but lacks the capability of dose disposition modeling in heterogeneous media. Methods: We used a standard CCCS calculator, which is commissioned by published data,more » as the standard model calculator. For a given machine, water phantom measurements were acquired. A set of dose distributions were also calculated using the CCCS for the same setup. The difference between the measurements and the CCCS results were tabulated and used as the commissioning data for a measurement based calculator. Here we used a direct-ray-tracing calculator (ΔDRT). The proposed independent dose calculation consists of the following steps: 1. calculate D-model using CCCS. 2. calculate D-ΔDRT using ΔDRT. 3. combine Results: D=D-model+D-ΔDRT. Results: The hybrid dose calculation was tested on digital phantoms and patient CT data for standard fields and IMRT plan. The results were compared to dose calculated by the treatment planning system (TPS). The agreement of the hybrid and the TPS was within 3%, 3 mm for over 98% of the volume for phantom studies and lung patients. Conclusion: The proposed hybrid method uses the same commissioning data as those for the measurement-based method and can be easily extended to any non-standard LINAC. The results met the accuracy, independence, and simple commissioning criteria for an independent dose calculator.« less
Kis, Zoltán; Eged, Katalin; Voigt, Gabriele; Meckbach, Reinhard; Müller, Heinz
2004-02-01
External gamma exposures from radionuclides deposited on surfaces usually result in the major contribution to the total dose to the public living in urban-industrial environments. The aim of the paper is to give an example for a calculation of the collective and averted collective dose due to the contamination and decontamination of deposition surfaces in a complex environment based on the results of Monte Carlo simulations. The shielding effects of the structures in complex and realistic industrial environments (where productive and/or commercial activity is carried out) were computed by the use of Monte Carlo method. Several types of deposition areas (walls, roofs, windows, streets, lawn) were considered. Moreover, this paper gives a summary about the time dependence of the source strengths relative to a reference surface and a short overview about the mechanical and chemical intervention techniques which can be applied in this area. An exposure scenario was designed based on a survey of average German and Hungarian supermarkets. In the first part of the paper the air kermas per photon per unit area due to each specific deposition area contaminated by 137Cs were determined at several arbitrary locations in the whole environment relative to a reference value of 8.39 x 10(-4) pGy per gamma m(-2). The calculations provide the possibility to assess the whole contribution of a specific deposition area to the collective dose, separately. According to the current results, the roof and the paved area contribute the most part (approximately 92%) to the total dose in the first year taking into account the relative contamination of the deposition areas. When integrating over 10 or 50 y, these two surfaces remain the most important contributors as well but the ratio will increasingly be shifted in favor of the roof. The decontamination of the roof and the paved area results in about 80-90% of the total averted collective dose in each calculated time period (1, 10, 50 y).
Tadano, Shigeru; Takeda, Ryo; Miyagawa, Hiroaki
2013-01-01
This paper proposes a method for three dimensional gait analysis using wearable sensors and quaternion calculations. Seven sensor units consisting of a tri-axial acceleration and gyro sensors, were fixed to the lower limbs. The acceleration and angular velocity data of each sensor unit were measured during level walking. The initial orientations of the sensor units were estimated using acceleration data during upright standing position and the angular displacements were estimated afterwards using angular velocity data during gait. Here, an algorithm based on quaternion calculation was implemented for orientation estimation of the sensor units. The orientations of the sensor units were converted to the orientations of the body segments by a rotation matrix obtained from a calibration trial. Body segment orientations were then used for constructing a three dimensional wire frame animation of the volunteers during the gait. Gait analysis was conducted on five volunteers, and results were compared with those from a camera-based motion analysis system. Comparisons were made for the joint trajectory in the horizontal and sagittal plane. The average RMSE and correlation coefficient (CC) were 10.14 deg and 0.98, 7.88 deg and 0.97, 9.75 deg and 0.78 for the hip, knee and ankle flexion angles, respectively. PMID:23877128
Structure and Magnetic Properties in Ruthenium-Based Full-Heusler Alloys: AB INITIO Calculations
NASA Astrophysics Data System (ADS)
Bahlouli, S.; Aarizou, Z.; Elchikh, M.
2013-12-01
In this paper, we present ab initio calculations within density functional theory (DFT) to investigate structure, electronic and magnetic properties of Ru2CrZ (Z = Si, Ge and Sn) full-Heusler alloys. We have used the developed full-potential linearized muffin tin orbitals (FP-LMTO) based on the local spin density approximation (LSDA) with the PLane Wave expansion (PLW). In particular, we found that these Ruthenium-based Heusler alloys have the antiferromagnetic (AFM) type II as ground state. Then, we studied and discussed the magnetic properties belonging to our different magnetic structures: AFM type II, AFM type I and ferromagnetic (FM) phase. We also found that Ru2CrSi and Ru2CrGe exhibit a semiconducting behavior whereas Ru2CrSn has a semimetallic-like behavior as it is experimentally found. We made an estimation of Néel temperatures (TN) in the framework of the mean-field theory and used the energy differences approach to deduce the relevant short-range nearest-neighbor (J1) and next-nearest-neighbor (J2) interactions. The calculated TN are somewhat overestimated to the available experimental ones.
BaTiO3-based nanolayers and nanotubes: first-principles calculations.
Evarestov, Robert A; Bandura, Andrei V; Kuruch, Dmitrii D
2013-01-30
The first-principles calculations using hybrid exchange-correlation functional and localized atomic basis set are performed for BaTiO(3) (BTO) nanolayers and nanotubes (NTs) with the structure optimization. Both the cubic and the ferroelectric BTO phases are used for the nanolayers and NTs modeling. It follows from the calculations that nanolayers of the different ferroelectric BTO phases have the practically identical surface energies and are more stable than nanolayers of the cubic phase. Thin nanosheets composed of three or more dense layers of (0 1 0) and (0 1 1[overline]) faces preserve the ferroelectric displacements inherent to the initial bulk phase. The structure and stability of BTO single-wall NTs depends on the original bulk crystal phase and a wall thickness. The majority of the considered NTs with the low formation and strain energies has the mirror plane perpendicular to the tube axis and therefore cannot exhibit ferroelectricity. The NTs folded from (0 1 1[overline]) layers may show antiferroelectric arrangement of Ti-O bonds. Comparison of stability of the BTO-based and SrTiO(3)-based NTs shows that the former are more stable than the latter. Copyright © 2012 Wiley Periodicals, Inc.
Slimani, Faiçal A A; Hamdi, Mahdjoub; Bentourkia, M'hamed
2018-05-01
Monte Carlo (MC) simulation is widely recognized as an important technique to study the physics of particle interactions in nuclear medicine and radiation therapy. There are different codes dedicated to dosimetry applications and widely used today in research or in clinical application, such as MCNP, EGSnrc and Geant4. However, such codes made the physics easier but the programming remains a tedious task even for physicists familiar with computer programming. In this paper we report the development of a new interface GEANT4 Dose And Radiation Interactions (G4DARI) based on GEANT4 for absorbed dose calculation and for particle tracking in humans, small animals and complex phantoms. The calculation of the absorbed dose is performed based on 3D CT human or animal images in DICOM format, from images of phantoms or from solid volumes which can be made from any pure or composite material to be specified by its molecular formula. G4DARI offers menus to the user and tabs to be filled with values or chemical formulas. The interface is described and as application, we show results obtained in a lung tumor in a digital mouse irradiated with seven energy beams, and in a patient with glioblastoma irradiated with five photon beams. In conclusion, G4DARI can be easily used by any researcher without the need to be familiar with computer programming, and it will be freely available as an application package. Copyright © 2018 Elsevier Ltd. All rights reserved.
Preliminary Monte Carlo calculations for the UNCOSS neutron-based explosive detector
NASA Astrophysics Data System (ADS)
Eleon, C.; Perot, B.; Carasco, C.
2010-07-01
The goal of the FP7 UNCOSS project (Underwater Coastal Sea Surveyor) is to develop a non destructive explosive detection system based on the associated particle technique, in view to improve the security of coastal area and naval infrastructures where violent conflicts took place. The end product of the project will be a prototype of a complete coastal survey system, including a neutron-based sensor capable of confirming the presence of explosives on the sea bottom. A 3D analysis of prompt gamma rays induced by 14 MeV neutrons will be performed to identify elements constituting common military explosives, such as C, N and O. This paper presents calculations performed with the MCNPX computer code to support the ongoing design studies performed by the UNCOSS collaboration. Detection efficiencies, time and energy resolutions of the possible gamma-ray detectors are compared, which show NaI(Tl) or LaBr 3(Ce) scintillators will be suitable for this application. The effect of neutron attenuation and scattering in the seawater, influencing the counting statistics and signal-to-noise ratio, are also studied with calculated neutron time-of-flight and gamma-ray spectra for an underwater TNT target.
Results of Propellant Mixing Variable Study Using Precise Pressure-Based Burn Rate Calculations
NASA Technical Reports Server (NTRS)
Stefanski, Philip L.
2014-01-01
A designed experiment was conducted in which three mix processing variables (pre-curative addition mix temperature, pre-curative addition mixing time, and mixer speed) were varied to estimate their effects on within-mix propellant burn rate variability. The chosen discriminator for the experiment was the 2-inch diameter by 4-inch long (2x4) Center-Perforated (CP) ballistic evaluation motor. Motor nozzle throat diameters were sized to produce a common targeted chamber pressure. Initial data analysis did not show a statistically significant effect. Because propellant burn rate must be directly related to chamber pressure, a method was developed that showed statistically significant effects on chamber pressure (either maximum or average) by adjustments to the process settings. Burn rates were calculated from chamber pressures and these were then normalized to a common pressure for comparative purposes. The pressure-based method of burn rate determination showed significant reduction in error when compared to results obtained from the Brooks' modification of the propellant web-bisector burn rate determination method. Analysis of effects using burn rates calculated by the pressure-based method showed a significant correlation of within-mix burn rate dispersion to mixing duration and the quadratic of mixing duration. The findings were confirmed in a series of mixes that examined the effects of mixing time on burn rate variation, which yielded the same results.
Multi-scale calculation based on dual domain material point method combined with molecular dynamics
Dhakal, Tilak Raj
This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from a MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation in a small spatial region, such as phase interfaces, or crackmore » tips, this multi-scale method can be used to consider non-equilibrium thermodynamic e ects in a macroscopic domain. This method takes advantage that the material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive especially in our multi-scale method where we calculate stress in each material point using MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress in each material point is performed in GPU using CUDA to accelerate the
Quasiparticle properties of DNA bases from GW calculations in a Wannier basis
NASA Astrophysics Data System (ADS)
Qian, Xiaofeng; Marzari, Nicola; Umari, Paolo
2009-03-01
The quasiparticle GW-Wannier (GWW) approach [1] has been recently developed to overcome the size limitations of conventional planewave GW calculations. By taking advantage of the localization properties of the maximally-localized Wannier functions and choosing a small set of polarization basis we reduce the number of Bloch wavefunctions products required for the evaluation of dynamical polarizabilities, and in turn greatly reduce memory requirements and computational efficiency. We apply GWW to study quasiparticle properties of different DNA bases and base-pairs, and solvation effects on the energy gap, demonstrating in the process the key advantages of this approach. [1] P. Umari,G. Stenuit, and S. Baroni, cond-mat/0811.1453
Dahlgren, Björn; Reif, Maria M; Hünenberger, Philippe H; Hansen, Niels
2012-10-09
The raw ionic solvation free energies calculated on the basis of atomistic (explicit-solvent) simulations are extremely sensitive to the boundary conditions and treatment of electrostatic interactions used during these simulations. However, as shown recently [Kastenholz, M. A.; Hünenberger, P. H. J. Chem. Phys.2006, 124, 224501 and Reif, M. M.; Hünenberger, P. H. J. Chem. Phys.2011, 134, 144104], the application of an appropriate correction scheme allows for a conversion of the methodology-dependent raw data into methodology-independent results. In this work, methodology-independent derivative thermodynamic hydration and aqueous partial molar properties are calculated for the Na(+) and Cl(-) ions at P° = 1 bar and T(-) = 298.15 K, based on the SPC water model and on ion-solvent Lennard-Jones interaction coefficients previously reoptimized against experimental hydration free energies. The hydration parameters considered are the hydration free energy and enthalpy. The aqueous partial molar parameters considered are the partial molar entropy, volume, heat capacity, volume-compressibility, and volume-expansivity. Two alternative calculation methods are employed to access these properties. Method I relies on the difference in average volume and energy between two aqueous systems involving the same number of water molecules, either in the absence or in the presence of the ion, along with variations of these differences corresponding to finite pressure or/and temperature changes. Method II relies on the calculation of the hydration free energy of the ion, along with variations of this free energy corresponding to finite pressure or/and temperature changes. Both methods are used considering two distinct variants in the application of the correction scheme. In variant A, the raw values from the simulations are corrected after the application of finite difference in pressure or/and temperature, based on correction terms specifically designed for derivative parameters at
An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations.
Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun
2015-10-21
Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency considerations. In principle, an analytical source model should be more preferred for GPU-based MC dose engines than a phase-space file-based model, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical field-independent source model specifically developed for GPU-based MC dose calculations, associated with a GPU-friendly sampling scheme. A key concept called phase-space-ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy and reside in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive corresponding model parameters. To efficiently use our model in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously are of the same type and close in energy to alleviate GPU thread divergences. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum
Wang, Junmei; Hou, Tingjun
2012-01-01
It is of great interest in modern drug design to accurately calculate the free energies of protein-ligand or nucleic acid-ligand binding. MM-PBSA (Molecular Mechanics-Poisson Boltzmann Surface Area) and MM-GBSA (Molecular Mechanics-Generalized Born Surface Area) have gained popularity in this field. For both methods, the conformational entropy, which is usually calculated through normal mode analysis (NMA), is needed to calculate the absolute binding free energies. Unfortunately, NMA is computationally demanding and becomes a bottleneck of the MM-PB/GBSA-NMA methods. In this work, we have developed a fast approach to estimate the conformational entropy based upon solvent accessible surface area calculations. In our approach, the conformational entropy of a molecule, S, can be obtained by summing up the contributions of all atoms, no matter they are buried or exposed. Each atom has two types of surface areas, solvent accessible surface area (SAS) and buried SAS (BSAS). The two types of surface areas are weighted to estimate the contribution of an atom to S. Atoms having the same atom type share the same weight and a general parameter k is applied to balance the contributions of the two types of surface areas. This entropy model was parameterized using a large set of small molecules for which their conformational entropies were calculated at the B3LYP/6-31G* level taking the solvent effect into account. The weighted solvent accessible surface area (WSAS) model was extensively evaluated in three tests. For the convenience, TS, the product of temperature T and conformational entropy S, were calculated in those tests. T was always set to 298.15 K through the text. First of all, good correlations were achieved between WSAS TS and NMA TS for 44 protein or nucleic acid systems sampled with molecular dynamics simulations (10 snapshots were collected for post-entropy calculations): the mean correlation coefficient squares (R2) was 0.56. As to the 20 complexes, the TS changes
Compulsive buying: Earlier illicit drug use, impulse buying, depression, and adult ADHD symptoms.
Brook, Judith S; Zhang, Chenshu; Brook, David W; Leukefeld, Carl G
2015-08-30
This longitudinal study examined the association between psychosocial antecedents, including illicit drug use, and adult compulsive buying (CB) across a 29-year time period from mean age 14 to mean age 43. Participants originally came from a community-based random sample of residents in two upstate New York counties. Multivariate linear regression analysis was used to study the relationship between the participant's earlier psychosocial antecedents and adult CB in the fifth decade of life. The results of the multivariate linear regression analyses showed that gender (female), earlier adult impulse buying (IB), depressive mood, illicit drug use, and concurrent ADHD symptoms were all significantly associated with adult CB at mean age 43. It is important that clinicians treating CB in adults should consider the role of drug use, symptoms of ADHD, IB, depression, and family factors in CB. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Compulsive Buying: Earlier Illicit Drug Use, Impulse Buying, Depression, and Adult ADHD Symptoms
Brook, Judith S.; Zhang, Chenshu; Brook, David W.; Leukefeld, Carl G.
2015-01-01
This longitudinal study examined the association between psychosocial antecedents, including illicit drug use, and adult compulsive buying (CB) across a 29-year time period from mean age 14 to mean age 43. Participants originally came from a community-based random sample of residents in two upstate New York counties. Multivariate linear regression analysis was used to study the relationship between the participant’s earlier psychosocial antecedents and adult CB in the fifth decade of life. The results of the multivariate linear regression analyses showed that gender (female), earlier adult impulse buying (IB), depressive mood, illicit drug use, and concurrent ADHD symptoms were all significantly associated with adult CB at mean age 43. It is important that clinicians treating CB in adults should consider the role of drug use, symptoms of ADHD, IB, depression, and family factors in CB. PMID:26165963
Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy.
Martinez-Rovira, I; Sempau, J; Prezado, Y
2012-05-01
Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75-μm-wide microbeams spaced by 200-400 μm) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) to the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Good agreement between MC simulations and experimental results was achieved, even at the interfaces between two
Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy
Martinez-Rovira, I.; Sempau, J.; Prezado, Y.
Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75-{mu}m-wide microbeams spaced by 200-400 {mu}m) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct featuresmore » of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) to the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved
Liu, Miao; Rong, Ziqin; Malik, Rahul; ...
2014-12-16
In this study, batteries that shuttle multivalent ions such as Mg 2+ and Ca 2+ ions are promising candidates for achieving higher energy density than available with current Li-ion technology. Finding electrode materials that reversibly store and release these multivalent cations is considered a major challenge for enabling such multivalent battery technology. In this paper, we use recent advances in high-throughput first-principles calculations to systematically evaluate the performance of compounds with the spinel structure as multivalent intercalation cathode materials, spanning a matrix of five different intercalating ions and seven transition metal redox active cations. We estimate the insertion voltage, capacity,more » thermodynamic stability of charged and discharged states, as well as the intercalating ion mobility and use these properties to evaluate promising directions. Our calculations indicate that the Mn 2O 4 spinel phase based on Mg and Ca are feasible cathode materials. In general, we find that multivalent cathodes exhibit lower voltages compared to Li cathodes; the voltages of Ca spinels are ~0.2 V higher than those of Mg compounds (versus their corresponding metals), and the voltages of Mg compounds are ~1.4 V higher than Zn compounds; consequently, Ca and Mg spinels exhibit the highest energy densities amongst all the multivalent cation species. The activation barrier for the Al³⁺ ion migration in the Mn₂O₄ spinel is very high (~1400 meV for Al 3+ in the dilute limit); thus, the use of an Al based Mn spinel intercalation cathode is unlikely. Amongst the choice of transition metals, Mn-based spinel structures rank highest when balancing all the considered properties.« less
Xie, Bing; Nguyen, Trung Hai; Minh, David D. L.
2017-01-01
We demonstrate the feasibility of estimating protein-ligand binding free energies using multiple rigid receptor configurations. Based on T4 lysozyme snapshots extracted from six alchemical binding free energy calculations with a flexible receptor, binding free energies were estimated for a total of 141 ligands. For 24 ligands, the calculations reproduced flexible-receptor estimates with a correlation coefficient of 0.90 and a root mean square error of 1.59 kcal/mol. The accuracy of calculations based on Poisson-Boltzmann/Surface Area implicit solvent was comparable to previously reported free energy calculations. PMID:28430432
How is Version 6 different than earlier versions?
Atmospheric Science Data Center
2015-10-28
... integrated a priori CO profile. Second, the diagnostic 'Water Vapor Climatology Content' has been deleted. This diagnostic was included in previous products because of a data quality issue with the NCEP water vapor profiles. MERRA-based water vapor ...
Is Earth-based scaling a valid procedure for calculating heat flows for Mars?
NASA Astrophysics Data System (ADS)
Ruiz, Javier; Williams, Jean-Pierre; Dohm, James M.; Fernández, Carlos; López, Valle
2013-09-01
Heat flow is a very important parameter for constraining the thermal evolution of a planetary body. Several procedures for calculating heat flows for Mars from geophysical or geological proxies have been used, which are valid for the time when the structures used as indicators were formed. The more common procedures are based on estimates of lithospheric strength (the effective elastic thickness of the lithosphere or the depth to the brittle-ductile transition). On the other hand, several works by Kargel and co-workers have estimated martian heat flows from scaling the present-day terrestrial heat flow to Mars, but the so-obtained values are much higher than those deduced from lithospheric strength. In order to explain the discrepancy, a recent paper by Rodriguez et al. (Rodriguez, J.A.P., Kargel, J.S., Tanaka, K.L., Crown, D.A., Berman, D.C., Fairén, A.G., Baker, V.R., Furfaro, R., Candelaria, P., Sasaki, S. [2011]. Icarus 213, 150-194) criticized the heat flow calculations for ancient Mars presented by Ruiz et al. (Ruiz, J., Williams, J.-P., Dohm, J.M., Fernández, C., López, V. [2009]. Icarus 207, 631-637) and other studies calculating ancient martian heat flows from lithospheric strength estimates, and casted doubts on the validity of the results obtained by these works. Here however we demonstrate that the discrepancy is due to computational and conceptual errors made by Kargel and co-workers, and we conclude that the scaling from terrestrial heat flow values is not a valid procedure for estimating reliable heat flows for Mars.
Application of the Activity-Based Costing Method for Unit-Cost Calculation in a Hospital
Javid, Mahdi; Hadian, Mohammad; Ghaderi, Hossein; Ghaffari, Shahram; Salehi, Masoud
2016-01-01
Background: Choosing an appropriate accounting system for hospital has always been a challenge for hospital managers. Traditional cost system (TCS) causes cost distortions in hospital. Activity-based costing (ABC) method is a new and more effective cost system. Objective: This study aimed to compare ABC with TCS method in calculating the unit cost of medical services and to assess its applicability in Kashani Hospital, Shahrekord City, Iran. Methods: This cross-sectional study was performed on accounting data of Kashani Hospital in 2013. Data on accounting reports of 2012 and other relevant sources at the end of 2012 were included. To apply ABC method, the hospital was divided into several cost centers and five cost categories were defined: wage, equipment, space, material, and overhead costs. Then activity centers were defined. ABC method was performed into two phases. First, the total costs of cost centers were assigned to activities by using related cost factors. Then the costs of activities were divided to cost objects by using cost drivers. After determining the cost of objects, the cost price of medical services was calculated and compared with those obtained from TCS. Results: The Kashani Hospital had 81 physicians, 306 nurses, and 328 beds with the mean occupancy rate of 67.4% during 2012. Unit cost of medical services, cost price of occupancy bed per day, and cost per outpatient service were calculated. The total unit costs by ABC and TCS were respectively 187.95 and 137.70 USD, showing 50.34 USD more unit cost by ABC method. ABC method represented more accurate information on the major cost components. Conclusion: By utilizing ABC, hospital managers have a valuable accounting system that provides a true insight into the organizational costs of their department. PMID:26234974
Acceptance and commissioning of a treatment planning system based on Monte Carlo calculations.
Lopez-Tarjuelo, J; Garcia-Molla, R; Juan-Senabre, X J; Quiros-Higueras, J D; Santos-Serra, A; de Marco-Blancas, N; Calzada-Feliu, S
2014-04-01
The Monaco Treatment Planning System (TPS), based on a virtual energy fluence model of the photon beam head components of the linac and a dose computation engine made with Monte Carlo (MC) algorithm X-Ray Voxel MC (XVMC), has been tested before being put into clinical use. An Elekta Synergy with 6 MV was characterized using routine equipment. After the machine's model was installed, a set of functionality, geometric, dosimetric and data transfer tests were performed. The dosimetric tests included dose calculations in water, heterogeneous phantoms and Intensity Modulated Radiation Therapy (IMRT) verifications. Data transfer tests were run for every imaging device, TPS and the electronic medical record linked to Monaco. Functionality and geometric tests were run properly. Dose calculations in water were in accordance with measurements so that, in 95% of cases, differences were up to 1.9%. Dose calculation in heterogeneous media showed expected results found in the literature. IMRT verification results with an ionization chamber led to dose differences lower than 2.5% for points inside a standard gradient. When an 2-D array was used, all the fields passed the g (3%, 3 mm) test with a percentage of succeeding points between 90% and 95%, of which the majority of the mentioned fields had a percentage of succeeding points between 95% and 100%. Data transfer caused problems that had to be solved by means of changing our workflow. In general, tests led to satisfactory results. Monaco performance complied with published international recommendations and scored highly in the dosimetric ambit. However, the problems detected when the TPS was put to work together with our current equipment showed that this kind of product must be completely commissioned, without neglecting data workflow, before treating the first patient.
SGFSC: speeding the gene functional similarity calculation based on hash tables.
Tian, Zhen; Wang, Chunyu; Guo, Maozu; Liu, Xiaoyan; Teng, Zhixia
2016-11-04
In recent years, many measures of gene functional similarity have been proposed and widely used in all kinds of essential research. These methods are mainly divided into two categories: pairwise approaches and group-wise approaches. However, a common problem with these methods is their time consumption, especially when measuring the gene functional similarities of a large number of gene pairs. The problem of computational efficiency for pairwise approaches is even more prominent because they are dependent on the combination of semantic similarity. Therefore, the efficient measurement of gene functional similarity remains a challenging problem. To speed current gene functional similarity calculation methods, a novel two-step computing strategy is proposed: (1) establish a hash table for each method to store essential information obtained from the Gene Ontology (GO) graph and (2) measure gene functional similarity based on the corresponding hash table. There is no need to traverse the GO graph repeatedly for each method with the help of the hash table. The analysis of time complexity shows that the computational efficiency of these methods is significantly improved. We also implement a novel Speeding Gene Functional Similarity Calculation tool, namely SGFSC, which is bundled with seven typical measures using our proposed strategy. Further experiments show the great advantage of SGFSC in measuring gene functional similarity on the whole genomic scale. The proposed strategy is successful in speeding current gene functional similarity calculation methods. SGFSC is an efficient tool that is freely available at http://nclab.hit.edu.cn/SGFSC . The source code of SGFSC can be downloaded from http://pan.baidu.com/s/1dFFmvpZ .
Application of the Activity-Based Costing Method for Unit-Cost Calculation in a Hospital.
Javid, Mahdi; Hadian, Mohammad; Ghaderi, Hossein; Ghaffari, Shahram; Salehi, Masoud
2015-05-17
Choosing an appropriate accounting system for hospital has always been a challenge for hospital managers. Traditional cost system (TCS) causes cost distortions in hospital. Activity-based costing (ABC) method is a new and more effective cost system. This study aimed to compare ABC with TCS method in calculating the unit cost of medical services and to assess its applicability in Kashani Hospital, Shahrekord City, Iran. This cross-sectional study was performed on accounting data of Kashani Hospital in 2013. Data on accounting reports of 2012 and other relevant sources at the end of 2012 were included. To apply ABC method, the hospital was divided into several cost centers and five cost categories were defined: wage, equipment, space, material, and overhead costs. Then activity centers were defined. ABC method was performed into two phases. First, the total costs of cost centers were assigned to activities by using related cost factors. Then the costs of activities were divided to cost objects by using cost drivers. After determining the cost of objects, the cost price of medical services was calculated and compared with those obtained from TCS. The Kashani Hospital had 81 physicians, 306 nurses, and 328 beds with the mean occupancy rate of 67.4% during 2012. Unit cost of medical services, cost price of occupancy bed per day, and cost per outpatient service were calculated. The total unit costs by ABC and TCS were respectively 187.95 and 137.70 USD, showing 50.34 USD more unit cost by ABC method. ABC method represented more accurate information on the major cost components. By utilizing ABC, hospital managers have a valuable accounting system that provides a true insight into the organizational costs of their department.
Star sub-pixel centroid calculation based on multi-step minimum energy difference method
NASA Astrophysics Data System (ADS)
Wang, Duo; Han, YanLi; Sun, Tengfei
2013-09-01
The star's centroid plays a vital role in celestial navigation, star images which be gotten during daytime, due to the strong sky background, have a low SNR, and the star objectives are nearly submerged in the background, takes a great trouble to the centroid localization. Traditional methods, such as a moment method, weighted centroid calculation method is simple but has a big error, especially in the condition of a low SNR. Gaussian method has a high positioning accuracy, but the computational complexity. Analysis of the energy distribution in star image, a location method for star target centroids based on multi-step minimum energy difference is proposed. This method uses the linear superposition to narrow the centroid area, in the certain narrow area uses a certain number of interpolation to pixels for the pixels' segmentation, and then using the symmetry of the stellar energy distribution, tentatively to get the centroid position: assume that the current pixel is the star centroid position, and then calculates and gets the difference of the sum of the energy which in the symmetric direction(in this paper we take the two directions of transverse and longitudinal) and the equal step length(which can be decided through different conditions, the paper takes 9 as the step length) of the current pixel, and obtain the centroid position in this direction when the minimum difference appears, and so do the other directions, then the validation comparison of simulated star images, and compare with several traditional methods, experiments shows that the positioning accuracy of the method up to 0.001 pixel, has good effect to calculate the centroid of low SNR conditions; at the same time, uses this method on a star map which got at the fixed observation site during daytime in near-infrared band, compare the results of the paper's method with the position messages which were known of the star, it shows that :the multi-step minimum energy difference method achieves a better
Reproducibility measurements of three methods for calculating in vivo MR-based knee kinematics.
Lansdown, Drew A; Zaid, Musa; Pedoia, Valentina; Subburaj, Karupppasamy; Souza, Richard; Benjamin, C; Li, Xiaojuan
2015-08-01
To describe three quantification methods for magnetic resonance imaging (MRI)-based knee kinematic evaluation and to report on the reproducibility of these algorithms. T2 -weighted, fast-spin echo images were obtained of the bilateral knees in six healthy volunteers. Scans were repeated for each knee after repositioning to evaluate protocol reproducibility. Semiautomatic segmentation defined regions of interest for the tibia and femur. The posterior femoral condyles and diaphyseal axes were defined using the previously defined tibia and femur. All segmentation was performed twice to evaluate segmentation reliability. Anterior tibial translation (ATT) and internal tibial rotation (ITR) were calculated using three methods: a tibial-based registration system, a combined tibiofemoral-based registration method with all manual segmentation, and a combined tibiofemoral-based registration method with automatic definition of condyles and axes. Intraclass correlation coefficients and standard deviations across multiple measures were determined. Reproducibility of segmentation was excellent (ATT = 0.98; ITR = 0.99) for both combined methods. ATT and ITR measurements were also reproducible across multiple scans in the combined registration measurements with manual (ATT = 0.94; ITR = 0.94) or automatic (ATT = 0.95; ITR = 0.94) condyles and axes. The combined tibiofemoral registration with automatic definition of the posterior femoral condyle and diaphyseal axes allows for improved knee kinematics quantification with excellent in vivo reproducibility. © 2014 Wiley Periodicals, Inc.
Metadyn View: Fast web-based viewer of free energy surfaces calculated by metadynamics
NASA Astrophysics Data System (ADS)
Hošek, Petr; Spiwok, Vojtěch
2016-01-01
Metadynamics is a highly successful enhanced sampling technique for simulation of molecular processes and prediction of their free energy surfaces. An in-depth analysis of data obtained by this method is as important as the simulation itself. Although there are several tools to compute free energy surfaces from metadynamics data, they usually lack user friendliness and a build-in visualization part. Here we introduce Metadyn View as a fast and user friendly viewer of bias potential/free energy surfaces calculated by metadynamics in Plumed package. It is based on modern web technologies including HTML5, JavaScript and Cascade Style Sheets (CSS). It can be used by visiting the web site and uploading a HILLS file. It calculates the bias potential/free energy surface on the client-side, so it can run online or offline without necessity to install additional web engines. Moreover, it includes tools for measurement of free energies and free energy differences and data/image export.
Radiation doses and neutron irridation effects on human cells based on calculations
NASA Astrophysics Data System (ADS)
Radojevic, B. B.; Cukavac, M.; Jovanovic, D.
In general, main aim of our paper is to follow influence of neutron's radiation on materials, but one of possible applications of fast neutrons in therapeutical reasons i.e. their influence on carcinom cells of difficuilt geometries in human bodies too. Interactions between neutrons and human cells of tissue are analysed here. We know that the light nuclei of hydrogen, nitrogen, carbon, and oxygen are main constituents of human cells, and that different nuclear models are usually used to present interactions of nuclear particles with mentioned elements. Some of most widely used pre-equilibrium nuclear models are: intranuclear cascade model (ICN), Harp-Miller-Berne (HMB), geometry-dependent hybrid (GDH) and exciton models (EM). In this paper is studied and calculated the primary energetic spectra of the secundary particles (neutrons, protons, and gamas) emitted from this interactions, and followed by corresponding integral cross sections, based on exciton model (EM). The total emission cross-section is the sum of emissions in all stages of energies. Obtained spectra for interactions type of (n, n'), (n, p), and (n, ?), for various incident neutron energies in the interval from 3 MeV up to 30 MeV are analysed too. Some results of calculations are presented here.
Implementation and validation of an implant-based coordinate system for RSA migration calculation.
Laende, Elise K; Deluzio, Kevin J; Hennigar, Allan W; Dunbar, Michael J
2009-10-16
An in vitro radiostereometric analysis (RSA) phantom study of a total knee replacement was carried out to evaluate the effect of implementing two new modifications to the conventional RSA procedure: (i) adding a landmark of the tibial component as an implant marker and (ii) defining an implant-based coordinate system constructed from implant landmarks for the calculation of migration results. The motivation for these two modifications were (i) to improve the representation of the implant by the markers by including the stem tip marker which increases the marker distribution (ii) to recover clinical RSA study cases with insufficient numbers of markers visible in the implant polyethylene and (iii) to eliminate errors in migration calculations due to misalignment of the anatomical axes with the RSA global coordinate system. The translational and rotational phantom studies showed no loss of accuracy with the two new measurement methods. The RSA system employing these methods has a precision of better than 0.05 mm for translations and 0.03 degrees for rotations, and an accuracy of 0.05 mm for translations and 0.15 degrees for rotations. These results indicate that the new methods to improve the interpretability, relevance, and standardization of the results do not compromise precision and accuracy, and are suitable for application to clinical data.
NASA Astrophysics Data System (ADS)
Nguyen, Son Chi; Vilster Hansen, Bjarke Knud; Hoffmann, Søren Vrønning; Spanget-Larsen, Jens
2008-09-01
The electronic transitions of emodin (1,3,8-trihydroxy-6-methyl-9,10-anthraquinone, E) and its conjugate base (3-oxido-6-methyl-1,8-dihydroxy-9,10-anthraquinone, Ecb) were investigated by UV-Vis linear dichroism (LD) spectroscopy on molecular samples aligned in stretched poly(vinylalcohol). The experiments in the UV region were performed with synchrotron radiation, thereby obtaining significantly improved signal to noise ratio compared with traditional technology. The LD spectra provided information on the polarization directions of the observed transitions, thereby leading to resolution of otherwise overlapping, differently polarized transitions. The investigation was supported by PCM-TD-DFT calculations; a mixed discrete/continuum solvation model was applied in the case of the strongly solvated Ecb anion. The calculations led to excellent agreement with the observed transitions, resulting in the assignment of at least seven excited electronic states in the region 15,000-50,000 cm -1 for each species. A recent assignment of the absorption spectrum of E to a superposition of contributions from 9,10- and 1,10-anthraquinoid tautomeric forms was not supported by the results of the present investigation.
NASA Astrophysics Data System (ADS)
Chen, Xiaol; Guo, Bei; Tuo, Jinliang; Zhou, Ruixin; Lu, Yang
2017-08-01
Nowadays, people are paying more and more attention to the noise reduction of household refrigerator compressor. This paper established a sound field bounded by compressor shell and ISO3744 standard field points. The Acoustic Transfer Vector (ATV) in the sound field radiated by a refrigerator compressor shell were calculated which fits the test result preferably. Then the compressor shell surface is divided into several parts. Based on Acoustic Transfer Vector approach, the sound pressure contribution to the field points and the sound power contribution to the sound field of each part were calculated. To obtain the noise radiation in the sound field, the sound pressure cloud charts were analyzed, and the contribution curves in different frequency of each part were acquired. Meanwhile, the sound power contribution of each part in different frequency was analyzed, to ensure those parts where contributes larger sound power. Through the analysis of acoustic contribution, those parts where radiate larger noise on the compressor shell were determined. This paper provides a credible and effective approach on the structure optimal design of refrigerator compressor shell, which is meaningful in the noise and vibration reduction.
Characterization of tsunamigenic earthquake in Java region based on seismic wave calculation
Pribadi, Sugeng, E-mail: sugengpribadimsc@gmail.com; Afnimar,; Puspito, Nanang T.
This study is to characterize the source mechanism of tsunamigenic earthquake based on seismic wave calculation. The source parameter used are the ratio (Θ) between the radiated seismic energy (E) and seismic moment (M{sub o}), moment magnitude (M{sub W}), rupture duration (T{sub o}) and focal mechanism. These determine the types of tsunamigenic earthquake and tsunami earthquake. We calculate the formula using the teleseismic wave signal processing with the initial phase of P wave with bandpass filter 0.001 Hz to 5 Hz. The amount of station is 84 broadband seismometer with far distance of 30° to 90°. The 2 June 1994more » Banyuwangi earthquake with M{sub W}=7.8 and the 17 July 2006 Pangandaran earthquake with M{sub W}=7.7 include the criteria as a tsunami earthquake which distributed about ratio Θ=−6.1, long rupture duration To>100 s and high tsunami H>7 m. The 2 September 2009 Tasikmalaya earthquake with M{sub W}=7.2, Θ=−5.1 and To=27 s which characterized as a small tsunamigenic earthquake.« less
NASA Astrophysics Data System (ADS)
Rezaeian, P.; Ataenia, V.; Shafiei, S.
2017-12-01
In this paper, the flux of photons inside the irradiation cell of the Gammacell-220 is calculated using an analytical method based on multipole moment expansion. The flux of the photons inside the irradiation cell is introduced as the function of monopole, dipoles and quadruples in the Cartesian coordinate system. For the source distribution of the Gammacell-220, the values of the multipole moments are specified by direct integrating. To confirm the validation of the presented methods, the flux distribution inside the irradiation cell was determined utilizing MCNP simulations as well as experimental measurements. To measure the flux inside the irradiation cell, Amber dosimeters were employed. The calculated values of the flux were in agreement with the values obtained by simulations and measurements, especially in the central zones of the irradiation cell. In order to show that the present method is a good approximation to determine the flux in the irradiation cell, the values of the multipole moments were obtained by fitting the simulation and experimental data using Levenberg-Marquardt algorithm. The present method leads to reasonable results for the all source distribution even without any symmetry which makes it a powerful tool for the source load planning.
NASA Astrophysics Data System (ADS)
Miro, M.; Famiglietti, J. S.
2016-12-01
In California, traditional water management has focused heavily on surface water, leaving many basins in a state of critical overdraft and lacking in established frameworks for groundwater management. However, new groundwater legislation, the 2014 Sustainable Groundwater Management Act (SGMA), presents an important opportunity for water managers and hydrologists to develop novel methods for managing statewide groundwater resources. Integrating scientific advances in groundwater monitoring with hydrologically-sound methods can go a long way in creating a system that can better govern the resource. SGMA mandates that groundwater management agencies employ the concept of sustainable yield as their primary management goal but does not clearly define a method to calculate it. This study will develop a hydrologically-based method to quantify sustainable yield that follows the threshold framework under SGMA. Using this method, sustainable yield will be calculated for two critically-overdrafted groundwater basins in California's Central Valley. This measure will also utilize groundwater monitoring data and downscaled remote sensing estimates of groundwater storage change from NASA's GRACE satellite to illustrate why data matters for successful management. This method can be used as a basis for the development of SGMA's groundwater management plans (GSPs) throughout California.
NASA Astrophysics Data System (ADS)
Dai, Mengyan; Liu, Jianghai; Cui, Jianlin; Chen, Chunsheng; Jia, Peng
2017-10-01
In order to solve the problem of the quantitative test of spectrum and color of aerosol, the measurement method of spectrum of aerosol based on human visual system was proposed. The spectrum characteristics and color parameters of three different aerosols were tested, and the color differences were calculated according to the CIE1976-L*a*b* color difference formula. Three tested powders (No 1# No 2# and No 3# ) were dispersed in a plexglass box and turned into aerosol. The powder sample was released by an injector with different dosages in each experiment. The spectrum and color of aerosol were measured by the PRO 6500 Fiber Optic Spectrometer. The experimental results showed that the extinction performance of aerosol became stronger and stronger with the increase of concentration of aerosol. While the chromaticity value differences of aerosols in the experiment were so small, luminance was verified to be the main influence factor of human eye visual perception and contributed most in the three factors of the color difference calculation. The extinction effect of No 3# aerosol was the strongest of all and caused the biggest change of luminance and color difference which would arouse the strongest human visual perception. According to the sensation level of chromatic color by Chinese, recognition color difference would be produced when the dosage of No 1# powder was more than 0.10 gram, the dosage of No 2# powder was more than 0.15 gram, and the dosage of No 3# powder was more than 0.05 gram.
A project based on multi-configuration Dirac-Fock calculations for plasma spectroscopy
NASA Astrophysics Data System (ADS)
Comet, M.; Pain, J.-C.; Gilleron, F.; Piron, R.
2017-09-01
We present a project dedicated to hot plasma spectroscopy based on a Multi-Configuration Dirac-Fock (MCDF) code, initially developed by J. Bruneau. The code is briefly described and the use of the transition state method for plasma spectroscopy is detailed. Then an opacity code for local-thermodynamic-equilibrium plasmas using MCDF data, named OPAMCDF, is presented. Transition arrays for which the number of lines is too large to be handled in a Detailed Line Accounting (DLA) calculation can be modeled within the Partially Resolved Transition Array method or using the Unresolved Transition Arrays formalism in jj-coupling. An improvement of the original Partially Resolved Transition Array method is presented which gives a better agreement with DLA computations. Comparisons with some absorption and emission experimental spectra are shown. Finally, the capability of the MCDF code to compute atomic data required for collisional-radiative modeling of plasma at non local thermodynamic equilibrium is illustrated. In addition to photoexcitation, this code can be used to calculate photoionization, electron impact excitation and ionization cross-sections as well as autoionization rates in the Distorted-Wave or Close Coupling approximations. Comparisons with cross-sections and rates available in the literature are discussed.
Expanding CyberShake Physics-Based Seismic Hazard Calculations to Central California
NASA Astrophysics Data System (ADS)
Silva, F.; Callaghan, S.; Maechling, P. J.; Goulet, C. A.; Milner, K. R.; Graves, R. W.; Olsen, K. B.; Jordan, T. H.
2016-12-01
As part of its program of earthquake system science, the Southern California Earthquake Center (SCEC) has developed a simulation platform, CyberShake, to perform physics-based probabilistic seismic hazard analysis (PSHA) using 3D deterministic wave propagation simulations. CyberShake performs PSHA by first simulating a tensor-valued wavefield of Strain Green Tensors. CyberShake then takes an earthquake rupture forecast and extends it by varying the hypocenter location and slip distribution, resulting in about 500,000 rupture variations. Seismic reciprocity is used to calculate synthetic seismograms for each rupture variation at each computation site. These seismograms are processed to obtain intensity measures, such as spectral acceleration, which are then combined with probabilities from the earthquake rupture forecast to produce a hazard curve. Hazard curves are calculated at seismic frequencies up to 1 Hz for hundreds of sites in a region and the results interpolated to obtain a hazard map. In developing and verifying CyberShake, we have focused our modeling in the greater Los Angeles region. We are now expanding the hazard calculations into Central California. Using workflow tools running jobs across two large-scale open-science supercomputers, NCSA Blue Waters and OLCF Titan, we calculated 1-Hz PSHA results for over 400 locations in Central California. For each location, we produced hazard curves using both a 3D central California velocity model created via tomographic inversion, and a regionally averaged 1D model. These new results provide low-frequency exceedance probabilities for the rapidly expanding metropolitan areas of Santa Barbara, Bakersfield, and San Luis Obispo, and lend new insights into the effects of directivity-basin coupling associated with basins juxtaposed to major faults such as the San Andreas. Particularly interesting are the basin effects associated with the deep sediments of the southern San Joaquin Valley. We will compare hazard
SU-C-204-03: DFT Calculations of the Stability of DOTA-Based-Radiopharmaceuticals
Khabibullin, A.R.; Woods, L.M.; Karolak, A.
2016-06-15
Purpose: Application of the density function theory (DFT) to investigate the structural stability of complexes applied in cancer therapy consisting of the 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) chelated to Ac225, Fr221, At217, Bi213, and Gd68 radio-nuclei. Methods: The possibility to deliver a toxic payload directly to tumor cells is a highly desirable aim in targeted alpha particle therapy. The estimation of bond stability between radioactive atoms and the DOTA chelating agent is the key element in understanding the foundations of this delivery process. Thus, we adapted the Vienna Ab-initio Simulation Package (VASP) with the projector-augmented wave method and a plane-wave basis setmore » in order to study the stability and electronic properties of DOTA ligand chelated to radioactive isotopes. In order to count for the relativistic effect of radioactive isotopes we included Spin-Orbit Coupling (SOC) in the DFT calculations. Five DOTA complex structures were represented as unit cells, each containing 58 atoms. The energy optimization was performed for all structures prior to calculations of electronic properties. Binding energies, electron localization functions as well as bond lengths between atoms were estimated. Results: Calculated binding energies for DOTA-radioactive atom systems were −17.792, −5.784, −8.872, −13.305, −18.467 eV for Ac, Fr, At, Bi and Gd complexes respectively. The displacements of isotopes in DOTA cages were estimated from the variations in bond lengths, which were within 2.32–3.75 angstroms. The detailed representation of chemical bonding in all complexes was obtained with the Electron Localization Function (ELF). Conclusion: DOTA-Gd, DOTA-Ac and DOTA-Bi were the most stable structures in the group. Inclusion of SOC had a significant role in the improvement of DFT calculation accuracy for heavy radioactive atoms. Our approach is found to be proper for the investigation of structures with DOTA-based
Evaluation of MLACF based calculated attenuation brain PET imaging for FDG patient studies
NASA Astrophysics Data System (ADS)
Bal, Harshali; Panin, Vladimir Y.; Platsch, Guenther; Defrise, Michel; Hayden, Charles; Hutton, Chloe; Serrano, Benjamin; Paulmier, Benoit; Casey, Michael E.
2017-04-01
Calculating attenuation correction for brain PET imaging rather than using CT presents opportunities for low radiation dose applications such as pediatric imaging and serial scans to monitor disease progression. Our goal is to evaluate the iterative time-of-flight based maximum-likelihood activity and attenuation correction factors estimation (MLACF) method for clinical FDG brain PET imaging. FDG PET/CT brain studies were performed in 57 patients using the Biograph mCT (Siemens) four-ring scanner. The time-of-flight PET sinograms were acquired using the standard clinical protocol consisting of a CT scan followed by 10 min of single-bed PET acquisition. Images were reconstructed using CT-based attenuation correction (CTAC) and used as a gold standard for comparison. Two methods were compared with respect to CTAC: a calculated brain attenuation correction (CBAC) and MLACF based PET reconstruction. Plane-by-plane scaling was performed for MLACF images in order to fix the variable axial scaling observed. The noise structure of the MLACF images was different compared to those obtained using CTAC and the reconstruction required a higher number of iterations to obtain comparable image quality. To analyze the pooled data, each dataset was registered to a standard template and standard regions of interest were extracted. An SUVr analysis of the brain regions of interest showed that CBAC and MLACF were each well correlated with CTAC SUVrs. A plane-by-plane error analysis indicated that there were local differences for both CBAC and MLACF images with respect to CTAC. Mean relative error in the standard regions of interest was less than 5% for both methods and the mean absolute relative errors for both methods were similar (3.4% ± 3.1% for CBAC and 3.5% ± 3.1% for MLACF). However, the MLACF method recovered activity adjoining the frontal sinus regions more accurately than CBAC method. The use of plane-by-plane scaling of MLACF images was found to be a
Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel
NASA Astrophysics Data System (ADS)
Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele
2009-12-01
An accurate approach to compute the bit error rate expression for multiuser chaosbased DS-CDMA system is presented in this paper. For more realistic communication system a slow fading multipath channel is considered. A simple RAKE receiver structure is considered. Based on the bit energy distribution, this approach compared to others computation methods existing in literature gives accurate results with low computation charge. Perfect estimation of the channel coefficients with the associated delays and chaos synchronization is assumed. The bit error rate is derived in terms of the bit energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations which point out the accuracy of our approach.
Seismic low-frequency-based calculation of reservoir fluid mobility and its applications
NASA Astrophysics Data System (ADS)
Chen, Xue-Hua; He, Zhen-Hua; Zhu, Si-Xin; Liu, Wei; Zhong, Wen-Li
2012-06-01
Low frequency content of seismic signals contains information related to the reservoir fluid mobility. Based on the asymptotic analysis theory of frequency-dependent reflectivity from a fluid-saturated poroelastic medium, we derive the computational implementation of reservoir fluid mobility and present the determination of optimal frequency in the implementation. We then calculate the reservoir fluid mobility using the optimal frequency instantaneous spectra at the low-frequency end of the seismic spectrum. The methodology is applied to synthetic seismic data from a permeable gas-bearing reservoir model and real land and marine seismic data. The results demonstrate that the fluid mobility shows excellent quality in imaging the gas reservoirs. It is feasible to detect the location and spatial distribution of gas reservoirs and reduce the non-uniqueness and uncertainty in fluid identification.
Ali, S Tahir; Antonov, Liudmil; Fabian, Walter M F
2014-01-30
Tautomerization energies of a series of isomeric [(4-R-phenyl)azo]naphthols and the analogous Schiff bases (R = N(CH3)2, OCH3, H, CN, NO2) are calculated by LPNO-CEPA/1-CBS using the def2-TZVPP and def2-QZVPP basis sets for extrapolation. The performance of various density functionals (B3LYP, M06-2X, PW6B95, B2PLYP, mPW2PLYP, PWPB95) as well as MP2 and SCS-MP2 is evaluated against these results. M06-2X and SCS-MP2 yield results close to the LPNO-CEPA/1-CBS values. Solvent effects (CCl4, CHCl3, CH3CN, and CH3OH) are treated by a variety of bulk solvation models (SM8, IEFPCM, COSMO, PBF, and SMD) as well as explicit solvation (Monte Carlo free energy perturbation using the OPLSAA force field).
Dyekjaer, Jane Dannow; Jónsdóttir, Svava Osk
2004-01-22
Quantitative Structure-Property Relationships (QSPR) have been developed for a series of monosaccharides, including the physical properties of partial molar heat capacity, heat of solution, melting point, heat of fusion, glass-transition temperature, and solid state density. The models were based on molecular descriptors obtained from molecular mechanics and quantum chemical calculations, combined with other types of descriptors. Saccharides exhibit a large degree of conformational flexibility, therefore a methodology for selecting the energetically most favorable conformers has been developed, and was used for the development of the QSPR models. In most cases good correlations were obtained for monosaccharides. For five of the properties predictions were made for disaccharides, and the predicted values for the partial molar heat capacities were in excellent agreement with experimental values.
Cloud-based calculators for fast and reliable access to NOAA's geomagnetic field models
NASA Astrophysics Data System (ADS)
Woods, A.; Nair, M. C.; Boneh, N.; Chulliat, A.
2017-12-01
While the Global Positioning System (GPS) provides accurate point locations, it does not provide pointing directions. Therefore, the absolute directional information provided by the Earth's magnetic field is of primary importance for navigation and for the pointing of technical devices such as aircrafts, satellites and lately, mobile phones. The major magnetic sources that affect compass-based navigation are the Earth's core, its magnetized crust and the electric currents in the ionosphere and magnetosphere. NOAA/CIRES Geomagnetism (ngdc.noaa.gov/geomag/) group develops and distributes models that describe all these important sources to aid navigation. Our geomagnetic models are used in variety of platforms including airplanes, ships, submarines and smartphones. While the magnetic field from Earth's core can be described in relatively fewer parameters and is suitable for offline computation, the magnetic sources from Earth's crust, ionosphere and magnetosphere require either significant computational resources or real-time capabilities and are not suitable for offline calculation. This is especially important for small navigational devices or embedded systems, where computational resources are limited. Recognizing the need for a fast and reliable access to our geomagnetic field models, we developed cloud-based application program interfaces (APIs) for NOAA's ionospheric and magnetospheric magnetic field models. In this paper we will describe the need for reliable magnetic calculators, the challenges faced in running geomagnetic field models in the cloud in real-time and the feedback from our user community. We discuss lessons learned harvesting and validating the data which powers our cloud services, as well as our strategies for maintaining near real-time service, including load-balancing, real-time monitoring, and instance cloning. We will also briefly talk about the progress we achieved on NOAA's Big Earth Data Initiative (BEDI) funded project to develop API
A cultural study of a science classroom and graphing calculator-based technology
NASA Astrophysics Data System (ADS)
Casey, Dennis Alan
Social, political, and technological events of the past two decades have had considerable bearing on science education. While sociological studies of scientists at work have seriously questioned traditional histories of science, national and state educational systemic reform initiatives have been enacted, stressing standards and accountability. Recently, powerful instructional technologies have become part of the landscape of the classroom. One example, graphing calculator-based technology, has found its way from commercial and domestic applications into the pedagogy of science and math education. The purpose of this study was to investigate the culture of an "alternative" science classroom and how it functions with graphing calculator-based technology. Using ethnographic methods, a case study of one secondary, team-taught, Environmental/Physical Science (EPS) classroom was conducted. Nearly half of the 23 students were identified as students with special education needs. Over a four-month period, field data was gathered from written observations, videotaped interactions, audio taped interviews, and document analyses to determine how technology was used and what meaning it had for the participants. Analysis indicated that the technology helped to keep students from getting frustrated with handling data and graphs. In a relatively short period of time, students were able to gather data, produce graphs, and to use inscriptions in meaningful classroom discussions. In addition, teachers used the technology as a means to involve and motivate students to want to learn science. By employing pedagogical skills and by utilizing a technology that might not otherwise be readily available to these students, an environment of appreciation, trust, and respect was fostered. Further, the use of technology by these teachers served to expand students' social capital---the benefits that come from an individual's social contacts, social skills, and social resources.
Calculating Nozzle Side Loads using Acceleration Measurements of Test-Based Models
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ruf, Joe
2007-01-01
As part of a NASA/MSFC research program to evaluate the effect of different nozzle contours on the well-known but poorly characterized "side load" phenomena, we attempt to back out the net force on a sub-scale nozzle during cold-flow testing using acceleration measurements. Because modeling the test facility dynamics is problematic, new techniques for creating a "pseudo-model" of the facility and nozzle directly from modal test results are applied. Extensive verification procedures were undertaken, resulting in a loading scale factor necessary for agreement between test and model based frequency response functions. Side loads are then obtained by applying a wide-band random load onto the system model, obtaining nozzle response PSD's, and iterating both the amplitude and frequency of the input until a good comparison of the response with the measured response PSD for a specific time point is obtained. The final calculated loading can be used to compare different nozzle profiles for assessment during rocket engine nozzle development and as a basis for accurate design of the nozzle and engine structure to withstand these loads. The techniques applied within this procedure have extensive applicability to timely and accurate characterization of all test fixtures used for modal test.A viewgraph presentation on a model-test based pseudo-model used to calculate side loads on rocket engine nozzles is included. The topics include: 1) Side Loads in Rocket Nozzles; 2) Present Side Loads Research at NASA/MSFC; 3) Structural Dynamic Model Generation; 4) Pseudo-Model Generation; 5) Implementation; 6) Calibration of Pseudo-Model Response; 7) Pseudo-Model Response Verification; 8) Inverse Force Determination; 9) Results; and 10) Recent Work.
Hartman, Joshua D; Balaji, Ashwin; Beran, Gregory J O
2017-12-12
Fragment-based methods predict nuclear magnetic resonance (NMR) chemical shielding tensors in molecular crystals with high accuracy and computational efficiency. Such methods typically employ electrostatic embedding to mimic the crystalline environment, and the quality of the results can be sensitive to the embedding treatment. To improve the quality of this embedding environment for fragment-based molecular crystal property calculations, we borrow ideas from the embedded ion method to incorporate self-consistently polarized Madelung field effects. The self-consistent reproduction of the Madelung potential (SCRMP) model developed here constructs an array of point charges that incorporates self-consistent lattice polarization and which reproduces the Madelung potential at all atomic sites involved in the quantum mechanical region of the system. The performance of fragment- and cluster-based 1 H, 13 C, 14 N, and 17 O chemical shift predictions using SCRMP and density functionals like PBE and PBE0 are assessed. The improved embedding model results in substantial improvements in the predicted 17 O chemical shifts and modest improvements in the 15 N ones. Finally, the performance of the model is demonstrated by examining the assignment of the two oxygen chemical shifts in the challenging γ-polymorph of glycine. Overall, the SCRMP-embedded NMR chemical shift predictions are on par with or more accurate than those obtained with the widely used gauge-including projector augmented wave (GIPAW) model.
A Comparison Study of Machine Learning Based Algorithms for Fatigue Crack Growth Calculation.
Wang, Hongxun; Zhang, Weifang; Sun, Fuqiang; Zhang, Wei
2017-05-18
The relationships between the fatigue crack growth rate ( d a / d N ) and stress intensity factor range ( Δ K ) are not always linear even in the Paris region. The stress ratio effects on fatigue crack growth rate are diverse in different materials. However, most existing fatigue crack growth models cannot handle these nonlinearities appropriately. The machine learning method provides a flexible approach to the modeling of fatigue crack growth because of its excellent nonlinear approximation and multivariable learning ability. In this paper, a fatigue crack growth calculation method is proposed based on three different machine learning algorithms (MLAs): extreme learning machine (ELM), radial basis function network (RBFN) and genetic algorithms optimized back propagation network (GABP). The MLA based method is validated using testing data of different materials. The three MLAs are compared with each other as well as the classical two-parameter model ( K * approach). The results show that the predictions of MLAs are superior to those of K * approach in accuracy and effectiveness, and the ELM based algorithms show overall the best agreement with the experimental data out of the three MLAs, for its global optimization and extrapolation ability.
GTV-based prescription in SBRT for lung lesions using advanced dose calculation algorithms.
Lacornerie, Thomas; Lisbona, Albert; Mirabel, Xavier; Lartigau, Eric; Reynaert, Nick
2014-10-16
The aim of current study was to investigate the way dose is prescribed to lung lesions during SBRT using advanced dose calculation algorithms that take into account electron transport (type B algorithms). As type A algorithms do not take into account secondary electron transport, they overestimate the dose to lung lesions. Type B algorithms are more accurate but still no consensus is reached regarding dose prescription. The positive clinical results obtained using type A algorithms should be used as a starting point. In current work a dose-calculation experiment is performed, presenting different prescription methods. Three cases with three different sizes of peripheral lung lesions were planned using three different treatment platforms. For each individual case 60 Gy to the PTV was prescribed using a type A algorithm and the dose distribution was recalculated using a type B algorithm in order to evaluate the impact of the secondary electron transport. Secondly, for each case a type B algorithm was used to prescribe 48 Gy to the PTV, and the resulting doses to the GTV were analyzed. Finally, prescriptions based on specific GTV dose volumes were evaluated. When using a type A algorithm to prescribe the same dose to the PTV, the differences regarding median GTV doses among platforms and cases were always less than 10% of the prescription dose. The prescription to the PTV based on type B algorithms, leads to a more important variability of the median GTV dose among cases and among platforms, (respectively 24%, and 28%). However, when 54 Gy was prescribed as median GTV dose, using a type B algorithm, the variability observed was minimal. Normalizing the prescription dose to the median GTV dose for lung lesions avoids variability among different cases and treatment platforms of SBRT when type B algorithms are used to calculate the dose. The combination of using a type A algorithm to optimize a homogeneous dose in the PTV and using a type B algorithm to prescribe the
Kim, Myoung Soo; Park, Jung Ha; Park, Kyung Yeon
2012-10-01
This study was done to develop and evaluate a drug dosage calculation training program using cognitive loading theory based on a smartphone application. Calculation ability, dosage calculation related self-efficacy and anxiety were measured. A nonequivalent control group design was used. Smartphone application and a handout for self-study were developed and administered to the experimental group and only a handout was provided for control group. Intervention period was 4 weeks. Data were analyzed using descriptive analysis, χ²-test, t-test, and ANCOVA with the SPSS 18.0. The experimental group showed more 'self-efficacy for drug dosage calculation' than the control group (t=3.82, p<.001). Experimental group students had higher ability to perform drug dosage calculations than control group students (t=3.98, p<.001), with regard to 'metric conversion' (t=2.25, p=.027), 'table dosage calculation' (t=2.20, p=.031) and 'drop rate calculation' (t=4.60, p<.001). There was no difference in improvement in 'anxiety for drug dosage calculation'. Mean satisfaction score for the program was 86.1. These results indicate that this drug dosage calculation training program using smartphone application is effective in improving dosage calculation related self-efficacy and calculation ability. Further study should be done to develop additional interventions for reducing anxiety.
Traumatic Brain Injury History is Associated with Earlier Age of Onset of Alzheimer Disease
LoBue, Christian; Wadsworth, Hannah; Wilmoth, Kristin; Clem, Matthew; Hart, John; Womack, Kyle B.; Didehbani, Nyaz; Lacritz, Laura H.; Rossetti, Heidi C.; Cullum, C. Munro
2016-01-01
Objective This study examined whether a history of traumatic brain injury (TBI) is associated with earlier onset of Alzheimer disease (AD), independent of apolipoprotein ε4 status (Apoe4) and gender. Method Participants with a clinical diagnosis of AD (n=7625) were obtained from the National Alzheimer’s Coordinating Center Uniform Data Set, and categorized based on self-reported lifetime TBI with loss of consciousness (LOC) (TBI+ vs TBI-) and presence of Apoe4. ANCOVAs, controlling for gender, race, and education were used to examine the association between history of TBI, presence of Apoe4, and an interaction of both risk factors on estimated age of AD onset. Results Estimated AD onset differed by TBI history and Apoe4 independently (p’s <.001). The TBI+ group had a mean age of onset 2.5 years earlier than the TBI- group. Likewise, Apoe4 carriers had a mean age of onset 2.3 years earlier than non-carriers. While the interaction was non-significant (p = .34), participants having both a history of TBI and Apoe4 had the earliest mean age of onset compared to those with a TBI history or Apoe4 alone (MDifference = 2.8 & 2.7 years, respectively). These results remained unchanged when stratified by gender. Conclusions History of self-reported TBI can be associated with an earlier onset of AD-related cognitive decline, regardless of Apoe4 status and gender. TBI may be related to an underlying neurodegenerative process in AD, but the implications of age at time of injury, severity, and repetitive injuries remain unclear. PMID:27855547
Perceptual sensitivity to spectral properties of earlier sounds during speech categorization.
Stilp, Christian E; Assgari, Ashley A
2018-02-28
Speech perception is heavily influenced by surrounding sounds. When spectral properties differ between earlier (context) and later (target) sounds, this can produce spectral contrast effects (SCEs) that bias perception of later sounds. For example, when context sounds have more energy in low-F 1 frequency regions, listeners report more high-F 1 responses to a target vowel, and vice versa. SCEs have been reported using various approaches for a wide range of stimuli, but most often, large spectral peaks were added to the context to bias speech categorization. This obscures the lower limit of perceptual sensitivity to spectral properties of earlier sounds, i.e., when SCEs begin to bias speech categorization. Listeners categorized vowels (/ɪ/-/ɛ/, Experiment 1) or consonants (/d/-/g/, Experiment 2) following a context sentence with little spectral amplification (+1 to +4 dB) in frequency regions known to produce SCEs. In both experiments, +3 and +4 dB amplification in key frequency regions of the context produced SCEs, but lesser amplification was insufficient to bias performance. This establishes a lower limit of perceptual sensitivity where spectral differences across sounds can bias subsequent speech categorization. These results are consistent with proposed adaptation-based mechanisms that potentially underlie SCEs in auditory perception. Recent sounds can change what speech sounds we hear later. This can occur when the average frequency composition of earlier sounds differs from that of later sounds, biasing how they are perceived. These "spectral contrast effects" are widely observed when sounds' frequency compositions differ substantially. We reveal the lower limit of these effects, as +3 dB amplification of key frequency regions in earlier sounds was enough to bias categorization of the following vowel or consonant sound. Speech categorization being biased by very small spectral differences across sounds suggests that spectral contrast effects occur
A suggested periodic table up to Z≤ 172, based on Dirac-Fock calculations on atoms and ions.
Pyykkö, Pekka
2011-01-07
Extended Average Level (EAL) Dirac-Fock calculations on atoms and ions agree with earlier work in that a rough shell-filling order for the elements 119-172 is 8s < 5g≤ 8p(1/2) < 6f < 7d < 9s < 9p(1/2) < 8p(3/2). The present Periodic Table develops further that of Fricke, Greiner and Waber [Theor. Chim. Acta 1971, 21, 235] by formally assigning the elements 121-164 to (nlj) slots on the basis of the electron configurations of their ions. Simple estimates are made for likely maximum oxidation states, i, of these elements M in their MX(i) compounds, such as i = 6 for UF(6). Particularly high i are predicted for the 6f elements.
Ionescu, Crina-Maria; Sehnal, David; Falginella, Francesco L; Pant, Purbaj; Pravda, Lukáš; Bouchal, Tomáš; Svobodová Vařeková, Radka; Geidl, Stanislav; Koča, Jaroslav
2015-01-01
Partial atomic charges are a well-established concept, useful in understanding and modeling the chemical behavior of molecules, from simple compounds, to large biomolecular complexes with many reactive sites. This paper introduces AtomicChargeCalculator (ACC), a web-based application for the calculation and analysis of atomic charges which respond to changes in molecular conformation and chemical environment. ACC relies on an empirical method to rapidly compute atomic charges with accuracy comparable to quantum mechanical approaches. Due to its efficient implementation, ACC can handle any type of molecular system, regardless of size and chemical complexity, from drug-like molecules to biomacromolecular complexes with hundreds of thousands of atoms. ACC writes out atomic charges into common molecular structure files, and offers interactive facilities for statistical analysis and comparison of the results, in both tabular and graphical form. Due to high customizability and speed, easy streamlining and the unified platform for calculation and analysis, ACC caters to all fields of life sciences, from drug design to nanocarriers. ACC is freely available via the Internet at http://ncbr.muni.cz/ACC.
New evidence: data documenting parental support for earlier sexuality education.
Barr, Elissa M; Moore, Michele J; Johnson, Tammie; Forrest, Jamie; Jordan, Melissa
2014-01-01
Numerous studies document support for sexuality education to be taught in high school, and often, in middle school. However, little research has been conducted addressing support for sexuality education in elementary schools. As part of the state Behavioral Risk Factor Surveillance System (BRFSS) Survey administration, the Florida Department of Health conducted the Florida Child Health Survey (FCHS) by calling back parents who had children in their home and who agreed to participate (N = 1715). Most parents supported the following sexuality education topics being taught specifically in elementary school: communication skills (89%), human anatomy/reproductive information (65%), abstinence (61%), human immunodeficiency virus (HIV)/sexually transmitted infections (STIs) (53%), and gender/sexual orientation issues (52%). Support was even greater in middle school (62-91%) and high school (72-91%) for these topics and for birth control and condom education. Most parents supported comprehensive sexuality education (40.4%), followed by abstinence-plus (36.4%) and abstinence-only (23.2%). Chi-square results showed significant differences in the type of sexuality education supported by almost all parent demographic variables analyzed including sex, race, marital status, and education. Results add substantial support for age-appropriate school-based sexuality education starting at the elementary school level, the new National Sexuality Education Standards, and funding to support evidence-based abstinence-plus or comprehensive sexuality education. © 2013, American School Health Association.
Calculation of the figure of merit for carbon nanotubes based devices
NASA Astrophysics Data System (ADS)
Vaseashta, Ashok
2004-03-01
single electron transistors, Luttinger-liquid behavior, the Aharonov Bohm effect, and Fabry-Perot interference effects. Hence it is evident that CNT can be used for a variety of applications. To use CNT based devices, it is critical to know the relative advantage of using CNTs over other known electronic materials. The figure of merit for CNT based devices is not reported so far. It is the objective of this investigation to calculate the figure of merit and present such results. Such calculations will enable researchers to focus their research for specific device designs where CNT based devices show a marked improvement over conventional semiconductor devices.
Grid-Based Surface Generalized Born Model for Calculation of Electrostatic Binding Free Energies.
Forouzesh, Negin; Izadi, Saeed; Onufriev, Alexey V
2017-10-23
Fast and accurate calculation of solvation free energies is central to many applications, such as rational drug design. In this study, we present a grid-based molecular surface implementation of "R6" flavor of the generalized Born (GB) implicit solvent model, named GBNSR6. The speed, accuracy relative to numerical Poisson-Boltzmann treatment, and sensitivity to grid surface parameters are tested on a set of 15 small protein-ligand complexes and a set of biomolecules in the range of 268 to 25099 atoms. Our results demonstrate that the proposed model provides a relatively successful compromise between the speed and accuracy of computing polar components of the solvation free energies (ΔG pol ) and binding free energies (ΔΔG pol ). The model tolerates a relatively coarse grid size h = 0.5 Å, where the grid artifact error in computing ΔΔG pol remains in the range of k B T ∼ 0.6 kcal/mol. The estimated ΔΔG pol s are well correlated (r 2 = 0.97) with the numerical Poisson-Boltzmann reference, while showing virtually no systematic bias and RMSE = 1.43 kcal/mol. The grid-based GBNSR6 model is available in Amber (AmberTools) package of molecular simulation programs.
Using 3d Bim Model for the Value-Based Land Share Calculations
NASA Astrophysics Data System (ADS)
Çelik Şimşek, N.; Uzun, B.
2017-11-01
According to the Turkish condominium ownership system, 3D physical buildings and its condominium units are registered to the condominium ownership books via 2D survey plans. Currently, 2D representations of the 3D physical objects, causes inaccurate and deficient implementations for the determination of the land shares. Condominium ownership and easement right are established with a clear indication of land shares (condominium ownership law, article no. 3). So, the land share of each condominium unit have to be determined including the value differences among the condominium units. However the main problem is that, land share has often been determined with area based over the project before construction of the building. The objective of this study is proposing a new approach in terms of value-based land share calculations of the condominium units that subject to condominium ownership. So, the current approaches and its failure that have taken into account in determining the land shares are examined. And factors that affect the values of the condominium units are determined according to the legal decisions. This study shows that 3D BIM models can provide important approaches for the valuation problems in the determination of the land shares.
A Physical Based Formula for Calculating the Critical Stress of Snow Movement
NASA Astrophysics Data System (ADS)
He, S.; Ohara, N.
2016-12-01
In snow redistribution modeling, one of the most important parameters is the critical stress of snow movement, which is difficult to estimate from field data because it is influenced by various factors. In this study, a new formula for calculating critical stress of snow movement was derived based on the ice particle sintering process modeling and the moment balance of a snow particle. Through this formula, the influences of snow particle size, air temperature, and deposited time on the critical stress were explicitly taken into consideration. It was found that some of the model parameters were sensitive to the critical stress estimation through the sensitivity analysis using Sobol's method. The two sensitive parameters of the sintering process modeling were determined by a calibration-validation procedure using the observed snow flux data via FlowCapt. Based on the snow flux and metrological data observed at the ISAW stations (http://www.iav.ch), it was shown that the results of this formula were able to describe very well the evolution of the minimum friction wind speed required for the snow motion. This new formula suggested that when the snow just reaches the surface, the smaller snowflake can move easier than the larger particles. However, smaller snow particles require more force to move as the sintering between the snowflakes progresses. This implied that compact snow with small snow particles may be harder to erode by wind although smaller particles may have a higher chance to be suspended once they take off.
Giuseppe Palmiotti
In this work, the implementation of a collision history-based approach to sensitivity/perturbation calculations in the Monte Carlo code SERPENT is discussed. The proposed methods allow the calculation of the eects of nuclear data perturbation on several response functions: the eective multiplication factor, reaction rate ratios and bilinear ratios (e.g., eective kinetics parameters). SERPENT results are compared to ERANOS and TSUNAMI Generalized Perturbation Theory calculations for two fast metallic systems and for a PWR pin-cell benchmark. New methods for the calculation of sensitivities to angular scattering distributions are also presented, which adopts fully continuous (in energy and angle) Monte Carlo estimators.
He, Ling; Jia, Qi-jian; Li, Chao; Xu, Hao
2016-01-01
The rapid development of coastal economy in Hebei Province caused rapid transition of coastal land use structure, which has threatened land ecological security. Therefore, calculating ecosystem service value of land use and exploring ecological security baseline can provide the basis for regional ecological protection and rehabilitation. Taking Huanghua, a city in the southeast of Hebei Province, as an example, this study explored the joint point, joint path and joint method between ecological security and food security, and then calculated the ecological security baseline of Huanghua City based on the ecosystem service value and the food safety standard. The results showed that ecosystem service value of per unit area from maximum to minimum were in this order: wetland, water, garden, cultivated land, meadow, other land, salt pans, saline and alkaline land, constructive land. The order of contribution rates of each ecological function value from high to low was nutrient recycling, water conservation, entertainment and culture, material production, biodiversity maintenance, gas regulation, climate regulation and environmental purification. The security baseline of grain production was 0.21 kg · m⁻², the security baseline of grain output value was 0.41 yuan · m⁻², the baseline of ecosystem service value was 21.58 yuan · m⁻², and the total of ecosystem service value in the research area was 4.244 billion yuan. In 2081 the ecological security will reach the bottom line and the ecological system, in which human is the subject, will be on the verge of collapse. According to the ecological security status, Huanghua can be divided into 4 zones, i.e., ecological core protection zone, ecological buffer zone, ecological restoration zone and human activity core zone.
Assimilating the Future for Better Forecasts and Earlier Warnings
NASA Astrophysics Data System (ADS)
Du, H.; Wheatcroft, E.; Smith, L. A.
2016-12-01
Multi-model ensembles have become popular tools to account for some of the uncertainty due to model inadequacy in weather and climate simulation-based predictions. The current multi-model forecasts focus on combining single model ensemble forecasts by means of statistical post-processing. Assuming each model is developed independently or with different primary target variables, each is likely to contain different dynamical strengths and weaknesses. Using statistical post-processing, such information is only carried by the simulations under a single model ensemble: no advantage is taken to influence simulations under the other models. A novel methodology, named Multi-model Cross Pollination in Time, is proposed for multi-model ensemble scheme with the aim of integrating the dynamical information regarding the future from each individual model operationally. The proposed approach generates model states in time via applying data assimilation scheme(s) to yield truly "multi-model trajectories". It is demonstrated to outperform traditional statistical post-processing in the 40-dimensional Lorenz96 flow. Data assimilation approaches are originally designed to improve state estimation from the past to the current time. The aim of this talk is to introduce a framework that uses data assimilation to improve model forecasts at future time (not to argue for any one particular data assimilation scheme). Illustration of applying data assimilation "in the future" to provide early warning of future high-impact events is also presented.
Quantification of residual dose estimation error on log file-based patient dose calculation.
Katsuta, Yoshiyuki; Kadoya, Noriyuki; Fujita, Yukio; Shimizu, Eiji; Matsunaga, Kenichi; Matsushita, Haruo; Majima, Kazuhiro; Jingu, Keiichi
2016-05-01
The log file-based patient dose estimation includes a residual dose estimation error caused by leaf miscalibration, which cannot be reflected on the estimated dose. The purpose of this study is to determine this residual dose estimation error. Modified log files for seven head-and-neck and prostate volumetric modulated arc therapy (VMAT) plans simulating leaf miscalibration were generated by shifting both leaf banks (systematic leaf gap errors: ±2.0, ±1.0, and ±0.5mm in opposite directions and systematic leaf shifts: ±1.0mm in the same direction) using MATLAB-based (MathWorks, Natick, MA) in-house software. The generated modified and non-modified log files were imported back into the treatment planning system and recalculated. Subsequently, the generalized equivalent uniform dose (gEUD) was quantified for the definition of the planning target volume (PTV) and organs at risks. For MLC leaves calibrated within ±0.5mm, the quantified residual dose estimation errors that obtained from the slope of the linear regression of gEUD changes between non- and modified log file doses per leaf gap are in head-and-neck plans 1.32±0.27% and 0.82±0.17Gy for PTV and spinal cord, respectively, and in prostate plans 1.22±0.36%, 0.95±0.14Gy, and 0.45±0.08Gy for PTV, rectum, and bladder, respectively. In this work, we determine the residual dose estimation errors for VMAT delivery using the log file-based patient dose calculation according to the MLC calibration accuracy. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ma, J.; Liu, Q.
2018-02-01
This paper presents an improved short circuit calculation method, based on pre-computed surface to determine the short circuit current of a distribution system with multiple doubly fed induction generators (DFIGs). The short circuit current, injected into power grid by DFIG, is determined by low voltage ride through (LVRT) control and protection under grid fault. However, the existing methods are difficult to calculate the short circuit current of DFIG in engineering practice due to its complexity. A short circuit calculation method, based on pre-computed surface, was proposed by developing the surface of short circuit current changing with the calculating impedance and the open circuit voltage. And the short circuit currents were derived by taking into account the rotor excitation and crowbar activation time. Finally, the pre-computed surfaces of short circuit current at different time were established, and the procedure of DFIG short circuit calculation considering its LVRT was designed. The correctness of proposed method was verified by simulation.
NASA Astrophysics Data System (ADS)
Wang, Hongliang; Liu, Baohua; Ding, Zhongjun; Wang, Xiangxin
2017-02-01
Absorption-based optical sensors have been developed for the determination of water pH. In this paper, based on the preparation of a transparent sol-gel thin film with a phenol red (PR) indicator, several calculation methods, including simple linear regression analysis, quadratic regression analysis and dual-wavelength absorbance ratio analysis, were used to calculate water pH. Results of MSSRR show that dual-wavelength absorbance ratio analysis can improve the calculation accuracy of water pH in long-term measurement.
SU-E-T-538: Evaluation of IMRT Dose Calculation Based on Pencil-Beam and AAA Algorithms.
Yuan, Y; Duan, J; Popple, R; Brezovich, I
2012-06-01
To evaluate the accuracy of dose calculation for intensity modulated radiation therapy (IMRT) based on Pencil Beam (PB) and Analytical Anisotropic Algorithm (AAA) computation algorithms. IMRT plans of twelve patients with different treatment sites, including head/neck, lung and pelvis, were investigated. For each patient, dose calculation with PB and AAA algorithms using dose grid sizes of 0.5 mm, 0.25 mm, and 0.125 mm, were compared with composite-beam ion chamber and film measurements in patient specific QA. Discrepancies between the calculation and the measurement were evaluated by percentage error for ion chamber dose and γ〉l failure rate in gamma analysis (3%/3mm) for film dosimetry. For 9 patients, ion chamber dose calculated with AAA-algorithms is closer to ion chamber measurement than that calculated with PB algorithm with grid size of 2.5 mm, though all calculated ion chamber doses are within 3% of the measurements. For head/neck patients and other patients with large treatment volumes, γ〉l failure rate is significantly reduced (within 5%) with AAA-based treatment planning compared to generally more than 10% with PB-based treatment planning (grid size=2.5 mm). For lung and brain cancer patients with medium and small treatment volumes, γ〉l failure rates are typically within 5% for both AAA and PB-based treatment planning (grid size=2.5 mm). For both PB and AAA-based treatment planning, improvements of dose calculation accuracy with finer dose grids were observed in film dosimetry of 11 patients and in ion chamber measurements for 3 patients. AAA-based treatment planning provides more accurate dose calculation for head/neck patients and other patients with large treatment volumes. Compared with film dosimetry, a γ〉l failure rate within 5% can be achieved for AAA-based treatment planning. © 2012 American Association of Physicists in Medicine.
Lesperance, Marielle; Inglis-Whalen, M.; Thomson, R. M., E-mail: rthomson@physics.carleton.ca
Purpose : To investigate the effects of the composition and geometry of ocular media and tissues surrounding the eye on dose distributions for COMS eye plaque brachytherapy with{sup 125}I, {sup 103}Pd, or {sup 131}Cs seeds, and to investigate doses to ocular structures. Methods : An anatomically and compositionally realistic voxelized eye model with a medial tumor is developed based on a literature review. Mass energy absorption and attenuation coefficients for ocular media are calculated. Radiation transport and dose deposition are simulated using the EGSnrc Monte Carlo user-code BrachyDose for a fully loaded COMS eye plaque within a water phantom andmore » our full eye model for the three radionuclides. A TG-43 simulation with the same seed configuration in a water phantom neglecting the plaque and interseed effects is also performed. The impact on dose distributions of varying tumor position, as well as tumor and surrounding tissue media is investigated. Each simulation and radionuclide is compared using isodose contours, dose volume histograms for the lens and tumor, maximum, minimum, and average doses to structures of interest, and doses to voxels of interest within the eye. Results : Mass energy absorption and attenuation coefficients of the ocular media differ from those of water by as much as 12% within the 20–30 keV photon energy range. For all radionuclides studied, average doses to the tumor and lens regions in the full eye model differ from those for the plaque in water by 8%–10% and 13%–14%, respectively; the average doses to the tumor and lens regions differ between the full eye model and the TG-43 simulation by 2%–17% and 29%–34%, respectively. Replacing the surrounding tissues in the eye model with water increases the maximum and average doses to the lens by 2% and 3%, respectively. Substituting the tumor medium in the eye model for water, soft tissue, or an alternate melanoma composition affects tumor dose compared to the default eye
Optimization of the Multi-Spectral Euclidean Distance Calculation for FPGA-based Spaceborne Systems
NASA Technical Reports Server (NTRS)
Cristo, Alejandro; Fisher, Kevin; Perez, Rosa M.; Martinez, Pablo; Gualtieri, Anthony J.
2012-01-01
Due to the high quantity of operations that spaceborne processing systems must carry out in space, new methodologies and techniques are being presented as good alternatives in order to free the main processor from work and improve the overall performance. These include the development of ancillary dedicated hardware circuits that carry out the more redundant and computationally expensive operations in a faster way, leaving the main processor free to carry out other tasks while waiting for the result. One of these devices is SpaceCube, a FPGA-based system designed by NASA. The opportunity to use FPGA reconfigurable architectures in space allows not only the optimization of the mission operations with hardware-level solutions, but also the ability to create new and improved versions of the circuits, including error corrections, once the satellite is already in orbit. In this work, we propose the optimization of a common operation in remote sensing: the Multi-Spectral Euclidean Distance calculation. For that, two different hardware architectures have been designed and implemented in a Xilinx Virtex-5 FPGA, the same model of FPGAs used by SpaceCube. Previous results have shown that the communications between the embedded processor and the circuit create a bottleneck that affects the overall performance in a negative way. In order to avoid this, advanced methods including memory sharing, Native Port Interface (NPI) connections and Data Burst Transfers have been used.
GPAW - massively parallel electronic structure calculations with Python-based software.
Enkovaara, J.; Romero, N.; Shende, S.
2011-01-01
Electronic structure calculations are a widely used tool in materials science and large consumer of supercomputing resources. Traditionally, the software packages for these kind of simulations have been implemented in compiled languages, where Fortran in its different versions has been the most popular choice. While dynamic, interpreted languages, such as Python, can increase the effciency of programmer, they cannot compete directly with the raw performance of compiled languages. However, by using an interpreted language together with a compiled language, it is possible to have most of the productivity enhancing features together with a good numerical performance. We have used thismore » approach in implementing an electronic structure simulation software GPAW using the combination of Python and C programming languages. While the chosen approach works well in standard workstations and Unix environments, massively parallel supercomputing systems can present some challenges in porting, debugging and profiling the software. In this paper we describe some details of the implementation and discuss the advantages and challenges of the combined Python/C approach. We show that despite the challenges it is possible to obtain good numerical performance and good parallel scalability with Python based software.« less
Do theoretical calculations really predict nodes in Fe-based superconductors?
NASA Astrophysics Data System (ADS)
Mazin, Igor
2011-03-01
It is well established that calculations based on the LDA band structure and the Hubbard model, with the parameters U ~ 1.3 - 1.6 eV, and J ~ 0.2 - 0.3 J (a ``UJ'' model), yield strongly anisotropic, and sometimes nodal gaps. The physical origin of this effect is well understood: the two leading terms in the model are ∑Uni ↑ni ↓ and ∑ ' Uninj . The former ensures that the coupling to spin fluctuations proceeds only through the like orbitals, and the latter, not being renormalized by the standard Tolmachev-Morel-Anderson logarithm, tends to equalize the positive and the negative order parameters. Both these features are suspect on a general physics basis: the leading magnetic interaction in itinerant systems is the Hund-rule coupling, which couples every orbital with all the others, and the pnictides, with the order parameter less than 20 meV, should have nearly as strong renormalization of the Coulomb pseudopotential as the conventional superconductors. I will argue that, instead of the UJ model, in pnictides one should use the ``I'' model, derived from the density functional theory (which is supposed to describe the static susceptibility on the mean field level very accurately). The ``I'' here is simply the Stoner factor, the second variation of the LSDA magnetic energy. Unfortunately, this approach is very unlikely to produce gap nodes as easily as the UJ model, indicating that one has to look elsewhere for the nodes origin.
Fission yield calculation using toy model based on Monte Carlo simulation
Jubaidah, E-mail: jubaidah@student.itb.ac.id; Physics Department, Faculty of Mathematics and Natural Science – State University of Medan. Jl. Willem Iskandar Pasar V Medan Estate – North Sumatera, Indonesia 20221; Kurniadi, Rizal, E-mail: rijalk@fi.itb.ac.id
2015-09-30
Toy model is a new approximation in predicting fission yield distribution. Toy model assumes nucleus as an elastic toy consist of marbles. The number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate the real nucleus properties. In this research, the toy nucleons are only influenced by central force. A heavy toy nucleus induced by a toy nucleon will be split into two fragments. These two fission fragments are called fission yield. In this research, energy entanglement is neglected. Fission process in toy model is illustrated by two Gaussian curves intersecting each other. Theremore » are five Gaussian parameters used in this research. They are scission point of the two curves (R{sub c}), mean of left curve (μ{sub L}) and mean of right curve (μ{sub R}), deviation of left curve (σ{sub L}) and deviation of right curve (σ{sub R}). The fission yields distribution is analyses based on Monte Carlo simulation. The result shows that variation in σ or µ can significanly move the average frequency of asymmetry fission yields. This also varies the range of fission yields distribution probability. In addition, variation in iteration coefficient only change the frequency of fission yields. Monte Carlo simulation for fission yield calculation using toy model successfully indicates the same tendency with experiment results, where average of light fission yield is in the range of 90« less
Fission yield calculation using toy model based on Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Jubaidah, Kurniadi, Rizal
2015-09-01
Toy model is a new approximation in predicting fission yield distribution. Toy model assumes nucleus as an elastic toy consist of marbles. The number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate the real nucleus properties. In this research, the toy nucleons are only influenced by central force. A heavy toy nucleus induced by a toy nucleon will be split into two fragments. These two fission fragments are called fission yield. In this research, energy entanglement is neglected. Fission process in toy model is illustrated by two Gaussian curves intersecting each other. There are five Gaussian parameters used in this research. They are scission point of the two curves (Rc), mean of left curve (μL) and mean of right curve (μR), deviation of left curve (σL) and deviation of right curve (σR). The fission yields distribution is analyses based on Monte Carlo simulation. The result shows that variation in σ or µ can significanly move the average frequency of asymmetry fission yields. This also varies the range of fission yields distribution probability. In addition, variation in iteration coefficient only change the frequency of fission yields. Monte Carlo simulation for fission yield calculation using toy model successfully indicates the same tendency with experiment results, where average of light fission yield is in the range of 90
Auxiliary-field-based trial wave functions in quantum Monte Carlo calculations
Chang, Chia -Chen; Rubenstein, Brenda M.; Morales, Miguel A.
2016-12-19
Quantum Monte Carlo (QMC) algorithms have long relied on Jastrow factors to incorporate dynamic correlation into trial wave functions. While Jastrow-type wave functions have been widely employed in real-space algorithms, they have seen limited use in second-quantized QMC methods, particularly in projection methods that involve a stochastic evolution of the wave function in imaginary time. Here we propose a scheme for generating Jastrow-type correlated trial wave functions for auxiliary-field QMC methods. The method is based on decoupling the two-body Jastrow into one-body projectors coupled to auxiliary fields, which then operate on a single determinant to produce a multideterminant trial wavemore » function. We demonstrate that intelligent sampling of the most significant determinants in this expansion can produce compact trial wave functions that reduce errors in the calculated energies. Lastly, our technique may be readily generalized to accommodate a wide range of two-body Jastrow factors and applied to a variety of model and chemical systems.« less
Structure reconstruction of TiO2-based multi-wall nanotubes: first-principles calculations.
Bandura, A V; Evarestov, R A; Lukyanov, S I
2014-07-28
A new method of theoretical modelling of polyhedral single-walled nanotubes based on the consolidation of walls in the rolled-up multi-walled nanotubes is proposed. Molecular mechanics and ab initio quantum mechanics methods are applied to investigate the merging of walls in nanotubes constructed from the different phases of titania. The combination of two methods allows us to simulate the structures which are difficult to find only by ab initio calculations. For nanotube folding we have used (1) the 3-plane fluorite TiO2 layer; (2) the anatase (101) 6-plane layer; (3) the rutile (110) 6-plane layer; and (4) the 6-plane layer with lepidocrocite morphology. The symmetry of the resulting single-walled nanotubes is significantly lower than the symmetry of initial coaxial cylindrical double- or triple-walled nanotubes. These merged nanotubes acquire higher stability in comparison with the initial multi-walled nanotubes. The wall thickness of the merged nanotubes exceeds 1 nm and approaches the corresponding parameter of the experimental patterns. The present investigation demonstrates that the merged nanotubes can integrate the two different crystalline phases in one and the same wall structure.
Zabaleta, Haritz; Valencia, David; Perry, Joel; Veneman, Jan; Keller, Thierry
2011-01-01
ArmAssist is a wireless robot for post stroke upper limb rehabilitation. Knowing the position of the arm is essential for any rehabilitation device. In this paper, we describe a method based on an artificial landmark navigation system. The navigation system uses three optical mouse sensors. This enables the building of a cheap but reliable position sensor. Two of the sensors are the data source for odometry calculations, and the third optical mouse sensor takes very low resolution pictures of a custom designed mat. These pictures are processed by an optical symbol recognition algorithm which will estimate the orientation of the robot and recognize the landmarks placed on the mat. The data fusion strategy is described to detect the misclassifications of the landmarks in order to fuse only reliable information. The orientation given by the optical symbol recognition (OSR) algorithm is used to improve significantly the odometry and the recognition of the landmarks is used to reference the odometry to a absolute coordinate system. The system was tested using a 3D motion capture system. With the actual mat configuration, in a field of motion of 710 × 450 mm, the maximum error in position estimation was 49.61 mm with an average error of 36.70 ± 22.50 mm. The average test duration was 36.5 seconds and the average path length was 4173 mm.
Joint kinematic calculation based on clinical direct kinematic versus inverse kinematic gait models.
Kainz, H; Modenese, L; Lloyd, D G; Maine, S; Walsh, H P J; Carty, C P
2016-06-14
Most clinical gait laboratories use the conventional gait analysis model. This model uses a computational method called Direct Kinematics (DK) to calculate joint kinematics. In contrast, musculoskeletal modelling approaches use Inverse Kinematics (IK) to obtain joint angles. IK allows additional analysis (e.g. muscle-tendon length estimates), which may provide valuable information for clinical decision-making in people with movement disorders. The twofold aims of the current study were: (1) to compare joint kinematics obtained by a clinical DK model (Vicon Plug-in-Gait) with those produced by a widely used IK model (available with the OpenSim distribution), and (2) to evaluate the difference in joint kinematics that can be solely attributed to the different computational methods (DK versus IK), anatomical models and marker sets by using MRI based models. Eight children with cerebral palsy were recruited and presented for gait and MRI data collection sessions. Differences in joint kinematics up to 13° were found between the Plug-in-Gait and the gait 2392 OpenSim model. The majority of these differences (94.4%) were attributed to differences in the anatomical models, which included different anatomical segment frames and joint constraints. Different computational methods (DK versus IK) were responsible for only 2.7% of the differences. We recommend using the same anatomical model for kinematic and musculoskeletal analysis to ensure consistency between the obtained joint angles and musculoskeletal estimates. Copyright © 2016 Elsevier Ltd. All rights reserved.
Lopes, Antonio Augusto; dos Anjos Miranda, Rogério; Gonçalves, Rilvani Cavalcante; Thomaz, Ana Maria
2009-01-01
BACKGROUND: In patients with congenital heart disease undergoing cardiac catheterization for hemodynamic purposes, parameter estimation by the indirect Fick method using a single predicted value of oxygen consumption has been a matter of criticism. OBJECTIVE: We developed a computer-based routine for rapid estimation of replicate hemodynamic parameters using multiple predicted values of oxygen consumption. MATERIALS AND METHODS: Using Microsoft® Excel facilities, we constructed a matrix containing 5 models (equations) for prediction of oxygen consumption, and all additional formulas needed to obtain replicate estimates of hemodynamic parameters. RESULTS: By entering data from 65 patients with ventricular septal defects, aged 1 month to 8 years, it was possible to obtain multiple predictions for oxygen consumption, with clear between-age groups (P <.001) and between-methods (P <.001) differences. Using these predictions in the individual patient, it was possible to obtain the upper and lower limits of a likely range for any given parameter, which made estimation more realistic. CONCLUSION: The organized matrix allows for rapid obtainment of replicate parameter estimates, without error due to exhaustive calculations. PMID:19641642
An evidence based method to calculate pedestrian crossing speeds in vehicle collisions (PCSC).
Bastien, C; Wellings, R; Burnett, B
2018-06-07
Pedestrian accident reconstruction is necessary to establish cause of death, i.e. establishing vehicle collision speed as well as circumstances leading to the pedestrian being impacted and determining culpability of those involved for subsequent court enquiry. Understanding the complexity of the pedestrian attitude during an accident investigation is necessary to ascertain the causes leading to the tragedy. A generic new method, named Pedestrian Crossing Speed Calculator (PCSC), based on vector algebra, is proposed to compute the pedestrian crossing speed at the moment of impact. PCSC uses vehicle damage and pedestrian anthropometric dimensions to establish a combination of head projection angles against the windscreen; this angle is then compared against the combined velocities angle created from the vehicle and the pedestrian crossing speed at the time of impact. This method has been verified using one accident fatality case in which the exact vehicle and pedestrian crossing speeds were known from Police forensic video analysis. PCSC was then applied on two other accident scenarios and correctly corroborated with the witness statements regarding the pedestrians crossing behaviours. The implications of PCSC could be significant once fully validated against further future accident data, as this method is reversible, allowing the computation of vehicle impact velocity from pedestrian crossing speed as well as verifying witness accounts. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Nagura, Takuya; Kawachi, Shingo; Chokawa, Kenta; Shirakawa, Hiroki; Araidai, Masaaki; Kageshima, Hiroyuki; Endoh, Tetsuo; Shiraishi, Kenji
2018-04-01
It is expected that the off-state leakage current of MOSFETs can be reduced by employing vertical body channel MOSFETs (V-MOSFETs). However, in fabricating these devices, the structure of the Si pillars sometimes cannot be maintained during oxidation, since Si atoms sometimes disappear from the Si/oxide interface (Si missing). Thus, in this study, we used first-principles calculations based on the density functional theory, and investigated the Si emission behavior at the various interfaces on the basis of the Si emission model including its atomistic structure and dependence on Si crystal orientation. The results show that the order in which Si atoms are more likely to be emitted during thermal oxidation is (111) > (110) > (310) > (100). Moreover, the emission of Si atoms is enhanced as the compressive strain increases. Therefore, the emission of Si atoms occurs more easily in V-MOSFETs than in planar MOSFETs. To reduce Si missing in V-MOSFETs, oxidation processes that induce less strain, such as wet or pyrogenic oxidation, are necessary.
First principles calculation of elastic and magnetic properties of Cr-based full-Heusler alloys
NASA Astrophysics Data System (ADS)
Aly, Samy H.; Shabara, Reham M.
2014-06-01
We present an ab-initio study of the elastic and magnetic properties of Cr-based full-Heusler alloys within the first-principles density functional theory. The lattice constant, magnetic moment, bulk modulus and density of states are calculated using the full-potential nonorthogonal local-orbital minimum basis (FPLO) code in the Generalized Gradient Approximation (GGA) scheme. Only the two alloys Co2CrSi and Fe2CrSi are half-metallic with energy gaps of 0.88 and 0.55 eV in the spin-down channel respectively. We have predicted the metallicity state for Fe2CrSb, Ni2CrIn, Cu2CrIn, and Cu2CrSi alloys. Fe2CrSb shows a strong pressure dependent, e.g. exhibits metallicity at zero pressure and turns into a half-metal at P≥10 GPa. The total and partial magnetic moments of these alloys were studied under higher pressure, e.g. in Co2CrIn, the total magnetic moment is almost unchanged under higher pressure up to 500 GPa.
Ruthenia-based electrochemical supercapacitors: insights from first-principles calculations.
Ozoliņš, Vidvuds; Zhou, Fei; Asta, Mark
2013-05-21
Electrochemical supercapacitors (ECs) have important applications in areas wherethe need for fast charging rates and high energy density intersect, including in hybrid and electric vehicles, consumer electronics, solar cell based devices, and other technologies. In contrast to carbon-based supercapacitors, where energy is stored in the electrochemical double-layer at the electrode/electrolyte interface, ECs involve reversible faradaic ion intercalation into the electrode material. However, this intercalation does not lead to phase change. As a result, ECs can be charged and discharged for thousands of cycles without loss of capacity. ECs based on hydrous ruthenia, RuO2·xH2O, exhibit some of the highest specific capacitances attained in real devices. Although RuO2 is too expensive for widespread practical use, chemists have long used it as a model material for investigating the fundamental mechanisms of electrochemical supercapacitance and heterogeneous catalysis. In this Account, we discuss progress in first-principles density-functional theory (DFT) based studies of the electronic structure, thermodynamics, and kinetics of hydrous and anhydrous RuO2. We find that DFT correctly reproduces the metallic character of the RuO2 band structure. In addition, electron-proton double-insertion into bulk RuO2 leads to the formation of a polar covalent O-H bond with a fractional increase of the Ru charge in delocalized d-band states by only 0.3 electrons. This is in slight conflict with the common assumption of a Ru valence change from Ru(4+) to Ru(3+). Using the prototype electrostatic ground state (PEGS) search method, we predict a crystalline RuOOH compound with a formation energy of only 0.15 eV per proton. The calculated voltage for the onset of bulk proton insertion in the dilute limit is only 0.1 V with respect to the reversible hydrogen electrode (RHE), in reasonable agreement with the 0.4 V threshold for a large diffusion-limited contribution measured experimentally
A clustering approach to segmenting users of internet-based risk calculators.
Harle, C A; Downs, J S; Padman, R
2011-01-01
Risk calculators are widely available Internet applications that deliver quantitative health risk estimates to consumers. Although these tools are known to have varying effects on risk perceptions, little is known about who will be more likely to accept objective risk estimates. To identify clusters of online health consumers that help explain variation in individual improvement in risk perceptions from web-based quantitative disease risk information. A secondary analysis was performed on data collected in a field experiment that measured people's pre-diabetes risk perceptions before and after visiting a realistic health promotion website that provided quantitative risk information. K-means clustering was performed on numerous candidate variable sets, and the different segmentations were evaluated based on between-cluster variation in risk perception improvement. Variation in responses to risk information was best explained by clustering on pre-intervention absolute pre-diabetes risk perceptions and an objective estimate of personal risk. Members of a high-risk overestimater cluster showed large improvements in their risk perceptions, but clusters of both moderate-risk and high-risk underestimaters were much more muted in improving their optimistically biased perceptions. Cluster analysis provided a unique approach for segmenting health consumers and predicting their acceptance of quantitative disease risk information. These clusters suggest that health consumers were very responsive to good news, but tended not to incorporate bad news into their self-perceptions much. These findings help to quantify variation among online health consumers and may inform the targeted marketing of and improvements to risk communication tools on the Internet.
A voxel-based mouse for internal dose calculations using Monte Carlo simulations (MCNP).
Bitar, A; Lisbona, A; Thedrez, P; Sai Maurel, C; Le Forestier, D; Barbet, J; Bardies, M
2007-02-21
Murine models are useful for targeted radiotherapy pre-clinical experiments. These models can help to assess the potential interest of new radiopharmaceuticals. In this study, we developed a voxel-based mouse for dosimetric estimates. A female nude mouse (30 g) was frozen and cut into slices. High-resolution digital photographs were taken directly on the frozen block after each section. Images were segmented manually. Monoenergetic photon or electron sources were simulated using the MCNP4c2 Monte Carlo code for each source organ, in order to give tables of S-factors (in Gy Bq-1 s-1) for all target organs. Results obtained from monoenergetic particles were then used to generate S-factors for several radionuclides of potential interest in targeted radiotherapy. Thirteen source and 25 target regions were considered in this study. For each source region, 16 photon and 16 electron energies were simulated. Absorbed fractions, specific absorbed fractions and S-factors were calculated for 16 radionuclides of interest for targeted radiotherapy. The results obtained generally agree well with data published previously. For electron energies ranging from 0.1 to 2.5 MeV, the self-absorbed fraction varies from 0.98 to 0.376 for the liver, and from 0.89 to 0.04 for the thyroid. Electrons cannot be considered as 'non-penetrating' radiation for energies above 0.5 MeV for mouse organs. This observation can be generalized to radionuclides: for example, the beta self-absorbed fraction for the thyroid was 0.616 for I-131; absorbed fractions for Y-90 for left kidney-to-left kidney and for left kidney-to-spleen were 0.486 and 0.058, respectively. Our voxel-based mouse allowed us to generate a dosimetric database for use in preclinical targeted radiotherapy experiments.
NASA Astrophysics Data System (ADS)
Dai, Wen-Wu; Zhao, Zong-Yan
2017-06-01
Heterostructure constructing is a feasible and powerful strategy to enhance the performance of photocatalysts, because they can be tailored to have desirable photo-electronics properties and couple distinct advantageous of components. As a novel layered photocatalyst, the main drawback of BiOI is the low edge position of the conduction band. To address this problem, it is meaningful to find materials that possess suitable band gap, proper band edge position, and high mobility of carrier to combine with BiOI to form hetertrostructure. In this study, graphene-based materials (including: graphene, graphene oxide, and g-C3N4) were chosen as candidates to achieve this purpose. The charge transfer, interface interaction, and band offsets are focused on and analyzed in detail by DFT calculations. Results indicated that graphene-based materials and BiOI were in contact and formed van der Waals heterostructures. The valence and conduction band edge positions of graphene oxide, g-C3N4 and BiOI changed with the Fermi level and formed the standard type-II heterojunction. In addition, the overall analysis of charge density difference, Mulliken population, and band offsets indicated that the internal electric field is facilitate for the separation of photo-generated electron-hole pairs, which means these heterostructures can enhance the photocatalytic efficiency of BiOI. Thus, BiOI combines with 2D materials to construct heterostructure not only make use of the unique high electron mobility, but also can adjust the position of energy bands and promote the separation of photo-generated carriers, which provide useful hints for the applications in photocatalysis.
Earlier Parental Set Bedtimes as a Protective Factor Against Depression and Suicidal Ideation
Gangwisch, James E.; Babiss, Lindsay A.; Malaspina, Dolores; Turner, J. Blake; Zammit, Gary K.; Posner, Kelly
2010-01-01
Study Objectives: To examine the relationships between parental set bedtimes, sleep duration, and depression as a quasi-experiment to explore the potentially bidirectional relationship between short sleep duration and depression. Short sleep duration has been shown to precede depression, but this could be explained as a prodromal symptom of depression. Depression in an adolescent can affect his/her chosen bedtime, but it is less likely to affect a parent's chosen set bedtime which can establish a relatively stable upper limit that can directly affect sleep duration. Design: Multivariate cross-sectional analyses of the ADD Health using logistic regression. Setting: United States nationally representative, school-based, probability-based sample in 1994-96. Participants: Adolescents (n = 15,659) in grades 7 to 12. Measurements and Results: Adolescents with parental set bedtimes of midnight or later were 24% more likely to suffer from depression (OR = 1.24, 95% CI 1.04-1.49) and 20% more likely to have suicidal ideation (1.20, 1.01-1.41) than adolescents with parental set bedtimes of 10:00 PM or earlier, after controlling for covariates. Consistent with sleep duration and perception of getting enough sleep acting as mediators, the inclusion of these variables in the multivariate models appreciably attenuated the associations for depression (1.07, 0.88-1.30) and suicidal ideation (1.09, 0.92-1.29). Conclusions: The results from this study provide new evidence to strengthen the argument that short sleep duration could play a role in the etiology of depression. Earlier parental set bedtimes could therefore be protective against adolescent depression and suicidal ideation by lengthening sleep duration. Citation: Gangwisch JE; Babiss LA; Malaspina D; Turner JB; Zammit GK; Posner K. Earlier parental set bedtimes as a protective factor against depression and suicidal ideation. SLEEP 2010;33(1):97-106. PMID:20120626
Digital Game-Based Learning: A Supplement for Medication Calculation Drills in Nurse Education
ERIC Educational Resources Information Center
Foss, Brynjar; Lokken, Atle; Leland, Arne; Stordalen, Jorn; Mordt, Petter; Oftedal, Bjorg F.
2014-01-01
Student nurses, globally, appear to struggle with medication calculations. In order to improve these skills among student nurses, the authors developed The Medication Game--an online computer game that aims to provide simple mathematical and medical calculation drills, and help students practise standard medical units and expressions. The aim of…
Nonlinear optimization method of ship floating condition calculation in wave based on vector
NASA Astrophysics Data System (ADS)
Ding, Ning; Yu, Jian-xing
2014-08-01
Ship floating condition in regular waves is calculated. New equations controlling any ship's floating condition are proposed by use of the vector operation. This form is a nonlinear optimization problem which can be solved using the penalty function method with constant coefficients. And the solving process is accelerated by dichotomy. During the solving process, the ship's displacement and buoyant centre have been calculated by the integration of the ship surface according to the waterline. The ship surface is described using an accumulative chord length theory in order to determine the displacement, the buoyancy center and the waterline. The draught forming the waterline at each station can be found out by calculating the intersection of the ship surface and the wave surface. The results of an example indicate that this method is exact and efficient. It can calculate the ship floating condition in regular waves as well as simplify the calculation and improve the computational efficiency and the precision of results.
A Contemporary Prostate Biopsy Risk Calculator Based on Multiple Heterogeneous Cohorts.
Ankerst, Donna P; Straubinger, Johanna; Selig, Katharina; Guerrios, Lourdes; De Hoedt, Amanda; Hernandez, Javier; Liss, Michael A; Leach, Robin J; Freedland, Stephen J; Kattan, Michael W; Nam, Robert; Haese, Alexander; Montorsi, Francesco; Boorjian, Stephen A; Cooperberg, Matthew R; Poyet, Cedric; Vertosick, Emily; Vickers, Andrew J
2018-05-16
Prostate cancer prediction tools provide quantitative guidance for doctor-patient decision-making regarding biopsy. The widely used online Prostate Cancer Prevention Trial Risk Calculator (PCPTRC) utilized data from the 1990s based on six-core biopsies and outdated grading systems. We prospectively gathered data from men undergoing prostate biopsy in multiple diverse North American and European institutions participating in the Prostate Biopsy Collaborative Group (PBCG) in order to build a state-of-the-art risk prediction tool. We obtained data from 15 611 men undergoing 16 369 prostate biopsies during 2006-2017 at eight North American institutions for model-building and three European institutions for validation. We used multinomial logistic regression to estimate the risks of high-grade prostate cancer (Gleason score ≥7) on biopsy based on clinical characteristics, including age, prostate-specific antigen, digital rectal exam, African ancestry, first-degree family history, and prior negative biopsy. We compared the PBCG model to the PCPTRC using internal cross-validation and external validation on the European cohorts. Cross-validation on the North American cohorts (5992 biopsies) yielded the PBCG model area under the receiver operating characteristic curve (AUC) as 75.5% (95% confidence interval: 74.2-76.8), a small improvement over the AUC of 72.3% (70.9-73.7) for the PCPTRC (p<0.0001). However, calibration and clinical net benefit were far superior for the PBCG model. Using a risk threshold of 10%, clinical use of the PBCG model would lead to the equivalent of 25 fewer biopsies per 1000 patients without missing any high-grade cancers. Results were similar on external validation on 10 377 European biopsies. The PBCG model should be used in place of the PCPTRC for prediction of prostate biopsy outcome. A contemporary risk tool for outcomes on prostate biopsy based on the routine clinical risk factors is now available for informed decision-making. Copyright
NASA Astrophysics Data System (ADS)
Bidwell, Colin S.
2015-05-01
A method for calculating particle transport through turbo-machinery using the mixing plane analogy was developed and used to analyze the energy efficient engine . This method allows the prediction of temperature and phase change of water based particles along their path and the impingement efficiency and particle impact property data on various components in the engine. This methodology was incorporated into the LEWICE3D V3.5 software. The method was used to predict particle transport in the low pressure compressor of the . The was developed by NASA and GE in the early 1980s as a technology demonstrator and is representative of a modern high bypass turbofan engine. The flow field was calculated using the NASA Glenn ADPAC turbo-machinery flow solver. Computations were performed for a Mach 0.8 cruise condition at 11,887 m assuming a standard warm day for ice particle sizes of 5, 20 and 100 microns and a free stream particle concentration of . The impingement efficiency results showed that as particle size increased average impingement efficiencies and scoop factors increased for the various components. The particle analysis also showed that the amount of mass entering the inner core decreased with increased particle size because the larger particles were less able to negotiate the turn into the inner core due to particle inertia. The particle phase change analysis results showed that the larger particles warmed less as they were transported through the low pressure compressor. Only the smallest 5 micron particles were warmed enough to produce melting with a maximum average melting fraction of 0.18. The results also showed an appreciable amount of particle sublimation and evaporation for the 5 micron particles entering the engine core (22.6 %).
Pocket calculator for local fire-danger ratings
Richard J. Barney; William C. Fischer
1967-01-01
In 1964, Stockstad and Barney published tables that provided conversion factors for calculating local fire danger in the Intermountain area according to fuel types, locations, steepness of terrain, aspects, and times of day. These tables were based on the National Fire-Danger Rating System published earlier that year. This system was adopted for operational use in...
Xiao, Xiao; Hua, Xue-Ming; Wu, Yi-Xiong; Li, Fang
2012-09-01
Pulsed TIG welding is widely used in industry due to its superior properties, and the measurement of arc temperature is important to analysis of welding process. The relationship between particle densities of Ar and temperature was calculated based on the theory of spectrum, the relationship between emission coefficient of spectra line at 794.8 nm and temperature was calculated, arc image of spectra line at 794.8 nm was captured by high speed camera, and both the Abel inversion and Fowler-Milne method were used to calculate the temperature distribution of pulsed TIG welding.
Effectiveness of a computer based medication calculation education and testing programme for nurses.
Sherriff, Karen; Burston, Sarah; Wallis, Marianne
2012-01-01
The aim of the study was to evaluate the effect of an on-line, medication calculation education and testing programme. The outcome measures were medication calculation proficiency and self efficacy. This quasi-experimental study involved the administration of questionnaires before and after nurses completed annual medication calculation testing. The study was conducted in two hospitals in south-east Queensland, Australia, which provide a variety of clinical services including obstetrics, paediatrics, ambulatory, mental health, acute and critical care and community services. Participants were registered nurses (RNs) and enrolled nurses with a medication endorsement (EN(Med)) working as clinicians (n=107). Data pertaining to success rate, number of test attempts, self-efficacy, medication calculation error rates and nurses' satisfaction with the programme were collected. Medication calculation scores at first test attempt showed improvement following one year of access to the programme. Two of the self-efficacy subscales improved over time and nurses reported satisfaction with the online programme. Results of this study may facilitate the continuation and expansion of medication calculation and administration education to improve nursing knowledge, inform practise and directly improve patient safety. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.
Earlier Mother's Age at Menarche Predicts Rapid Infancy Growth and Childhood Obesity
Ong, Ken K; Northstone, Kate; Wells, Jonathan CK; Rubin, Carol; Ness, Andy R; Golding, Jean; Dunger, David B
2007-01-01
Background Early menarche tends to be preceded by rapid infancy weight gain and is associated with increased childhood and adult obesity risk. As age at menarche is a heritable trait, we hypothesised that age at menarche in the mother may in turn predict her children's early growth and obesity risk. Methods and Findings We tested associations between mother's age at menarche, mother's adult body size and obesity risk, and her children's growth and obesity risk in 6,009 children from the UK population-based Avon Longitudinal Study of Parents and Children (ALSPAC) birth cohort who had growth and fat mass at age 9 y measured by dual-energy X-ray absorptiometry. A subgroup of 914 children also had detailed infancy and childhood growth data. In the mothers, earlier menarche was associated with shorter adult height (by 0.64 cm/y), increased weight (0.92 kg/y), and body mass index (BMI, 0.51 kg/m2/y; all p < 0.001). In contrast, in her children, earlier mother's menarche predicted taller height at 9 y (by 0.41 cm/y) and greater weight (0.80 kg/y), BMI (0.29 kg/m2/y), and fat mass index (0.22 kg/m2/year; all p < 0.001). Children in the earliest mother's menarche quintile (≤11 y) were more obese than the oldest quintile (≥15 y) (OR, 2.15, 95% CI 1.46 to 3.17; p < 0.001, adjusted for mother's education and BMI). In the subgroup, children in the earliest quintile showed faster gains in weight (p < 0.001) and height (p < 0.001) only from birth to 2 y, but not from 2 to 9 y (p = 0.3–0.8). Conclusions Earlier age at menarche may be a transgenerational marker of a faster growth tempo, characterised by rapid weight gain and growth, particularly during infancy, and leading to taller childhood stature, but likely earlier maturation and therefore shorter adult stature. This growth pattern confers increased childhood and adult obesity risks. PMID:17455989
Lin, Hai; Zhao, Yan; Tishchenko, Oksana; Truhlar, Donald G
2006-09-01
The multiconfiguration molecular mechanics (MCMM) method is a general algorithm for generating potential energy surfaces for chemical reactions by fitting high-level electronic structure data with the help of molecular mechanical (MM) potentials. It was previously developed as an extension of standard MM to reactive systems by inclusion of multidimensional resonance interactions between MM configurations corresponding to specific valence bonding patterns, with the resonance matrix element obtained from quantum mechanical (QM) electronic structure calculations. In particular, the resonance matrix element is obtained by multidimensional interpolation employing a finite number of geometries at which electronic-structure calculations of the energy, gradient, and Hessian are carried out. In this paper, we present a strategy for combining MCMM with hybrid quantum mechanical molecular mechanical (QM/MM) methods. In the new scheme, electronic-structure information for obtaining the resonance integral is obtained by means of hybrid QM/MM calculations instead of fully QM calculations. As such, the new strategy can be applied to the studies of very large reactive systems. The new MCMM scheme is tested for two hydrogen-transfer reactions. Very encouraging convergence is obtained for rate constants including tunneling, suggesting that the new MCMM method, called QM/MM-MCMM, is a very general, stable, and efficient procedure for generating potential energy surfaces for large reactive systems. The results are found to converge well with respect to the number of Hessians. The results are also compared to calculations in which the resonance integral data are obtained by pure QM, and this illustrates the sensitivity of reaction rate calculations to the treatment of the QM-MM border. For the smaller of the two systems, comparison is also made to direct dynamics calculations in which the potential energies are computed quantum mechanically on the fly.
Wang, L; Lovelock, M; Chui, C S
1999-12-01
To further validate the Monte Carlo dose-calculation method [Med. Phys. 25, 867-878 (1998)] developed at the Memorial Sloan-Kettering Cancer Center, we have performed experimental verification in various inhomogeneous phantoms. The phantom geometries included simple layered slabs, a simulated bone column, a simulated missing-tissue hemisphere, and an anthropomorphic head geometry (Alderson Rando Phantom). The densities of the inhomogeneity range from 0.14 to 1.86 g/cm3, simulating both clinically relevant lunglike and bonelike materials. The data are reported as central axis depth doses, dose profiles, dose values at points of interest, such as points at the interface of two different media and in the "nasopharynx" region of the Rando head. The dosimeters used in the measurement included dosimetry film, TLD chips, and rods. The measured data were compared to that of Monte Carlo calculations for the same geometrical configurations. In the case of the Rando head phantom, a CT scan of the phantom was used to define the calculation geometry and to locate the points of interest. The agreement between the calculation and measurement is generally within 2.5%. This work validates the accuracy of the Monte Carlo method. While Monte Carlo, at present, is still too slow for routine treatment planning, it can be used as a benchmark against which other dose calculation methods can be compared.
Monte Carlo based electron treatment planning and cutout output factor calculations
NASA Astrophysics Data System (ADS)
Mitrou, Ellis
Electron radiotherapy (RT) offers a number of advantages over photons. The high surface dose, combined with a rapid dose fall-off beyond the target volume presents a net increase in tumor control probability and decreases the normal tissue complication for superficial tumors. Electron treatments are normally delivered clinically without previously calculated dose distributions due to the complexity of the electron transport involved and greater error in planning accuracy. This research uses Monte Carlo (MC) methods to model clinical electron beams in order to accurately calculate electron beam dose distributions in patients as well as calculate cutout output factors, reducing the need for a clinical measurement. The present work is incorporated into a research MC calculation system: McGill Monte Carlo Treatment Planning (MMCTP) system. Measurements of PDDs, profiles and output factors in addition to 2D GAFCHROMICRTM EBT2 film measurements in heterogeneous phantoms were obtained to commission the electron beam model. The use of MC for electron TP will provide more accurate treatments and yield greater knowledge of the electron dose distribution within the patient. The calculation of output factors could invoke a clinical time saving of up to 1 hour per patient.
A new Morse-oscillator based Hamiltonian for H 3+: Calculation of line strengths
NASA Astrophysics Data System (ADS)
Jensen, Per; Špirko, V.
1986-07-01
In two recent publications [V. Špirko, P. Jensen, P. R. Bunker, and A. Čejchan, J. Mol. Spectrosc.112, 183-202 (1985); P. Jensen, V. Špirko, and P. R. Bunker, J. Mol. Spectrosc.115, 269-293 (1986)], we have described the development of Morse oscillator adapted rotation-vibration Hamiltonians for equilateral triangular X3 and Y2X molecules, and we have used these Hamiltonians to calculate the rotation-vibration energies for H 3+ and its X3+ and Y2X+ isotopes from ab initio potential energy functions. The present paper presents a method for calculating rotation-vibration line strengths of H 3+ and its isotopes using an ab initio dipole moment function [G. D. Carney and R. N. Porter, J. Chem. Phys.60, 4251-4264 (1974)] together with the energies and wave-functions obtained by diagonalization of the Morse oscillator adapted Hamiltonians. We use this method for calculating the vibrational transition moments involving the lowest vibrational states of H 3+, D 3+, H 2D +, and D 2H +. Further, we calculate the line strengths of the low- J transitions in the rotational spectra of H 3+ in the vibrational ground state and in the ν1 and ν2 states. We hope that the calculations presented will facilitate the search for further rotation-vibration transitions of H 3+ and its isotopes.
Abdel-Khalik, Hany S.; Zhang, Qiong
2014-05-20
The development of hybrid Monte-Carlo-Deterministic (MC-DT) approaches, taking place over the past few decades, have primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e. at the detector locations(s). This work further develops a recently introduced global variance reduction approach, denoted by the SUBSPACE approach is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executedmore » in the order of 10 3 - 10 5 times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.« less
Volume calculation of CT lung lesions based on Halton low-discrepancy sequences
NASA Astrophysics Data System (ADS)
Li, Shusheng; Wang, Liansheng; Li, Shuo
2017-03-01
Volume calculation from the Computed Tomography (CT) lung lesions data is a significant parameter for clinical diagnosis. The volume is widely used to assess the severity of the lung nodules and track its progression, however, the accuracy and efficiency of previous studies are not well achieved for clinical uses. It remains to be a challenging task due to its tight attachment to the lung wall, inhomogeneous background noises and large variations in sizes and shape. In this paper, we employ Halton low-discrepancy sequences to calculate the volume of the lung lesions. The proposed method directly compute the volume without the procedure of three-dimension (3D) model reconstruction and surface triangulation, which significantly improves the efficiency and reduces the complexity. The main steps of the proposed method are: (1) generate a certain number of random points in each slice using Halton low-discrepancy sequences and calculate the lesion area of each slice through the proportion; (2) obtain the volume by integrating the areas in the sagittal direction. In order to evaluate our proposed method, the experiments were conducted on the sufficient data sets with different size of lung lesions. With the uniform distribution of random points, our proposed method achieves more accurate results compared with other methods, which demonstrates the robustness and accuracy for the volume calculation of CT lung lesions. In addition, our proposed method is easy to follow and can be extensively applied to other applications, e.g., volume calculation of liver tumor, atrial wall aneurysm, etc.
Weissman, David G; Schriber, Roberta A; Fassbender, Catherine; Atherton, Olivia; Krafft, Cynthia; Robins, Richard W; Hastings, Paul D; Guyer, Amanda E
2015-12-01
Early adolescent onset of substance use is a robust predictor of future substance use disorders. We examined the relation between age of substance use initiation and resting state functional connectivity (RSFC) of the core reward processing (nucleus accumbens; NAcc) to cognitive control (prefrontal cortex; PFC) brain networks. Adolescents in a longitudinal study of Mexican-origin youth reported their substance use annually from ages 10 to 16 years. At age 16, 69 adolescents participated in a resting state functional magnetic resonance imaging scan. Seed-based correlational analyses were conducted using regions of interest in bilateral NAcc. The earlier that adolescents initiated substance use, the stronger the connectivity between bilateral NAcc and right dorsolateral PFC, right dorsomedial PFC, right pre-supplementary motor area, right inferior parietal lobule, and left medial temporal gyrus. The regions that demonstrated significant positive linear relationships between the number of adolescent years using substances and connectivity with NAcc are nodes in the right frontoparietal network, which is central to cognitive control. The coupling of reward and cognitive control networks may be a mechanism through which earlier onset of substance use is related to brain function over time, a trajectory that may be implicated in subsequent substance use disorders. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
Economic Costs Avoided by Diagnosing Melanoma Six Months Earlier Justify >100 Benign Biopsies.
Aires, Daniel J; Wick, Jo; Shaath, Tarek S; Rajpara, Anand N; Patel, Vikas; Badawi, Ahmed H; Li, Cicy; Fraga, Garth R; Doolittle, Gary; Liu, Deede Y
2016-05-01
New melanoma drugs bring enormous benefits but do so at significant costs. Because melanoma grows deeper and deadlier over time, deeper lesions are costlier due to increased sentinel lymph node biopsy, chemotherapy, and disease-associated income loss. Prior studies have justified pigmented lesion biopsies on a "value per life" basis; by contrast we sought to assess how many biopsies are justified per melanoma found on a purely economic basis. We modeled how melanomas in the United States would behave if diagnosis were delayed by 6 months, eg, not biopsied, only observed until the next surveillance visit. Economic loss from delayed biopsy is the obverse of economic benefit of performing biopsy earlier. Growth rates were based on Liu et al. The results of this study can be applied to all patients presenting to dermatologists with pigmented skin lesions suspicious for melanoma. In-situ melanomas were excluded because no studies to date have modeled growth rates analogous to those for invasive melanoma. We assume conservatively that all melanomas not biopsied initially will be biopsied and treated 6 months later. Major modeled costs are (1) increased sentinel lymph node biopsy, (2) increased chemotherapy for metastatic lesions using increased 5-yr death as metastasis marker, and (3) income loss per melanoma death at $413,370 as previously published. Costs avoided by diagnosing melanoma earlier justify 170 biopsies per melanoma found. Efforts to penalize "unnecessary" biopsies may be economically counterproductive.
J Drugs Dermatol. 2016;15(5):527-532.
Identified research directions for using manufacturing knowledge earlier in the product lifecycle
Hedberg, Thomas D.; Hartman, Nathan W.; Rosche, Phil; Fischer, Kevin
2016-01-01
Design for Manufacturing (DFM), especially the use of manufacturing knowledge to support design decisions, has received attention in the academic domain. However, industry practice has not been studied enough to provide solutions that are mature for industry. The current state of the art for DFM is often rule-based functionality within Computer-Aided Design (CAD) systems that enforce specific design requirements. That rule-based functionality may or may not dynamically affect geometry definition. And, if rule-based functionality exists in the CAD system, it is typically a customization on a case-by-case basis. Manufacturing knowledge is a phrase with vast meanings, which may include knowledge on the effects of material properties decisions, machine and process capabilities, or understanding the unintended consequences of design decisions on manufacturing. One of the DFM questions to answer is how can manufacturing knowledge, depending on its definition, be used earlier in the product lifecycle to enable a more collaborative development environment? This paper will discuss the results of a workshop on manufacturing knowledge that highlights several research questions needing more study. This paper proposes recommendations for investigating the relationship of manufacturing knowledge with shape, behavior, and context characteristics of product to produce a better understanding of what knowledge is most important. In addition, the proposal includes recommendations for investigating the system-level barriers to reusing manufacturing knowledge and how model-based manufacturing may ease the burden of knowledge sharing. Lastly, the proposal addresses the direction of future research for holistic solutions of using manufacturing knowledge earlier in the product lifecycle. PMID:27990027
Identified research directions for using manufacturing knowledge earlier in the product lifecycle.
Hedberg, Thomas D; Hartman, Nathan W; Rosche, Phil; Fischer, Kevin
2017-01-01
Design for Manufacturing (DFM), especially the use of manufacturing knowledge to support design decisions, has received attention in the academic domain. However, industry practice has not been studied enough to provide solutions that are mature for industry. The current state of the art for DFM is often rule-based functionality within Computer-Aided Design (CAD) systems that enforce specific design requirements. That rule-based functionality may or may not dynamically affect geometry definition. And, if rule-based functionality exists in the CAD system, it is typically a customization on a case-by-case basis. Manufacturing knowledge is a phrase with vast meanings, which may include knowledge on the effects of material properties decisions, machine and process capabilities, or understanding the unintended consequences of design decisions on manufacturing. One of the DFM questions to answer is how can manufacturing knowledge, depending on its definition, be used earlier in the product lifecycle to enable a more collaborative development environment? This paper will discuss the results of a workshop on manufacturing knowledge that highlights several research questions needing more study. This paper proposes recommendations for investigating the relationship of manufacturing knowledge with shape, behavior, and context characteristics of product to produce a better understanding of what knowledge is most important. In addition, the proposal includes recommendations for investigating the system-level barriers to reusing manufacturing knowledge and how model-based manufacturing may ease the burden of knowledge sharing. Lastly, the proposal addresses the direction of future research for holistic solutions of using manufacturing knowledge earlier in the product lifecycle.
Calculating the costs of work-based training: the case of NHS Cadet Schemes.
Norman, Ian; Normand, Charles; Watson, Roger; Draper, Jan; Jowett, Sandra; Coster, Samantha
2008-09-01
The worldwide shortage of registered nurses [Buchan, J., Calman, L., 2004. The Global Shortage of Registered Nurses: An Overview of Issues And Actions. International Council of Nurses, Geneva] points to the need for initiatives which increase access to the profession, in particular, to those sections of the population who traditionally do not enter nursing. This paper reports findings on the costs associated with one such initiative, the British National Health Service (NHS) Cadet Scheme, designed to provide a mechanism for entry into nurse training for young people without conventional academic qualifications. The paper illustrates an approach to costing work-based learning interventions which offsets the value attributed to trainees' work against their training costs. To provide a preliminary evaluation of the cost of the NHS Cadet Scheme initiative. Questionnaire survey of the leaders of all cadet schemes in England (n=62, 100% response) in December 2002 to collect financial information and data on progression of cadets through the scheme, and a follow-up questionnaire survey of the same scheme leaders to improve the quality of information, which was completed in January 2004 (n=56, 59% response). The mean cost of producing a cadet to progress successfully through the scheme and onto a pre-registration nursing programme depends substantially on the value of their contribution to healthcare work during training and the progression rate of students through the scheme. The findings from this evaluation suggest that these factors varied very widely across the 62 schemes. Established schemes have, on average, lower attrition and higher progression rates than more recently established schemes. Using these rates, we estimate that on maturity, a cadet scheme will progress approximately 60% of students into pre-registration nurse training. As comparative information was not available from similar initiatives that provide access to nurse training, it was not possible to
Rice, P; O'Brien, D; Shalloo, L; Holden, N M
2017-11-01
A major methodological issue for life cycle assessment, commonly used to quantify greenhouse gas emissions from livestock systems, is allocation from multifunctional processes. When a process produces more than one output, the environmental burden has to be assigned between the outputs, such as milk and meat from a dairy cow. In the absence of an objective function for choosing an allocation method, a decision must be made considering a range of factors, one of which is the availability and quality of necessary data. The objective of this study was to evaluate allocation methods to calculate the climate change impact of the economically average (€/ha) dairy farm in Ireland considering both milk and meat outputs, focusing specifically on the pedigree of the available data for each method. The methods were: economic, energy, protein, emergy, mass of liveweight, mass of carcass weight and physical causality. The data quality for each method was expressed using a pedigree score based on reliability of the source, completeness, temporal applicability, geographical alignment and technological appropriateness. Scenario analysis was used to compare the normalised impact per functional unit (FU) from the different allocation methods, between the best and worst third of farms (in economic terms, €/ha) in the national farm survey. For the average farm, the allocation factors for milk ranged from 75% (physical causality) to 89% (mass of carcass weight), which in turn resulted in an impact per FU, from 1.04 to 1.22 kg CO 2 -eq/kg (fat and protein corrected milk). Pedigree scores ranged from 6.0 to 17.1 with protein and economic allocation having the best pedigree. It was concluded that when making the choice of allocation method, the quality of the data available (pedigree) should be given greater emphasis during the decision making process because the effect of allocation on the results. A range of allocation methods could be deployed to understand the uncertainty
Douillard, J M; Henry, M
2003-07-15
A very simple route to calculation of the surface energy of solids is proposed because this value is very difficult to determine experimentally. The first step is the calculation of the attractive part of the electrostatic energy of crystals. The partial charges used in this calculation are obtained by using electronegativity equalization and scales of electronegativity and hardness deduced from physical characteristics of the atom. The lattice energies of the infinite crystal and of semi-infinite layers are then compared. The difference is related to the energy of cohesion and then to the surface energy. Very good results are obtained with ice, if one compares with the surface energy of liquid water, which is generally considered a good approximation of the surface energy of ice.
A GPU-based calculation using the three-dimensional FDTD method for electromagnetic field analysis.
Nagaoka, Tomoaki; Watanabe, Soichi
2010-01-01
Numerical simulations with the numerical human model using the finite-difference time domain (FDTD) method have recently been performed frequently in a number of fields in biomedical engineering. However, the FDTD calculation runs too slowly. We focus, therefore, on general purpose programming on the graphics processing unit (GPGPU). The three-dimensional FDTD method was implemented on the GPU using Compute Unified Device Architecture (CUDA). In this study, we used the NVIDIA Tesla C1060 as a GPGPU board. The performance of the GPU is evaluated in comparison with the performance of a conventional CPU and a vector supercomputer. The results indicate that three-dimensional FDTD calculations using a GPU can significantly reduce run time in comparison with that using a conventional CPU, even a native GPU implementation of the three-dimensional FDTD method, while the GPU/CPU speed ratio varies with the calculation domain and thread block size.
Optimum design calculations for detectors based on ZnSe(Те,О) scintillators
NASA Astrophysics Data System (ADS)
Katrunov, K.; Ryzhikov, V.; Gavrilyuk, V.; Naydenov, S.; Lysetska, O.; Litichevskyi, V.
2013-06-01
Light collection in scintillators ZnSe(X), where X is an isovalent dopant, was studied using Monte Carlo calculations. Optimum design was determined for detectors of "scintillator—Si-photodiode" type, which can involve either one scintillation element or scintillation layers of large area made of small-crystalline grains. The calculations were carried out both for determination of the optimum scintillator shape and for design optimization of light guides, on the surface of which the layer of small-crystalline grains is formed.
Peng, Hai-Qin; Liu, Yan; Gao, Xue-Long; Wang, Hong-Wu; Chen, Yi; Cai, Hui-Yi
2017-11-01
While point source pollutions have gradually been controlled in recent years, the non-point source pollution problem has become increasingly prominent. The receiving waters are frequently polluted by the initial stormwater from the separate stormwater system and the wastewater from sewage pipes through stormwater pipes. Consequently, calculating the intercepted runoff depth has become a problem that must be resolved immediately for initial stormwater pollution management. The accurate calculation of intercepted runoff depth provides a solid foundation for selecting the appropriate size of intercepting facilities in drainage and interception projects. This study establishes a separate stormwater system for the Yishan Building watershed of Fuzhou City using the InfoWorks Integrated Catchment Management (InfoWorks ICM), which can predict the stormwater flow velocity and the flow of discharge outlet after each rainfall. The intercepted runoff depth is calculated from the stormwater quality and environmental capacity of the receiving waters. The average intercepted runoff depth from six rainfall events is calculated as 4.1 mm based on stormwater quality. The average intercepted runoff depth from six rainfall events is calculated as 4.4 mm based on the environmental capacity of the receiving waters. The intercepted runoff depth differs when calculated from various aspects. The selection of the intercepted runoff depth depends on the goal of water quality control, the self-purification capacity of the water bodies, and other factors of the region.
Hirano, Toshiyuki; Sato, Fumitoshi
2014-07-28
We used grid-free modified Cholesky decomposition (CD) to develop a density-functional-theory (DFT)-based method for calculating the canonical molecular orbitals (CMOs) of large molecules. Our method can be used to calculate standard CMOs, analytically compute exchange-correlation terms, and maximise the capacity of next-generation supercomputers. Cholesky vectors were first analytically downscaled using low-rank pivoted CD and CD with adaptive metric (CDAM). The obtained Cholesky vectors were distributed and stored on each computer node in a parallel computer, and the Coulomb, Fock exchange, and pure exchange-correlation terms were calculated by multiplying the Cholesky vectors without evaluating molecular integrals in self-consistent field iterations. Our method enables DFT and massively distributed memory parallel computers to be used in order to very efficiently calculate the CMOs of large molecules.
Rolfes, Leàn; van Hunsel, Florence; Caster, Ola; Taavola, Henric; Taxis, Katja; van Puijenbroek, Eugène
2018-03-09
To explore if there is a difference between patients and healthcare professionals (HCPs) in time to reporting drug-adverse drug reaction (ADR) associations that led to drug safety signals. This was a retrospective comparison of time to reporting selected drug-ADR associations which led to drug safety signals between patients and HCPs. ADR reports were selected from the World Health Organization Global database of individual case safety reports, VigiBase. Reports were selected based on drug-ADR associations of actual drug safety signals. Primary outcome was the difference in time to reporting between patients and HCPs. The date of the first report for each individual signal was used as time zero. The difference in time between the date of the reports and time zero was calculated. Statistical differences in timing were analysed on the corresponding survival curves using a Mann-Whitney U test. In total, 2822 reports were included, of which 52.7% were patient reports, with a median of 25% for all included signals. For all signals, median time to signal detection was 10.4 years. Overall, HCPs reported earlier than patients: median 7.0 vs. 8.3 years (P < 0.001). Patients contributed a large proportion of reports on drug-ADR pairs that eventually became signals. HCPs reported 1.3 year earlier than patients. These findings strengthen the evidence on the value of patient reporting in signal detection and highlight an opportunity to encourage patients to report suspected ADRs even earlier in the future. © 2018 The Authors. British Journal of Clinical Pharmacology published by John Wiley & Sons Ltd on behalf of British Pharmacological Society.
Vibrational and structural study of onopordopicrin based on the FTIR spectrum and DFT calculations.
Chain, Fernando E; Romano, Elida; Leyton, Patricio; Paipa, Carolina; Catalán, César A N; Fortuna, Mario; Brandán, Silvia Antonia
2015-01-01
In the present work, the structural and vibrational properties of the sesquiterpene lactone onopordopicrin (OP) were studied by using infrared spectroscopy and density functional theory (DFT) calculations together with the 6-31G(∗) basis set. The harmonic vibrational wavenumbers for the optimized geometry were calculated at the same level of theory. The complete assignment of the observed bands in the infrared spectrum was performed by combining the DFT calculations with Pulay's scaled quantum mechanical force field (SQMFF) methodology. The comparison between the theoretical and experimental infrared spectrum demonstrated good agreement. Then, the results were used to predict the Raman spectrum. Additionally, the structural properties of OP, such as atomic charges, bond orders, molecular electrostatic potentials, characteristics of electronic delocalization and topological properties of the electronic charge density were evaluated by natural bond orbital (NBO), atoms in molecules (AIM) and frontier orbitals studies. The calculated energy band gap and the chemical potential (μ), electronegativity (χ), global hardness (η), global softness (S) and global electrophilicity index (ω) descriptors predicted for OP low reactivity, higher stability and lower electrophilicity index as compared with the sesquiterpene lactone cnicin containing similar rings. Copyright © 2015 Elsevier B.V. All rights reserved.
ARS-Media: A spreadsheet tool for calculating media recipes based on ion-specific constraints
USDA-ARS?s Scientific Manuscript database
ARS-Media is an ion solution calculator that uses Microsoft Excel to generate recipes of salts for complex ion mixtures specified by the user. Generating salt combinations (recipes) that result in pre-specified target ion values is a linear programming problem. Thus, the recipes are generated using ...
Comparison of MM/GBSA calculations based on explicit and implicit solvent simulations.
Godschalk, Frithjof; Genheden, Samuel; Söderhjelm, Pär; Ryde, Ulf
2013-05-28
Molecular mechanics with generalised Born and surface area solvation (MM/GBSA) is a popular method to calculate the free energy of the binding of ligands to proteins. It involves molecular dynamics (MD) simulations with an explicit solvent of the protein-ligand complex to give a set of snapshots for which energies are calculated with an implicit solvent. This change in the solvation method (explicit → implicit) would strictly require that the energies are reweighted with the implicit-solvent energies, which is normally not done. In this paper we calculate MM/GBSA energies with two generalised Born models for snapshots generated by the same methods or by explicit-solvent simulations for five synthetic N-acetyllactosamine derivatives binding to galectin-3. We show that the resulting energies are very different both in absolute and relative terms, showing that the change in the solvent model is far from innocent and that standard MM/GBSA is not a consistent method. The ensembles generated with the various solvent models are quite different with root-mean-square deviations of 1.2-1.4 Å. The ensembles can be converted to each other by performing short MD simulations with the new method, but the convergence is slow, showing mean absolute differences in the calculated energies of 6-7 kJ mol(-1) after 2 ps simulations. Minimisations show even slower convergence and there are strong indications that the energies obtained from minimised structures are different from those obtained by MD.
Zhu, Jinhan; Chen, Lixin; Chen, Along; Luo, Guangwen; Deng, Xiaowu; Liu, Xiaowei
2015-04-11
To use a graphic processing unit (GPU) calculation engine to implement a fast 3D pre-treatment dosimetric verification procedure based on an electronic portal imaging device (EPID). The GPU algorithm includes the deconvolution and convolution method for the fluence-map calculations, the collapsed-cone convolution/superposition (CCCS) algorithm for the 3D dose calculations and the 3D gamma evaluation calculations. The results of the GPU-based CCCS algorithm were compared to those of Monte Carlo simulations. The planned and EPID-based reconstructed dose distributions in overridden-to-water phantoms and the original patients were compared for 6 MV and 10 MV photon beams in intensity-modulated radiation therapy (IMRT) treatment plans based on dose differences and gamma analysis. The total single-field dose computation time was less than 8 s, and the gamma evaluation for a 0.1-cm grid resolution was completed in approximately 1 s. The results of the GPU-based CCCS algorithm exhibited good agreement with those of the Monte Carlo simulations. The gamma analysis indicated good agreement between the planned and reconstructed dose distributions for the treatment plans. For the target volume, the differences in the mean dose were less than 1.8%, and the differences in the maximum dose were less than 2.5%. For the critical organs, minor differences were observed between the reconstructed and planned doses. The GPU calculation engine was used to boost the speed of 3D dose and gamma evaluation calculations, thus offering the possibility of true real-time 3D dosimetric verification.
Bistoni, Giovanni; Riplinger, Christoph; Minenkov, Yury; Cavallo, Luigi; Auer, Alexander A; Neese, Frank
2017-07-11
The validity of the main approximations used in canonical and domain based pair natural orbital coupled cluster methods (CCSD(T) and DLPNO-CCSD(T), respectively) in standard chemical applications is discussed. In particular, we investigate the dependence of the results on the number of electrons included in the correlation treatment in frozen-core (FC) calculations and on the main threshold governing the accuracy of DLPNO all-electron (AE) calculations. Initially, scalar relativistic orbital energies for the ground state of the atoms from Li to Rn in the periodic table are calculated. An energy criterion is used for determining the orbitals that can be excluded from the correlation treatment in FC coupled cluster calculations without significant loss of accuracy. The heterolytic dissociation energy (HDE) of a series of metal compounds (LiF, NaF, AlF 3 , CaF 2 , CuF, GaF 3 , YF 3 , AgF, InF 3 , HfF 4 , and AuF) is calculated at the canonical CCSD(T) level, and the dependence of the results on the number of correlated electrons is investigated. Although for many of the studied reactions subvalence correlation effects contribute significantly to the HDE, the use of an energy criterion permits a conservative definition of the size of the core, allowing FC calculations to be performed in a black-box fashion while retaining chemical accuracy. A comparison of the CCSD and the DLPNO-CCSD methods in describing the core-core, core-valence, and valence-valence components of the correlation energy is given. It is found that more conservative thresholds must be used for electron pairs containing at least one core electron in order to achieve high accuracy in AE DLPNO-CCSD calculations relative to FC calculations. With the new settings, the DLPNO-CCSD method reproduces canonical CCSD results in both AE and FC calculations with the same accuracy.
Application of perturbation theory to lattice calculations based on method of cyclic characteristics
NASA Astrophysics Data System (ADS)
Assawaroongruengchot, Monchai
computing time when both direct and adjoint solutions are required. A problem that arises for the generalized adjoint problem is that the direct use of the negative external generalized adjoint sources in the adjoint solution algorithm results in negative generalized adjoint functions. A coupled flux biasing/decontamination scheme is applied to make the generalized adjoint functions positive using the adjoint functions in such a way that it can be used for the multigroup rebalance technique. Next we consider the application of the perturbation theory to the reactor problems. Since the coolant void reactivity (CVR) is a important factor in reactor safety analysis, we have decided to select this parameter for optimization studies. We consider the optimization and adjoint sensitivity techniques for the adjustments of CVR at beginning of burnup cycle (BOC) and k eff at end of burnup cycle (EOC) for a 2D Advanced CANDU Reactor (ACR) lattice. The sensitivity coefficients are evaluated using the perturbation theory based on the integral transport equations. Three sets of parameters for CVR-BOC and keff-EOC adjustments are studied: (1) Dysprosium density in the central pin with Uranium enrichment in the outer fuel rings, (2) Dysprosium density and Uranium enrichment both in the central pin, and (3) the same parameters as in the first case but the objective is to obtain a negative checkerboard CVR at beginning of cycle (CBCVR-BOC). To approximate the sensitivity coefficient at EOC, we perform constant-power burnup/depletion calculations for 600 full power days (FPD) using a slightly perturbed nuclear library and the unperturbed neutron fluxes to estimate the variation of nuclide densities at EOC. Sensitivity analyses of CVR and eigenvalue are included in the study. In addition the optimization and adjoint sensitivity techniques are applied to the CBCVR-BOC and keff-EOC adjustment of the ACR lattices with Gadolinium in the central pin. Finally we apply these techniques to the CVR
NASA Technical Reports Server (NTRS)
Cheng, H. K.; Wong, Eric Y.; Dogra, V. K.
1991-01-01
Grad's thirteen-moment equations are applied to the flow behind a bow shock under the formalism of a thin shock layer. Comparison of this version of the theory with Direct Simulation Monte Carlo calculations of flows about a flat plate at finite attack angle has lent support to the approach as a useful extension of the continuum model for studying translational nonequilibrium in the shock layer. This paper reassesses the physical basis and limitations of the development with additional calculations and comparisons. The streamline correlation principle, which allows transformation of the 13-moment based system to one based on the Navier-Stokes equations, is extended to a three-dimensional formulation. The development yields a strip theory for planar lifting surfaces at finite incidences. Examples reveal that the lift-to-drag ratio is little influenced by planform geometry and varies with altitudes according to a 'bridging function' determined by correlated two-dimensional calculations.
NASA Astrophysics Data System (ADS)
Jin, J.; Wang, Y.
2017-12-01
Ecosystem-scale water-use efficiency (EWUE), defined as the ratio of gross primary productivity (GPP) to evapotranspiration (ET), is an important indicator for understanding how water couples with the carbon cycle under global change. Relationships between EWUE and abiotic environmental factors (e.g. climatic factors, atmospheric CO2concentration and nitrogen deposition) have been widely investigated, but the variations in EWUE in response to biotic controls remain little understood. Here, we argue that phenology plays an important role in the regulation of EWUE by analyzing springtime EWUE responses to variability of the GPP-based vegetation activity onset (VAO) in temperate and boreal ecosystems using both satellite and flux-tower observations. Based on MODIS productions during 2000-2014, we found that spring EWUE widely significantly increased with the earlier VAO mainly in the mid- and high latitudes (over 50°N), southwestern China and mid-western North America. When AVO advanced a 10-day, the spring EWUE would increase on average by 0.17±0.09 g C kg-1 H2O in temperate and continental climates after removing the effect of environmental factors. The main response patterns of EWUE to phenology suggest that an increase in spring EWUE with an earlier VAO are mainly because the increase in GPP is relatively larger in magnitude compared to that of ET, or due to an increase in GPP accompanied by a decrease in ET, resulting from an advanced VAO. The credibility of the results is also supported by the local-scale observations. By analyzing 66 site-years of flux and meteorological data obtained from 8 temperate deciduous broadleaf forest sites across North America and Europe, spring EWUE increased 0.42±0.08 g C kg-1 H2O with a 10-day advance of VAO across all sites after controlling for environmental factors, mainly because an earlier VAO could lead to a steeper increase in GPP than in ET. Our results and conclusions highlight that phenological factors cannot be
Earlier predictors of eating disorder symptoms in 9-year-old children. A longitudinal study.
Parkinson, Kathryn N; Drewett, Robert F; Le Couteur, Ann S; Adamson, Ashley J
2012-08-01
The aim of the study was to examine predictors of eating disorder symptoms in a population based sample at the earliest age at which they can be measured using the Children's Eating Attitudes Test. Data were collected from the longitudinal Gateshead Millennium Study cohort; 609 children participated in the 7 year data sweep (and their mothers and teachers), and 589 children participated in the 9 year data sweep. Eating disorder symptoms at 9 years were higher in boys, and in children from more deprived families. Higher eating disorder symptoms were associated with more body dissatisfaction at 9 years. Higher symptoms were predicted by higher levels of dietary restraint and of emotional symptoms, but not greater body dissatisfaction, 2 years earlier. The study showed that some correlates of high eating disorder symptoms found in adolescents and adults are also found in children, before the rise in diagnosable eating disorders over the pubertal period. Copyright © 2012 Elsevier Ltd. All rights reserved.
Navier-Stokes calculations on multi-element airfoils using a chimera-based solver
NASA Technical Reports Server (NTRS)
Jasper, Donald W.; Agrawal, Shreekant; Robinson, Brian A.
1993-01-01
A study of Navier-Stokes calculations of flows about multielement airfoils using a chimera grid approach is presented. The chimera approach utilizes structured, overlapped grids which allow great flexibility of grid arrangement and simplifies grid generation. Calculations are made for two-, three-, and four-element airfoils, and modeling of the effect of gap distance between elements is demonstrated for a two element case. Solutions are obtained using the thin-layer form of the Reynolds averaged Navier-Stokes equations with turbulence closure provided by the Baldwin-Lomax algebraic model or the Baldwin-Barth one equation model. The Baldwin-Barth turbulence model is shown to provide better agreement with experimental data and to dramatically improve convergence rates for some cases. Recently developed, improved farfield boundary conditions are incorporated into the solver for greater efficiency. Computed results show good comparison with experimental data which include aerodynamic forces, surface pressures, and boundary layer velocity profiles.
Code of Federal Regulations, 2010 CFR
2010-10-01
... patient utilization calendar year as identified from Medicare claims is calendar year 2007. (4) Wage index... calculating the per-treatment base rate for 2011 are as follows: (1) Per patient utilization in CY 2007, 2008..., 2008 or 2009 to determine the year with the lowest per patient utilization. (2) Update of per treatment...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false What do I need to know about the base denomination for redemption value calculations? 351.16 Section 351.16 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL SERVICE, DEPARTMENT OF THE TREASURY BUREAU OF THE PUBLIC...
Li, Haibin; He, Yun; Nie, Xiaobo
2018-01-01
Structural reliability analysis under uncertainty is paid wide attention by engineers and scholars due to reflecting the structural characteristics and the bearing actual situation. The direct integration method, started from the definition of reliability theory, is easy to be understood, but there are still mathematics difficulties in the calculation of multiple integrals. Therefore, a dual neural network method is proposed for calculating multiple integrals in this paper. Dual neural network consists of two neural networks. The neural network A is used to learn the integrand function, and the neural network B is used to simulate the original function. According to the derivative relationships between the network output and the network input, the neural network B is derived from the neural network A. On this basis, the performance function of normalization is employed in the proposed method to overcome the difficulty of multiple integrations and to improve the accuracy for reliability calculations. The comparisons between the proposed method and Monte Carlo simulation method, Hasofer-Lind method, the mean value first-order second moment method have demonstrated that the proposed method is an efficient and accurate reliability method for structural reliability problems.
An investigation of using an RQP based method to calculate parameter sensitivity derivatives
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1989-01-01
Estimation of the sensitivity of problem functions with respect to problem variables forms the basis for many of our modern day algorithms for engineering optimization. The most common application of problem sensitivities has been in the calculation of objective function and constraint partial derivatives for determining search directions and optimality conditions. A second form of sensitivity analysis, parameter sensitivity, has also become an important topic in recent years. By parameter sensitivity, researchers refer to the estimation of changes in the modeling functions and current design point due to small changes in the fixed parameters of the formulation. Methods for calculating these derivatives have been proposed by several authors (Armacost and Fiacco 1974, Sobieski et al 1981, Schmit and Chang 1984, and Vanderplaats and Yoshida 1985). Two drawbacks to estimating parameter sensitivities by current methods have been: (1) the need for second order information about the Lagrangian at the current point, and (2) the estimates assume no change in the active set of constraints. The first of these two problems is addressed here and a new algorithm is proposed that does not require explicit calculation of second order information.
NASA Astrophysics Data System (ADS)
Essa, Mohammed Sh.; Chiad, Bahaa T.; Hussein, Khalil A.
2018-05-01
Chemical thermal deposition techniques are highly depending on deposition platform temperature as well as surface substrate temperatures, so in this research thermal distribution and heat transfer was calculated to optimize the deposition platform temperature distribution, determine the power required for the heating element, to improve thermal homogeneity. Furthermore, calculate the dissipated thermal power from the deposition platform. Moreover, the thermal imager (thermal camera) was used to estimate the thermal destitution in addition to, the temperature allocation over 400cm2 heated plate area. In order to reach a plate temperature at 500 oC, a plate supported with an electrical heater of power (2000 W). Stainless steel plate of 12mm thickness was used as a heated plate and deposition platform and subjected to lab tests using element analyzer X-ray fluorescence system (XRF) to check its elemental composition and found the grade of stainless steel and found to be 316 L. The total heat losses calculated at this temperature was 612 W. Homemade heating element was used to heat the plate and can reach 450 oC with less than 15 min as recorded from the system.as well as the temperatures recorded and monitored using Arduino/UNO microcontroller with cold-junction-compensated K-thermocouple-to-digital converter type MAX6675.
Comparison of lysimeter based and calculated ASCE reference evapotranspiration in a subhumid climate
NASA Astrophysics Data System (ADS)
Nolz, Reinhard; Cepuder, Peter; Eitzinger, Josef
2016-04-01
The standardized form of the well-known FAO Penman-Monteith equation, published by the Environmental and Water Resources Institute of the American Society of Civil Engineers (ASCE-EWRI), is recommended as a standard procedure for calculating reference evapotranspiration (ET ref) and subsequently plant water requirements. Applied and validated under different climatic conditions it generally achieved good results compared to other methods. However, several studies documented deviations between measured and calculated reference evapotranspiration depending on environmental and weather conditions. Therefore, it seems generally advisable to evaluate the model under local environmental conditions. In this study, reference evapotranspiration was determined at a subhumid site in northeastern Austria from 2005 to 2010 using a large weighing lysimeter (ET lys). The measured data were compared with ET ref calculations. Daily values differed slightly during a year, at which ET ref was generally overestimated at small values, whereas it was rather underestimated when ET was large, which is supported also by other studies. In our case, advection of sensible heat proved to have an impact, but it could not explain the differences exclusively. Obviously, there were also other influences, such as seasonal varying surface resistance or albedo. Generally, the ASCE-EWRI equation for daily time steps performed best at average weather conditions. The outcomes should help to correctly interpret ET ref data in the region and in similar environments and improve knowledge on the dynamics of influencing factors causing deviations.
An investigation of voxel geometries for MCNP-based radiation dose calculations.
Zhang, Juying; Bednarz, Bryan; Xu, X George
2006-11-01
Voxelized geometry such as those obtained from medical images is increasingly used in Monte Carlo calculations of absorbed doses. One useful application of calculated absorbed dose is the determination of fluence-to-dose conversion factors for different organs. However, confusion still exists about how such a geometry is defined and how the energy deposition is best computed, especially involving a popular code, MCNP5. This study investigated two different types of geometries in the MCNP5 code, cell and lattice definitions. A 10 cm x 10 cm x 10 cm test phantom, which contained an embedded 2 cm x 2 cm x 2 cm target at its center, was considered. A planar source emitting parallel photons was also considered in the study. The results revealed that MCNP5 does not calculate total target volume for multi-voxel geometries. Therefore, tallies which involve total target volume must be divided by the user by the total number of voxels to obtain a correct dose result. Also, using planar source areas greater than the phantom size results in the same fluence-to-dose conversion factor.
NASA Astrophysics Data System (ADS)
Ying, Zhang; Zhengqiang, Li; Yan, Wang
2014-03-01
Anthropogenic aerosols are released into the atmosphere, which cause scattering and absorption of incoming solar radiation, thus exerting a direct radiative forcing on the climate system. Anthropogenic Aerosol Optical Depth (AOD) calculations are important in the research of climate changes. Accumulation-Mode Fractions (AMFs) as an anthropogenic aerosol parameter, which are the fractions of AODs between the particulates with diameters smaller than 1μm and total particulates, could be calculated by AOD spectral deconvolution algorithm, and then the anthropogenic AODs are obtained using AMFs. In this study, we present a parameterization method coupled with an AOD spectral deconvolution algorithm to calculate AMFs in Beijing over 2011. All of data are derived from AErosol RObotic NETwork (AERONET) website. The parameterization method is used to improve the accuracies of AMFs compared with constant truncation radius method. We find a good correlation using parameterization method with the square relation coefficient of 0.96, and mean deviation of AMFs is 0.028. The parameterization method could also effectively solve AMF underestimate in winter. It is suggested that the variations of Angstrom indexes in coarse mode have significant impacts on AMF inversions.
Earlier parental set bedtimes as a protective factor against depression and suicidal ideation.
Gangwisch, James E; Babiss, Lindsay A; Malaspina, Dolores; Turner, J Blake; Zammit, Gary K; Posner, Kelly
2010-01-01
To examine the relationships between parental set bedtimes, sleep duration, and depression as a quasi-experiment to explore the potentially bidirectional relationship between short sleep duration and depression. Short sleep duration has been shown to precede depression, but this could be explained as a prodromal symptom of depression. Depression in an adolescent can affect his/her chosen bedtime, but it is less likely to affect a parent's chosen set bedtime which can establish a relatively stable upper limit that can directly affect sleep duration. Multivariate cross-sectional analyses of the ADD Health using logistic regression. United States nationally representative, school-based, probability-based sample in 1994-96. Adolescents (n = 15,659) in grades 7 to 12. Adolescents with parental set bedtimes of midnight or later were 24% more likely to suffer from depression (OR = 1.24, 95% CI 1.04-1.49) and 20% more likely to have suicidal ideation (1.20, 1.01-1.41) than adolescents with parental set bedtimes of 10:00 PM or earlier, after controlling for covariates. Consistent with sleep duration and perception of getting enough sleep acting as mediators, the inclusion of these variables in the multivariate models appreciably attenuated the associations for depression (1.07, 0.88-1.30) and suicidal ideation (1.09, 0.92-1.29). The results from this study provide new evidence to strengthen the argument that short sleep duration could play a role in the etiology of depression. Earlier parental set bedtimes could therefore be protective against adolescent depression and suicidal ideation by lengthening sleep duration.
Nolan, Tom; Dack, Charlotte; Pal, Kingshuk; Ross, Jamie; Stevenson, Fiona A; Peacock, Richard; Pearson, Mike; Spiegelhalter, David; Sweeting, Michael; Murray, Elizabeth
2015-03-01
Use of risk calculators for specific diseases is increasing, with an underlying assumption that they promote risk reduction as users become better informed and motivated to take preventive action. Empirical data to support this are, however, sparse and contradictory. To explore user reactions to a cardiovascular risk calculator for people with type 2 diabetes. Objectives were to identify cognitive and emotional reactions to the presentation of risk, with a view to understanding whether and how such a calculator could help motivate users to adopt healthier behaviours and/or improve adherence to medication. Qualitative study combining data from focus groups and individual user experience. Adults with type 2 diabetes were recruited through website advertisements and posters displayed at local GP practices and diabetes groups. Participants used a risk calculator that provided individualised estimates of cardiovascular risk. Estimates were based on UK Prospective Diabetes Study (UKPDS) data, supplemented with data from trials and systematic reviews. Risk information was presented using natural frequencies, visual displays, and a range of formats. Data were recorded and transcribed, then analysed by a multidisciplinary group. Thirty-six participants contributed data. Users demonstrated a range of complex cognitive and emotional responses, which might explain the lack of change in health behaviours demonstrated in the literature. Cardiovascular risk calculators for people with diabetes may best be used in conjunction with health professionals who can guide the user through the calculator and help them use the resulting risk information as a source of motivation and encouragement. © British Journal of General Practice 2015.
NASA Astrophysics Data System (ADS)
Ananthakrishna, G.; K, Srikanth
2018-03-01
It is well known that plastic deformation is a highly nonlinear dissipative irreversible phenomenon of considerable complexity. As a consequence, little progress has been made in modeling some well-known size-dependent properties of plastic deformation, for instance, calculating hardness as a function of indentation depth independently. Here, we devise a method of calculating hardness by calculating the residual indentation depth and then calculate the hardness as the ratio of the load to the residual imprint area. Recognizing the fact that dislocations are the basic defects controlling the plastic component of the indentation depth, we set up a system of coupled nonlinear time evolution equations for the mobile, forest, and geometrically necessary dislocation densities. Within our approach, we consider the geometrically necessary dislocations to be immobile since they contribute to additional hardness. The model includes dislocation multiplication, storage, and recovery mechanisms. The growth of the geometrically necessary dislocation density is controlled by the number of loops that can be activated under the contact area and the mean strain gradient. The equations are then coupled to the load rate equation. Our approach has the ability to adopt experimental parameters such as the indentation rates, the geometrical parameters defining the Berkovich indenter, including the nominal tip radius. The residual indentation depth is obtained by integrating the Orowan expression for the plastic strain rate, which is then used to calculate the hardness. Consistent with the experimental observations, the increasing hardness with decreasing indentation depth in our model arises from limited dislocation sources at small indentation depths and therefore avoids divergence in the limit of small depths reported in the Nix-Gao model. We demonstrate that for a range of parameter values that physically represent different materials, the model predicts the three characteristic
TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations
Schuemann, J; Grassberger, C; Paganetti, H
2014-06-15
Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencilbeam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50)more » were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated and the root mean square differences (RMSD), average range difference (ARD) and average distal dose degradation (ADD), the distance between the distal position of the 80% and 20% dose levels (R80- R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1–2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patients sites (liver, prostate, whole brain). However, we
NASA Astrophysics Data System (ADS)
Chandran, Mahesh; Lee, S. C.; Shim, Jae-Hyeok
2018-02-01
A disordered configuration of atoms in a multicomponent solid solution presents a computational challenge for first-principles calculations using density functional theory (DFT). The challenge is in identifying the few probable (low energy) configurations from a large configurational space before DFT calculation can be performed. The search for these probable configurations is possible if the configurational energy E({\\boldsymbol{σ }}) can be calculated accurately and rapidly (with a negligibly small computational cost). In this paper, we demonstrate such a possibility by constructing a machine learning (ML) model for E({\\boldsymbol{σ }}) trained with DFT-calculated energies. The feature vector for the ML model is formed by concatenating histograms of pair and triplet (only equilateral triangle) correlation functions, {g}(2)(r) and {g}(3)(r,r,r), respectively. These functions are a quantitative ‘fingerprint’ of the spatial arrangement of atoms, familiar in the field of amorphous materials and liquids. The ML model is used to generate an accurate distribution P(E({\\boldsymbol{σ }})) by rapidly spanning a large number of configurations. The P(E) contains full configurational information of the solid solution and can be selectively sampled to choose a few configurations for targeted DFT calculations. This new framework is employed to estimate (100) interface energy ({σ }{{IE}}) between γ and γ \\prime at 700 °C in Alloy 617, a Ni-based superalloy, with composition reduced to five components. The estimated {σ }{{IE}} ≈ 25.95 mJ m-2 is in good agreement with the value inferred by the precipitation model fit to experimental data. The proposed new ML-based ab initio framework can be applied to calculate the parameters and properties of alloys with any number of components, thus widening the reach of first-principles calculation to realistic compositions of industrially relevant materials and alloys.
Georg, Dietmar; Stock, Markus; Kroupa, Bernhard; Olofsson, Jörgen; Nyholm, Tufve; Ahnesjö, Anders; Karlsson, Mikael
2007-08-21
Experimental methods are commonly used for patient-specific intensity-modulated radiotherapy (IMRT) verification. The purpose of this study was to investigate the accuracy and performance of independent dose calculation software (denoted as 'MUV' (monitor unit verification)) for patient-specific quality assurance (QA). 52 patients receiving step-and-shoot IMRT were considered. IMRT plans were recalculated by the treatment planning systems (TPS) in a dedicated QA phantom, in which an experimental 1D and 2D verification (0.3 cm(3) ionization chamber; films) was performed. Additionally, an independent dose calculation was performed. The fluence-based algorithm of MUV accounts for collimator transmission, rounded leaf ends, tongue-and-groove effect, backscatter to the monitor chamber and scatter from the flattening filter. The dose calculation utilizes a pencil beam model based on a beam quality index. DICOM RT files from patient plans, exported from the TPS, were directly used as patient-specific input data in MUV. For composite IMRT plans, average deviations in the high dose region between ionization chamber measurements and point dose calculations performed with the TPS and MUV were 1.6 +/- 1.2% and 0.5 +/- 1.1% (1 S.D.). The dose deviations between MUV and TPS slightly depended on the distance from the isocentre position. For individual intensity-modulated beams (total 367), an average deviation of 1.1 +/- 2.9% was determined between calculations performed with the TPS and with MUV, with maximum deviations up to 14%. However, absolute dose deviations were mostly less than 3 cGy. Based on the current results, we aim to apply a confidence limit of 3% (with respect to the prescribed dose) or 6 cGy for routine IMRT verification. For off-axis points at distances larger than 5 cm and for low dose regions, we consider 5% dose deviation or 10 cGy acceptable. The time needed for an independent calculation compares very favourably with the net time for an experimental
Wang, Fei; Zhen, Zhao; Liu, Chun
Irradiance received on the earth's surface is the main factor that affects the output power of solar PV plants, and is chiefly determined by the cloud distribution seen in a ground-based sky image at the corresponding moment in time. It is the foundation for those linear extrapolation-based ultra-short-term solar PV power forecasting approaches to obtain the cloud distribution in future sky images from the accurate calculation of cloud motion displacement vectors (CMDVs) by using historical sky images. Theoretically, the CMDV can be obtained from the coordinate of the peak pulse calculated from a Fourier phase correlation theory (FPCT) method throughmore » the frequency domain information of sky images. The peak pulse is significant and unique only when the cloud deformation between two consecutive sky images is slight enough, which is likely possible for a very short time interval (such as 1?min or shorter) with common changes in the speed of cloud. Sometimes, there will be more than one pulse with similar values when the deformation of the clouds between two consecutive sky images is comparatively obvious under fast changing cloud speeds. This would probably lead to significant errors if the CMDVs were still only obtained from the single coordinate of the peak value pulse. However, the deformation estimation of clouds between two images and its influence on FPCT-based CMDV calculations are terrifically complex and difficult because the motion of clouds is complicated to describe and model. Therefore, to improve the accuracy and reliability under these circumstances in a simple manner, an image-phase-shift-invariance (IPSI) based CMDV calculation method using FPCT is proposed for minute time scale solar power forecasting. First, multiple different CMDVs are calculated from the corresponding consecutive images pairs obtained through different synchronous rotation angles compared to the original images by using the FPCT method. Second, the final CMDV is generated
Wang, Fei; Zhen, Zhao; Liu, Chun; ...
2017-12-18
Irradiance received on the earth's surface is the main factor that affects the output power of solar PV plants, and is chiefly determined by the cloud distribution seen in a ground-based sky image at the corresponding moment in time. It is the foundation for those linear extrapolation-based ultra-short-term solar PV power forecasting approaches to obtain the cloud distribution in future sky images from the accurate calculation of cloud motion displacement vectors (CMDVs) by using historical sky images. Theoretically, the CMDV can be obtained from the coordinate of the peak pulse calculated from a Fourier phase correlation theory (FPCT) method throughmore » the frequency domain information of sky images. The peak pulse is significant and unique only when the cloud deformation between two consecutive sky images is slight enough, which is likely possible for a very short time interval (such as 1?min or shorter) with common changes in the speed of cloud. Sometimes, there will be more than one pulse with similar values when the deformation of the clouds between two consecutive sky images is comparatively obvious under fast changing cloud speeds. This would probably lead to significant errors if the CMDVs were still only obtained from the single coordinate of the peak value pulse. However, the deformation estimation of clouds between two images and its influence on FPCT-based CMDV calculations are terrifically complex and difficult because the motion of clouds is complicated to describe and model. Therefore, to improve the accuracy and reliability under these circumstances in a simple manner, an image-phase-shift-invariance (IPSI) based CMDV calculation method using FPCT is proposed for minute time scale solar power forecasting. First, multiple different CMDVs are calculated from the corresponding consecutive images pairs obtained through different synchronous rotation angles compared to the original images by using the FPCT method. Second, the final CMDV is generated
Δg: The new aromaticity index based on g-factor calculation applied for polycyclic benzene rings
NASA Astrophysics Data System (ADS)
Ucun, Fatih; Tokatlı, Ahmet
2015-02-01
In this work, the aromaticity of polycyclic benzene rings was evaluated by the calculation of g-factor for a hydrogen placed perpendicularly at geometrical center of related ring plane at a distance of 1.2 Å. The results have compared with the other commonly used aromatic indices, such as HOMA, NICSs, PDI, FLU, MCI, CTED and, generally been found to be in agreement with them. So, it was proposed that the calculation of the average g-factor as Δg could be applied to study the aromaticity of polycyclic benzene rings without any restriction in the number of benzene rings as a new magnetic-based aromaticity index.
Recent Progress in GW-based Methods for Excited-State Calculations of Reduced Dimensional Systems
NASA Astrophysics Data System (ADS)
da Jornada, Felipe H.
2015-03-01
Ab initio calculations of excited-state phenomena within the GW and GW-Bethe-Salpeter equation (GW-BSE) approaches allow one to accurately study the electronic and optical properties of various materials, including systems with reduced dimensionality. However, several challenges arise when dealing with complicated nanostructures where the electronic screening is strongly spatially and directionally dependent. In this talk, we discuss some recent developments to address these issues. First, we turn to the slow convergence of quasiparticle energies and exciton binding energies with respect to k-point sampling. This is very effectively dealt with using a new hybrid sampling scheme, which results in savings of several orders of magnitude in computation time. A new ab initio method is also developed to incorporate substrate screening into GW and GW-BSE calculations. These two methods have been applied to mono- and few-layer MoSe2, and yielded strong environmental dependent behaviors in good agreement with experiment. Other issues that arise in confined systems and materials with reduced dimensionality, such as the effect of the Tamm-Dancoff approximation to GW-BSE, and the calculation of non-radiative exciton lifetime, are also addressed. These developments have been efficiently implemented and successfully applied to real systems in an ab initio framework using the BerkeleyGW package. I would like to acknowledge collaborations with Diana Y. Qiu, Steven G. Louie, Meiyue Shao, Chao Yang, and the experimental groups of M. Crommie and F. Wang. This work was supported by Department of Energy under Contract No. DE-AC02-05CH11231 and by National Science Foundation under Grant No. DMR10-1006184.
Liu, Jing-yong; Huang, Shu-jie; Sun, Shui-yu; Ning, Xun-an; He, Rui-zhe; Li, Xiao-ming; Chen, Tao; Luo, Guang-qian; Xie, Wu-ming; Wang, Yu-Jie; Zhuo, Zhong-xu; Fu, Jie-wen
2015-04-01
Experiments in a tubular furnace reactor and thermodynamic equilibrium calculations were conducted to investigate the impact of sulfur compounds on the migration of lead (Pb) during sludge incineration. Representative samples of typical sludge with and without the addition of sulfur compounds were combusted at 850 °C, and the partitioning of Pb in the solid phase (bottom ash) and gas phase (fly ash and flue gas) was quantified. The results indicate that three types of sulfur compounds (S, Na2S and Na2SO4) added to the sludge could facilitate the volatilization of Pb in the gas phase (fly ash and flue gas) into metal sulfates displacing its sulfides and some of its oxides. The effect of promoting Pb volatilization by adding Na2SO4 and Na2S was superior to that of the addition of S. In bottom ash, different metallic sulfides were found in the forms of lead sulfide, aluminosilicate minerals, and polymetallic-sulfides, which were minimally volatilized. The chemical equilibrium calculations indicated that sulfur stabilizes Pb in the form of PbSO4(s) at low temperatures (<1000 K). The equilibrium calculation prediction also suggested that SiO2, CaO, TiO2, and Al2O3 containing materials function as condensed phase solids in the temperature range of 800-1100 K as sorbents to stabilize Pb. However, in the presence of sulfur or chlorine or the co-existence of sulfur and chlorine, these sorbents were inactive. The effect of sulfur on Pb partitioning in the sludge incineration process mainly depended on the gas phase reaction, the surface reaction, the volatilization of products, and the concentration of Si, Ca and Al-containing compounds in the sludge. These findings provide useful information for understanding the partitioning behavior of Pb, facilitating the development of strategies to control the volatilization of Pb during sludge incineration. Copyright © 2014 Elsevier Ltd. All rights reserved.
Lattice dynamics calculations based on density-functional perturbation theory in real space
NASA Astrophysics Data System (ADS)
Shang, Honghui; Carbogno, Christian; Rinke, Patrick; Scheffler, Matthias
2017-06-01
A real-space formalism for density-functional perturbation theory (DFPT) is derived and applied for the computation of harmonic vibrational properties in molecules and solids. The practical implementation using numeric atom-centered orbitals as basis functions is demonstrated exemplarily for the all-electron Fritz Haber Institute ab initio molecular simulations (FHI-aims) package. The convergence of the calculations with respect to numerical parameters is carefully investigated and a systematic comparison with finite-difference approaches is performed both for finite (molecules) and extended (periodic) systems. Finally, the scaling tests and scalability tests on massively parallel computer systems demonstrate the computational efficiency.
A Microsoft Excel® 2010 Based Tool for Calculating Interobserver Agreement
Azulay, Richard L
2011-01-01
This technical report provides detailed information on the rationale for using a common computer spreadsheet program (Microsoft Excel®) to calculate various forms of interobserver agreement for both continuous and discontinuous data sets. In addition, we provide a brief tutorial on how to use an Excel spreadsheet to automatically compute traditional total count, partial agreement-within-intervals, exact agreement, trial-by-trial, interval-by-interval, scored-interval, unscored-interval, total duration, and mean duration-per-interval interobserver agreement algorithms. We conclude with a discussion of how practitioners may integrate this tool into their clinical work. PMID:22649578
A microsoft excel(®) 2010 based tool for calculating interobserver agreement.
Reed, Derek D; Azulay, Richard L
2011-01-01
This technical report provides detailed information on the rationale for using a common computer spreadsheet program (Microsoft Excel(®)) to calculate various forms of interobserver agreement for both continuous and discontinuous data sets. In addition, we provide a brief tutorial on how to use an Excel spreadsheet to automatically compute traditional total count, partial agreement-within-intervals, exact agreement, trial-by-trial, interval-by-interval, scored-interval, unscored-interval, total duration, and mean duration-per-interval interobserver agreement algorithms. We conclude with a discussion of how practitioners may integrate this tool into their clinical work.
Calculations of kaonic nuclei based on chiral meson-baryon amplitudes
NASA Astrophysics Data System (ADS)
Gazda, Daniel; Mareš, Jiří
2013-09-01
In-medium KbarN scattering amplitudes developed within a chirally motivated coupled-channel model are used to construct K- nuclear potentials for calculations of K- nuclear quasi-bound states. Self-consistent evaluations yield K- potential depths -Re VK(ρ0) of order 100 MeV. Dynamical polarization effects and two-nucleon KbarNN→YN absorption modes are discussed. The widths ΓK of allK- nuclear quasi-bound states are comparable or even larger than the corresponding binding energies BK, exceeding considerably the energy level spacing.
Liu, Jing-yong, E-mail: www053991@126.com; Huang, Shu-jie; Sun, Shui-yu
2015-04-15
Highlights: • A thermodynamic equilibrium calculation was carried out. • Effects of three types of sulfurs on Pb distribution were investigated. • The mechanism for three types of sulfurs acting on Pb partitioning were proposed. • Lead partitioning and species in bottom ash and fly ash were identified. - Abstract: Experiments in a tubular furnace reactor and thermodynamic equilibrium calculations were conducted to investigate the impact of sulfur compounds on the migration of lead (Pb) during sludge incineration. Representative samples of typical sludge with and without the addition of sulfur compounds were combusted at 850 °C, and the partitioning ofmore » Pb in the solid phase (bottom ash) and gas phase (fly ash and flue gas) was quantified. The results indicate that three types of sulfur compounds (S, Na{sub 2}S and Na{sub 2}SO{sub 4}) added to the sludge could facilitate the volatilization of Pb in the gas phase (fly ash and flue gas) into metal sulfates displacing its sulfides and some of its oxides. The effect of promoting Pb volatilization by adding Na{sub 2}SO{sub 4} and Na{sub 2}S was superior to that of the addition of S. In bottom ash, different metallic sulfides were found in the forms of lead sulfide, aluminosilicate minerals, and polymetallic-sulfides, which were minimally volatilized. The chemical equilibrium calculations indicated that sulfur stabilizes Pb in the form of PbSO{sub 4}(s) at low temperatures (<1000 K). The equilibrium calculation prediction also suggested that SiO{sub 2}, CaO, TiO{sub 2}, and Al{sub 2}O{sub 3} containing materials function as condensed phase solids in the temperature range of 800–1100 K as sorbents to stabilize Pb. However, in the presence of sulfur or chlorine or the co-existence of sulfur and chlorine, these sorbents were inactive. The effect of sulfur on Pb partitioning in the sludge incineration process mainly depended on the gas phase reaction, the surface reaction, the volatilization of products, and
Almatani, Turki; Hugtenburg, Richard P; Lewis, Ryan D; Barley, Susan E; Edwards, Mark A
2016-10-01
Cone beam CT (CBCT) images contain more scatter than a conventional CT image and therefore provide inaccurate Hounsfield units (HUs). Consequently, CBCT images cannot be used directly for radiotherapy dose calculation. The aim of this study is to enable dose calculations to be performed with the use of CBCT images taken during radiotherapy and evaluate the necessity of replanning. A patient with prostate cancer with bilateral metallic prosthetic hip replacements was imaged using both CT and CBCT. The multilevel threshold (MLT) algorithm was used to categorize pixel values in the CBCT images into segments of homogeneous HU. The variation in HU with position in the CBCT images was taken into consideration. This segmentation method relies on the operator dividing the CBCT data into a set of volumes where the variation in the relationship between pixel values and HUs is small. An automated MLT algorithm was developed to reduce the operator time associated with the process. An intensity-modulated radiation therapy plan was generated from CT images of the patient. The plan was then copied to the segmented CBCT (sCBCT) data sets with identical settings, and the doses were recalculated and compared. Gamma evaluation showed that the percentage of points in the rectum with γ < 1 (3%/3 mm) were 98.7% and 97.7% in the sCBCT using MLT and the automated MLT algorithms, respectively. Compared with the planning CT (pCT) plan, the MLT algorithm showed -0.46% dose difference with 8 h operator time while the automated MLT algorithm showed -1.3%, which are both considered to be clinically acceptable, when using collapsed cone algorithm. The segmentation of CBCT images using the method in this study can be used for dose calculation. For a patient with prostate cancer with bilateral hip prostheses and the associated issues with CT imaging, the MLT algorithms achieved a sufficient dose calculation accuracy that is clinically acceptable. The automated MLT algorithm reduced the
NASA Technical Reports Server (NTRS)
Susko, M.; Hill, C. K.; Kaufman, J. W.
1974-01-01
The quantitative estimates are presented of pollutant concentrations associated with the emission of the major combustion products (HCl, CO, and Al2O3) to the lower atmosphere during normal launches of the space shuttle. The NASA/MSFC Multilayer Diffusion Model was used to obtain these calculations. Results are presented for nine sets of typical meteorological conditions at Kennedy Space Center, including fall, spring, and a sea-breeze condition, and six sets at Vandenberg AFB. In none of the selected typical meteorological regimes studied was a 10-min limit of 4 ppm exceeded.
Heightened odds of large earthquakes near Istanbul: an interaction-based probability calculation
Parsons, T.; Toda, S.; Stein, R.S.; Barka, A.; Dieterich, J.H.
2000-01-01
We calculate the probability of strong shaking in Istanbul, an urban center of 10 million people, from the description of earthquakes on the North Anatolian fault system in the Marmara Sea during the past 500 years and test the resulting catalog against the frequency of damage in Istanbul during the preceding millennium, departing from current practice, we include the time-dependent effect of stress transferred by the 1999 moment magnitude M = 7.4 Izmit earthquake to faults nearer to Istanbul. We find a 62 ± 15% probability (one standard deviation) of strong shaking during the next 30 years and 32 ± 12% during the next decade.
NASA Technical Reports Server (NTRS)
Thottappillil, Rajeev; Uman, Martin A.; Diendorfer, Gerhard
1991-01-01
Compared here are the calculated fields of the Traveling Current Source (TCS), Modified Transmission Line (MTL), and the Diendorfer-Uman (DU) models with a channel base current assumed in Nucci et al. on the one hand and with the channel base current assumed in Diendorfer and Uman on the other hand. The characteristics of the field wave shapes are shown to be very sensitive to the channel base current, especially the field zero crossing at 100 km for the TCS and DU models, and the magnetic hump after the initial peak at close range for the TCS models. Also, the DU model is theoretically extended to include any arbitrarily varying return stroke speed with height. A brief discussion is presented on the effects of an exponentially decreasing speed with height on the calculated fields for the TCS, MTL, and DU models.
Martínez, G M; Rennó, N; Fischer, E; Borlina, C S; Hallet, B; de la Torre Juárez, M; Vasavada, A R; Ramos, M; Hamilton, V; Gomez-Elvira, J; Haberle, R M
2014-08-01
The analysis of the surface energy budget (SEB) yields insights into soil-atmosphere interactions and local climates, while the analysis of the thermal inertia ( I ) of shallow subsurfaces provides context for evaluating geological features. Mars orbital data have been used to determine thermal inertias at horizontal scales of ∼10 4 m 2 to ∼10 7 m 2 . Here we use measurements of ground temperature and atmospheric variables by Curiosity to calculate thermal inertias at Gale Crater at horizontal scales of ∼10 2 m 2 . We analyze three sols representing distinct environmental conditions and soil properties, sol 82 at Rocknest (RCK), sol 112 at Point Lake (PL), and sol 139 at Yellowknife Bay (YKB). Our results indicate that the largest thermal inertia I = 452 J m -2 K -1 s -1/2 (SI units used throughout this article) is found at YKB followed by PL with I = 306 and RCK with I = 295. These values are consistent with the expected thermal inertias for the types of terrain imaged by Mastcam and with previous satellite estimations at Gale Crater. We also calculate the SEB using data from measurements by Curiosity's Rover Environmental Monitoring Station and dust opacity values derived from measurements by Mastcam. The knowledge of the SEB and thermal inertia has the potential to enhance our understanding of the climate, the geology, and the habitability of Mars.
A Bayesian-Based EDA Tool for Nano-circuits Reliability Calculations
NASA Astrophysics Data System (ADS)
Ibrahim, Walid; Beiu, Valeriu
As the sizes of (nano-)devices are aggressively scaled deep into the nanometer range, the design and manufacturing of future (nano-)circuits will become extremely complex and inevitably will introduce more defects while their functioning will be adversely affected by transient faults. Therefore, accurately calculating the reliability of future designs will become a very important aspect for (nano-)circuit designers as they investigate several design alternatives to optimize the trade-offs between the conflicting metrics of area-power-energy-delay versus reliability. This paper introduces a novel generic technique for the accurate calculation of the reliability of future nano-circuits. Our aim is to provide both educational and research institutions (as well as the semiconductor industry at a later stage) with an accurate and easy to use tool for closely comparing the reliability of different design alternatives, and for being able to easily select the design that best fits a set of given (design) constraints. Moreover, the reliability model generated by the tool should empower designers with the unique opportunity of understanding the influence individual gates play on the design’s overall reliability, and identifying those (few) gates which impact the design’s reliability most significantly.
Martínez, G M; Rennó, N; Fischer, E; Borlina, C S; Hallet, B; de la Torre Juárez, M; Vasavada, A R; Ramos, M; Hamilton, V; Gomez-Elvira, J; Haberle, R M
2014-01-01
The analysis of the surface energy budget (SEB) yields insights into soil-atmosphere interactions and local climates, while the analysis of the thermal inertia (I) of shallow subsurfaces provides context for evaluating geological features. Mars orbital data have been used to determine thermal inertias at horizontal scales of ∼104 m2 to ∼107 m2. Here we use measurements of ground temperature and atmospheric variables by Curiosity to calculate thermal inertias at Gale Crater at horizontal scales of ∼102 m2. We analyze three sols representing distinct environmental conditions and soil properties, sol 82 at Rocknest (RCK), sol 112 at Point Lake (PL), and sol 139 at Yellowknife Bay (YKB). Our results indicate that the largest thermal inertia I = 452 J m−2 K−1 s−1/2 (SI units used throughout this article) is found at YKB followed by PL with I = 306 and RCK with I = 295. These values are consistent with the expected thermal inertias for the types of terrain imaged by Mastcam and with previous satellite estimations at Gale Crater. We also calculate the SEB using data from measurements by Curiosity's Rover Environmental Monitoring Station and dust opacity values derived from measurements by Mastcam. The knowledge of the SEB and thermal inertia has the potential to enhance our understanding of the climate, the geology, and the habitability of Mars. PMID:26213666
NASA Astrophysics Data System (ADS)
Giannoglou, V.; Stylianidis, E.
2016-06-01
Scoliosis is a 3D deformity of the human spinal column that is caused from the bending of the latter, causing pain, aesthetic and respiratory problems. This internal deformation is reflected in the outer shape of the human back. The golden standard for diagnosis and monitoring of scoliosis is the Cobb angle, which refers to the internal curvature of the trunk. This work is the first part of a post-doctoral research, presenting the most important researches that have been done in the field of scoliosis, concerning its digital visualisation, in order to provide a more precise and robust identification and monitoring of scoliosis. The research is divided in four fields, namely, the X-ray processing, the automatic Cobb angle(s) calculation, the 3D modelling of the spine that provides a more accurate representation of the trunk and the reduction of X-ray radiation exposure throughout the monitoring of scoliosis. Despite the fact that many researchers have been working on the field for the last decade at least, there is no reliable and universal tool to automatically calculate the Cobb angle(s) and successfully perform proper 3D modelling of the spinal column that would assist a more accurate detection and monitoring of scoliosis.
Quantifying the economic value and quality of life impact of earlier influenza vaccination.
Lee, Bruce Y; Bartsch, Sarah M; Brown, Shawn T; Cooley, Philip; Wheaton, William D; Zimmerman, Richard K
2015-03-01
Influenza vaccination is administered throughout the influenza disease season, even as late as March. Given such timing, what is the value of vaccinating the population earlier than currently being practiced? We used real data on when individuals were vaccinated in Allegheny County, Pennsylvania, and the following 2 models to determine the value of vaccinating individuals earlier (by the end of September, October, and November): Framework for Reconstructing Epidemiological Dynamics (FRED), an agent-based model (ABM), and FluEcon, our influenza economic model that translates cases from the ABM to outcomes and costs [health care and lost productivity costs and quality-adjusted life-years (QALYs)]. We varied the reproductive number (R0) from 1.2 to 1.6. Applying the current timing of vaccinations averted 223,761 influenza cases, $16.3 million in direct health care costs, $50.0 million in productivity losses, and 804 in QALYs, compared with no vaccination (February peak, R0 1.2). When the population does not have preexisting immunity and the influenza season peaks in February (R0 1.2-1.6), moving individuals who currently received the vaccine after September to the end of September could avert an additional 9634-17,794 influenza cases, $0.6-$1.4 million in direct costs, $2.1-$4.0 million in productivity losses, and 35-64 QALYs. Moving the vaccination of just children to September (R0 1.2-1.6) averted 11,366-1660 influenza cases, $0.6-$0.03 million in direct costs, $2.3-$0.2 million in productivity losses, and 42-8 QALYs. Moving the season peak to December increased these benefits, whereas increasing preexisting immunity reduced these benefits. Even though many people are vaccinated well after September/October, they likely are still vaccinated early enough to provide substantial cost-savings.
Aljasser, Faisal; Vitevitch, Michael S
2018-02-01
A number of databases (Storkel Behavior Research Methods, 45, 1159-1167, 2013) and online calculators (Vitevitch & Luce Behavior Research Methods, Instruments, and Computers, 36, 481-487, 2004) have been developed to provide statistical information about various aspects of language, and these have proven to be invaluable assets to researchers, clinicians, and instructors in the language sciences. The number of such resources for English is quite large and continues to grow, whereas the number of such resources for other languages is much smaller. This article describes the development of a Web-based interface to calculate phonotactic probability in Modern Standard Arabic (MSA). A full description of how the calculator can be used is provided. It can be freely accessed at http://phonotactic.drupal.ku.edu/ .
40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of FTP-based and HFET-based fuel economy values for a model type. 600.208-08 Section 600.208-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations fo...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation and use of FTP-based and HFET-based fuel economy values for vehicle configurations. 600.206-08 Section 600.206-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of FTP-based and HFET-based fuel economy and carbon-related exhaust emission values for a model type. 600.208-12 Section 600.208-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation and use of FTP-based and HFET-based fuel economy and carbon-related exhaust emission values for vehicle configurations. 600.206-12 Section 600.206-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST...
NASA Astrophysics Data System (ADS)
Fiorentini, Raffaele; Kremer, Kurt; Potestio, Raffaello; Fogarty, Aoife C.
2017-06-01
The calculation of free energy differences is a crucial step in the characterization and understanding of the physical properties of biological molecules. In the development of efficient methods to compute these quantities, a promising strategy is that of employing a dual-resolution representation of the solvent, specifically using an accurate model in the proximity of a molecule of interest and a simplified description elsewhere. One such concurrent multi-resolution simulation method is the Adaptive Resolution Scheme (AdResS), in which particles smoothly change their resolution on-the-fly as they move between different subregions. Before using this approach in the context of free energy calculations, however, it is necessary to make sure that the dual-resolution treatment of the solvent does not cause undesired effects on the computed quantities. Here, we show how AdResS can be used to calculate solvation free energies of small polar solutes using Thermodynamic Integration (TI). We discuss how the potential-energy-based TI approach combines with the force-based AdResS methodology, in which no global Hamiltonian is defined. The AdResS free energy values agree with those calculated from fully atomistic simulations to within a fraction of kBT. This is true even for small atomistic regions whose size is on the order of the correlation length, or when the properties of the coarse-grained region are extremely different from those of the atomistic region. These accurate free energy calculations are possible because AdResS allows the sampling of solvation shell configurations which are equivalent to those of fully atomistic simulations. The results of the present work thus demonstrate the viability of the use of adaptive resolution simulation methods to perform free energy calculations and pave the way for large-scale applications where a substantial computational gain can be attained.
NASA Astrophysics Data System (ADS)
Hammitzsch, M.; Spazier, J.; Reißland, S.
2014-12-01
Usually, tsunami early warning and mitigation systems (TWS or TEWS) are based on several software components deployed in a client-server based infrastructure. The vast majority of systems importantly include desktop-based clients with a graphical user interface (GUI) for the operators in early warning centers. However, in times of cloud computing and ubiquitous computing the use of concepts and paradigms, introduced by continuously evolving approaches in information and communications technology (ICT), have to be considered even for early warning systems (EWS). Based on the experiences and the knowledge gained in three research projects - 'German Indonesian Tsunami Early Warning System' (GITEWS), 'Distant Early Warning System' (DEWS), and 'Collaborative, Complex, and Critical Decision-Support in Evolving Crises' (TRIDEC) - new technologies are exploited to implement a cloud-based and web-based prototype to open up new prospects for EWS. This prototype, named 'TRIDEC Cloud', merges several complementary external and in-house cloud-based services into one platform for automated background computation with graphics processing units (GPU), for web-mapping of hazard specific geospatial data, and for serving relevant functionality to handle, share, and communicate threat specific information in a collaborative and distributed environment. The prototype in its current version addresses tsunami early warning and mitigation. The integration of GPU accelerated tsunami simulation computations have been an integral part of this prototype to foster early warning with on-demand tsunami predictions based on actual source parameters. However, the platform is meant for researchers around the world to make use of the cloud-based GPU computation to analyze other types of geohazards and natural hazards and react upon the computed situation picture with a web-based GUI in a web browser at remote sites. The current website is an early alpha version for demonstration purposes to give the
SU-F-J-109: Generate Synthetic CT From Cone Beam CT for CBCT-Based Dose Calculation
Wang, H; Barbee, D; Wang, W
Purpose: The use of CBCT for dose calculation is limited by its HU inaccuracy from increased scatter. This study presents a method to generate synthetic CT images from CBCT data by a probabilistic classification that may be robust to CBCT noise. The feasibility of using the synthetic CT for dose calculation is evaluated in IMRT for unilateral H&N cancer. Methods: In the training phase, a fuzzy c-means classification was performed on HU vectors (CBCT, CT) of planning CT and registered day-1 CBCT image pair. Using the resulting centroid CBCT and CT values for five classified “tissue” types, a synthetic CTmore » for a daily CBCT was created by classifying each CBCT voxel to obtain its probability belonging to each tissue class, then assigning a CT HU with a probability-weighted summation of the classes’ CT centroids. Two synthetic CTs from a CBCT were generated: s-CT using the centroids from classification of individual patient CBCT/CT data; s2-CT using the same centroids for all patients to investigate the applicability of group-based centroids. IMRT dose calculations for five patients were performed on the synthetic CTs and compared with CT-planning doses by dose-volume statistics. Results: DVH curves of PTVs and critical organs calculated on s-CT and s2-CT agree with those from planning-CT within 3%, while doses calculated with heterogeneity off or on raw CBCT show DVH differences up to 15%. The differences in PTV D95% and spinal cord max are 0.6±0.6% and 0.6±0.3% for s-CT, and 1.6±1.7% and 1.9±1.7% for s2-CT. Gamma analysis (2%/2mm) shows 97.5±1.6% and 97.6±1.6% pass rates for using s-CTs and s2-CTs compared with CT-based doses, respectively. Conclusion: CBCT-synthesized CTs using individual or group-based centroids resulted in dose calculations that are comparable to CT-planning dose for unilateral H&N cancer. The method may provide a tool for accurate dose calculation based on daily CBCT.« less
Calculation of the Strip Foundation on Solid Elastic Base, Taking into Account the Karst Collapse
NASA Astrophysics Data System (ADS)
Sharapov, R.; Lodigina, N.
2017-07-01
Karst processes greatly complicate the construction and operation of buildings and structures. Due to the karstic deformations at different times there have been several major accidents, which analysis showed that in all cases the fundamental errors committed at different stages of building development: site selection, engineering survey, design, construction or operation of the facilities. Theory analysis of beams on elastic foundation is essential in building practice. Specialist engineering facilities often have to resort to multiple designing in finding efficient forms of construction of these facilities. In work the calculation of stresses in cross-sections of the strip foundation evenly distributed load in the event of karst. A comparison of extreme stress in the event of karst and without accounting for the strip foundation as a beam on an elastic foundation.
Optimization of air plasma reconversion of UF6 to UO2 based on thermodynamic calculations
NASA Astrophysics Data System (ADS)
Tundeshev, Nikolay; Karengin, Alexander; Shamanin, Igor
2018-03-01
The possibility of plasma-chemical conversion of depleted uranium-235 hexafluoride (DUHF) in air plasma in the form of gas-air mixtures with hydrogen is considered in the paper. Calculation of burning parameters of gas-air mixtures is carried out and the compositions of mixtures obtained via energy-efficient conversion of DUHF in air plasma are determined. With the help of plasma-chemical conversion, thermodynamic modeling optimal composition of UF6-H2-Air mixtures and its burning parameters, the modes for production of uranium dioxide in the condensed phase are determined. The results of the conducted researches can be used for creation of technology for plasma-chemical conversion of DUHF in the form of air-gas mixtures with hydrogen.
First-principles calculations of CdS-based nanolayers and nanotubes
NASA Astrophysics Data System (ADS)
Bandura, A. V.; Kuruch, D. D.; Evarestov, R. A.
2018-05-01
The first-principles simulations using hybrid exchange-correlation density functional and localized atomic basis set were performed to investigate the properties of CdS nanolayers and nanotubes constructed from wurtzite and zinc blende phases. Different types of cylindrical and facetted nanotubes have been considered. The new classification of the facetted nanotubes is proposed. The stability of CdS nanotubes has been analyzed using formation and strain energies. Obtained results show that facetted tubes are favorable as compared to the most of cylindrical ones. Nevertheless, the cylindrical nanotubes generated from the layers with experimentally proved freestanding existence, also have a chance to be synthesized. Preliminary calculation of facetted nanotubes constructed from the zinc blende phase gives evidence for their possible using in the photocatalytic decomposition of water.
NASA Astrophysics Data System (ADS)
Wu, Yu; Zhang, Hongpeng
2017-12-01
A new microfluidic chip is presented to enhance the sensitivity of a micro inductive sensor, and an approach to coil inductance change calculation is introduced for metal particle detection in lubrication oil. Electromagnetic knowledge is used to establish a mathematical model of an inductive sensor for metal particle detection, and the analytic expression of coil inductance change is obtained by a magnetic vector potential. Experimental verification is carried out. The results show that copper particles 50-52 µm in diameter have been detected; the relative errors between the theoretical and experimental values are 7.68% and 10.02% at particle diameters of 108-110 µm and 50-52 µm, respectively. The approach presented here can provide a theoretical basis for an inductive sensor in metal particle detection in oil and other areas of application.
A chirality-based metrics for free-energy calculations in biomolecular systems.
Pietropaolo, Adriana; Branduardi, Davide; Bonomi, Massimiliano; Parrinello, Michele
2011-09-01
In this work, we exploit the chirality index introduced in (Pietropaolo et al., Proteins 2008, 70, 667) as an effective descriptor of the secondary structure of proteins to explore their complex free-energy landscape. We use the chirality index as an alternative metrics in the path collective variables (PCVs) framework and we show in the prototypical case of the C-terminal domain of immunoglobulin binding protein GB1 that relevant configurations can be efficiently sampled in combination with well-tempered metadynamics. While the projections of the configurations found onto a variety of different descriptors are fully consistent with previously reported calculations, this approach provides a unifying perspective of the folding mechanism which was not possible using metadynamics with the previous formulation of PCVs. Copyright © 2011 Wiley Periodicals, Inc.
TU-F-CAMPUS-T-05: A Cloud-Based Monte Carlo Dose Calculation for Electron Cutout Factors
Mitchell, T; Bush, K
Purpose: For electron cutouts of smaller sizes, it is necessary to verify electron cutout factors due to perturbations in electron scattering. Often, this requires a physical measurement using a small ion chamber, diode, or film. The purpose of this study is to develop a fast Monte Carlo based dose calculation framework that requires only a smart phone photograph of the cutout and specification of the SSD and energy to determine the electron cutout factor, with the ultimate goal of making this cloud-based calculation widely available to the medical physics community. Methods: The algorithm uses a pattern recognition technique to identifymore » the corners of the cutout in the photograph as shown in Figure 1. It then corrects for variations in perspective, scaling, and translation of the photograph introduced by the user’s positioning of the camera. Blob detection is used to identify the portions of the cutout which comprise the aperture and the portions which are cutout material. This information is then used define physical densities of the voxels used in the Monte Carlo dose calculation algorithm as shown in Figure 2, and select a particle source from a pre-computed library of phase-spaces scored above the cutout. The electron cutout factor is obtained by taking a ratio of the maximum dose delivered with the cutout in place to the dose delivered under calibration/reference conditions. Results: The algorithm has been shown to successfully identify all necessary features of the electron cutout to perform the calculation. Subsequent testing will be performed to compare the Monte Carlo results with a physical measurement. Conclusion: A simple, cloud-based method of calculating electron cutout factors could eliminate the need for physical measurements and substantially reduce the time required to properly assure accurate dose delivery.« less
NASA Astrophysics Data System (ADS)
Kuang, Ye; Zhao, Chun Sheng; Zhao, Gang; Tao, Jiang Chuan; Xu, Wanyun; Ma, Nan; Bian, Yu Xuan
2018-05-01
Water condensed on ambient aerosol particles plays significant roles in atmospheric environment, atmospheric chemistry and climate. Before now, no instruments were available for real-time monitoring of ambient aerosol liquid water contents (ALWCs). In this paper, a novel method is proposed to calculate ambient ALWC based on measurements of a three-wavelength humidified nephelometer system, which measures aerosol light scattering coefficients and backscattering coefficients at three wavelengths under dry state and different relative humidity (RH) conditions, providing measurements of light scattering enhancement factor f(RH). The proposed ALWC calculation method includes two steps: the first step is the estimation of the dry state total volume concentration of ambient aerosol particles, Va(dry), with a machine learning method called random forest model based on measurements of the dry
nephelometer. The estimated Va(dry) agrees well with the measured one. The second step is the estimation of the volume growth factor Vg(RH) of ambient aerosol particles due to water uptake, using f(RH) and the Ångström exponent. The ALWC is calculated from the estimated Va(dry) and Vg(RH). To validate the new method, the ambient ALWC calculated from measurements of the humidified nephelometer system during the Gucheng campaign was compared with ambient ALWC calculated from ISORROPIA thermodynamic model using aerosol chemistry data. A good agreement was achieved, with a slope and intercept of 1.14 and -8.6 µm3 cm-3 (r2 = 0.92), respectively. The advantage of this new method is that the ambient ALWC can be obtained solely based on measurements of a three-wavelength humidified nephelometer system, facilitating the real-time monitoring of the ambient ALWC and promoting the study of aerosol liquid water and its role in atmospheric chemistry, secondary aerosol formation and climate change.
NASA Astrophysics Data System (ADS)
Kouznetsov, A.; Cully, C. M.; Knudsen, D. J.
2016-12-01
Changes in D-Region ionization caused by energetic particle precipitation are monitored by the Array for Broadband Observations of VLF/ELF Emissions (ABOVE) - a network of receivers deployed across Western Canada. The observed amplitudes and phases of subionospheric-propagating VLF signals from distant artificial transmitters depend sensitively on the free electron population created by precipitation of energetic charged particles. Those include both primary (electrons, protons and heavier ions) and secondary (cascades of ionized particles and electromagnetic radiation) components. We have designed and implemented a full-scale model to predict the received VLF signals based on first-principle charged particle transport calculations coupled to the Long Wavelength Propagation Capability (LWPC) software. Calculations of ionization rates and free electron densities are based on MCNP-6 (a general-purpose Monte Carlo N- Particle) software taking advantage of its capability of coupled neutron/photon/electron transport and novel library of cross-sections for low-energetic electron and photon interactions with matter. Cosmic ray calculations of background ionization are based on source spectra obtained both from PAMELA direct Cosmic Rays spectra measurements and based on the recently-implemented MCNP 6 galactic cosmic-ray source, scaled using our (Calgary) neutron monitor measurement results. Conversion from calculated fluxes (MCNP F4 tallies) to ionization rates for low-energy electrons are based on the total ionization cross-sections for oxygen and nitrogen molecules from the National Institute of Standard and Technology. We use our model to explore the complexity of the physical processes affecting VLF propagation.
Calculation of Shuttle Base Heating Environments and Comparison with Flight Data
NASA Technical Reports Server (NTRS)
Greenwood, T. F.; Lee, Y. C.; Bender, R. L.; Carter, R. E.
1983-01-01
The techniques, analytical tools, and experimental programs used initially to generate and later to improve and validate the Shuttle base heating design environments are discussed. In general, the measured base heating environments for STS-1 through STS-5 were in good agreement with the preflight predictions. However, some changes were made in the methodology after reviewing the flight data. The flight data is described, preflight predictions are compared with the flight data, and improvements in the prediction methodology based on the data are discussed.
Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd
2016-08-01
Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
Bergstra, A; van Dijk, R B; Hillege, H L; Lie, K I; Mook, G A
1995-05-01
This study was performed because of observed differences between dye dilution cardiac output and the Fick cardiac output, calculated from estimated oxygen consumption according to LaFarge and Miettinen, and to find a better formula for assumed oxygen consumption. In 250 patients who underwent left and right heart catheterization, the oxygen consumption VO2 (ml.min-1) was calculated using Fick's principle. Either pulmonary or systemic flow, as measured by dye dilution, was used in combination with the concordant arteriovenous oxygen concentration difference. In 130 patients, who matched the age of the LaFarge and Miettinen population, the obtained values of oxygen consumption VO2(dd) were compared with the estimated oxygen consumption values VO2(lfm), found using the LaFarge and Miettinen formulae. The VO2(lfm) was significantly lower than VO2(dd); -21.8 +/- 29.3 ml.min-1 (mean +/- SD), P < 0.001, 95% confidence interval (95% CI) -26.9 to -16.7, limits of agreement (LA) -80.4 to 36.9. A new regression formula for the assumed oxygen consumption VO2(ass) was derived in 250 patients by stepwise multiple regression analysis. The VO2(dd) was used as a dependent variable, and body surface area BSA (m2). Sex (0 for female, 1 for male), Age (years), Heart rate (min-1) and the presence of a left to right shunt as independent variables. The best fitting formula is expressed as: VO2(ass) = (157.3 x BSA + 10.0 x Sex - 10.5 x In Age + 4.8) ml.min-1, where ln Age = the natural logarithm of the age. This formula was validated prospectively in 60 patients. A non-significant difference between VO2(ass) and VO2(dd) was found; mean 2.0 +/- 23.4 ml.min-1, P = 0.771, 95% Cl = -4.0 to +8.0, LA -44.7 to +48.7. In conclusion, assumed oxygen consumption values, using our new formula, are in better agreement with the actual values than those found according to LaFarge and Miettinen's formulae.
van Wyk, Marnus J; Bingle, Marianne; Meyer, Frans J C
2005-09-01
International bodies such as International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the Institute for Electrical and Electronic Engineering (IEEE) make provision for human exposure assessment based on SAR calculations (or measurements) and basic restrictions. In the case of base station exposure this is mostly applicable to occupational exposure scenarios in the very near field of these antennas where the conservative reference level criteria could be unnecessarily restrictive. This study presents a variety of critical aspects that need to be considered when calculating SAR in a human body close to a mobile phone base station antenna. A hybrid FEM/MoM technique is proposed as a suitable numerical method to obtain accurate results. The verification of the FEM/MoM implementation has been presented in a previous publication; the focus of this study is an investigation into the detail that must be included in a numerical model of the antenna, to accurately represent the real-world scenario. This is accomplished by comparing numerical results to measurements for a generic GSM base station antenna and appropriate, representative canonical and human phantoms. The results show that it is critical to take the disturbance effect of the human phantom (a large conductive body) on the base station antenna into account when the antenna-phantom spacing is less than 300 mm. For these small spacings, the antenna structure must be modeled in detail. The conclusion is that it is feasible to calculate, using the proposed techniques and methodology, accurate occupational compliance zones around base station antennas based on a SAR profile and basic restriction guidelines. (c) 2005 Wiley-Liss, Inc.
Gutierrez, Eric; Quinn, Daniel B; Chin, Diana D; Lentink, David
2016-12-06
There are three common methods for calculating the lift generated by a flying animal based on the measured airflow in the wake. However, these methods might not be accurate according to computational and robot-based studies of flapping wings. Here we test this hypothesis for the first time for a slowly flying Pacific parrotlet in still air using stereo particle image velocimetry recorded at 1000 Hz. The bird was trained to fly between two perches through a laser sheet wearing laser safety goggles. We found that the wingtip vortices generated during mid-downstroke advected down and broke up quickly, contradicting the frozen turbulence hypothesis typically assumed in animal flight experiments. The quasi-steady lift at mid-downstroke was estimated based on the velocity field by applying the widely used Kutta-Joukowski theorem, vortex ring model, and actuator disk model. The calculated lift was found to be sensitive to the applied model and its different parameters, including vortex span and distance between the bird and laser sheet-rendering these three accepted ways of calculating weight support inconsistent. The three models predict different aerodynamic force values mid-downstroke compared to independent direct measurements with an aerodynamic force platform that we had available for the same species flying over a similar distance. Whereas the lift predictions of the Kutta-Joukowski theorem and the vortex ring model stayed relatively constant despite vortex breakdown, their values were too low. In contrast, the actuator disk model predicted lift reasonably accurately before vortex breakdown, but predicted almost no lift during and after vortex breakdown. Some of these limitations might be better understood, and partially reconciled, if future animal flight studies report lift calculations based on all three quasi-steady lift models instead. This would also enable much needed meta studies of animal flight to derive bioinspired design principles for quasi-steady lift
Larraga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Hernandez-Bojorquez, M.
2010-12-07
This work presents the beam data commissioning and dose calculation validation of the first Monte Carlo (MC) based treatment planning system (TPS) installed in Mexico. According to the manufacturer specifications, the beam data commissioning needed for this model includes: several in-air and water profiles, depth dose curves, head-scatter factors and output factors (6x6, 12x12, 18x18, 24x24, 42x42, 60x60, 80x80 and 100x100 mm{sup 2}). Radiographic and radiochromic films, diode and ionization chambers were used for data acquisition. MC dose calculations in a water phantom were used to validate the MC simulations using comparisons with measured data. Gamma index criteria 2%/2 mmmore » were used to evaluate the accuracy of MC calculations. MC calculated data show an excellent agreement for field sizes from 18x18 to 100x100 mm{sup 2}. Gamma analysis shows that in average, 95% and 100% of the data passes the gamma index criteria for these fields, respectively. For smaller fields (12x12 and 6x6 mm{sup 2}) only 92% of the data meet the criteria. Total scatter factors show a good agreement (<2.6%) between MC calculated and measured data, except for the smaller fields (12x12 and 6x6 mm{sup 2}) that show a error of 4.7%. MC dose calculations are accurate and precise for clinical treatment planning up to a field size of 18x18 mm{sup 2}. Special care must be taken for smaller fields.« less
Highly correlated configuration interaction calculations on water with large orbital bases
Almora-Díaz, César X., E-mail: xalmora@fisica.unam.mx
2014-05-14
A priori selected configuration interaction (SCI) with truncation energy error [C. F. Bunge, J. Chem. Phys. 125, 014107 (2006)] and CI by parts [C. F. Bunge and R. Carbó-Dorca, J. Chem. Phys. 125, 014108 (2006)] are used to approximate the total nonrelativistic electronic ground state energy of water at fixed experimental geometry with CI up to sextuple excitations. Correlation-consistent polarized core-valence basis sets (cc-pCVnZ) up to sextuple zeta and augmented correlation-consistent polarized core-valence basis sets (aug-cc-pCVnZ) up to quintuple zeta quality are employed. Truncation energy errors range between less than 1 μhartree, and 100 μhartree for the largest orbital set. Coupledmore » cluster CCSD and CCSD(T) calculations are also obtained for comparison. Our best upper bound, −76.4343 hartree, obtained by SCI with up to sextuple excitations with a cc-pCV6Z basis recovers more than 98.8% of the correlation energy of the system, and it is only about 3 kcal/mol above the “experimental” value. Despite that the present energy upper bounds are far below all previous ones, comparatively large dispersion errors in the determination of the extrapolated energies to the complete basis set do not allow to determine a reliable estimation of the full CI energy with an accuracy better than 0.6 mhartree (0.4 kcal/mol)« less
Electronegativity calculation of bulk modulus and band gap of ternary ZnO-based alloys
Li, Keyan; Kang, Congying; Xue, Dongfeng, E-mail: dongfeng@ciac.jl.cn
2012-10-15
In this work, the bulk moduli and band gaps of M{sub x}Zn{sub 1−x}O (M = Be, Mg, Ca, Cd) alloys in the whole composition range were quantitatively calculated by using the electronegativity-related models for bulk modulus and band gap, respectively. We found that the change trends of bulk modulus and band gap with an increase of M concentration x are same for Be{sub x}Zn{sub 1−x}O and Cd{sub x}Zn{sub 1−x}O, while the change trends are reverse for Mg{sub x}Zn{sub 1−x}O and Ca{sub x}Zn{sub 1−x}O. It was revealed that the bulk modulus is related to the valence electron density of atoms whereasmore » the band gap is strongly influenced by the detailed chemical bonding behaviors of constituent atoms. The current work provides us a useful guide to compositionally design advanced alloy materials with both good mechanical and optoelectronic properties.« less
NASA Astrophysics Data System (ADS)
Mattsson, Ann E.; Wixom, Ryan R.; Mattsson, Thomas R.
2011-06-01
Density Functional Theory (DFT) has become a crucial tool for understanding the behavior of matter. The ability to perform high-fidelity calculations is most important for cases where experiments are impossible, dangerous, and/or prohibitively expensive to perform. For molecular crystals, successful use of DFT has been hampered by an inability to correctly describe the van der Waals' dominated equilibrium state. We have explored a way of bypassing this problem by using the Armiento-Mattsson 2005 (AM05) exchange-correlation functional. This functional is highly accurate for a wide range of solids, in particular in compression. Another advantage is that AM05 does not include any van der Waals' attraction. We will demonstrate the method on the PETN Hugoniot, and discuss our confidence in the results and ongoing research aimed at improvement. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Calculation of benefit reserves based on true m-thly benefit premiums
NASA Astrophysics Data System (ADS)
Riaman; Susanti, Dwi; Supriatna, Agus; Nurani Ruchjana, Budi
2017-10-01
Life insurance is a form of insurance that provides risk mitigation in life or death of a human. One of its advantages is measured life insurance. Insurance companies ought to give a sum of money as reserves to the customers. The benefit reserves are an alternative calculation which involves net and cost premiums. An insured may pay a series of benefit premiums to an insurer equivalent, at the date of policy issue, to the sum of to be paid on the death of the insured, or on survival of the insured to the maturity date. A balancing item is required and this item is a liability for one of the parties and the other is an asset. The balancing item, in loan, is the outstanding principle, an asset for the lender and the liability for the borrower. In this paper we examined the benefit reserves formulas corresponding to the formulas for true m-thly benefit premiums by the prospective method. This method specifies that, the reserves at the end of the first year are zero. Several principles can be used for the determined of benefit premiums, an equivalence relation is established in our discussion.
NASA Astrophysics Data System (ADS)
Hidayat, S.; Riveli, N.
2018-05-01
We have calculated 2D photonic crystal band gap using plane-wave expansion method. The studied model of structures is hexagonal lattice and square lattice of rod cylinder in air. We have simulated the dispersion relation of it structure using hybrid polymer as rod material. The parameter structures are nrod = 1.5, nhole = 1, and rrod = 0.25a, where a is lattice constant. We found the distributed feedback occurs at the edge of upper band or frequency at 0.66 (a/λ). In our experimental work, we have successfully fabricated the 2D photonic crystal from hybrid polymer incorporated with organic dye laser. The lasing characteristics ware investigated using strip-line excitation light of SHG Nd-YAG laser (λ=532 nm). The lasing wavelengths for hexagonal structure are observed at 606 nm and 621 nm for photonic crystal period of 400 nm and 410 nm, respectively. λ=532 nm). Whereas the square structure, the lasing wavelengths are observed at (588 nm ± 2) and (606 nm ± 2 nm) for grating period of 391 nm and 405 nm.
Electric bicycle cost calculation models and analysis based on the social perspective in China.
Yan, Xuetong; He, Jie; King, Mark; Hang, Wen; Zhou, Bojian
2018-05-10
Electric bicycles (EBs) are increasingly popular around the world. In April 2014, EB ownership in China reached 181 million. While some aspects of the impact of EBs have been studied, most of the literature analyzing the cost of EBs has been conducted from the buyer's point of view and the perspective of social cost has not been covered, which is therefore the focus of this paper. From the consumer's point of view, only the costs paid from purchase until retirement are included in the cost of EBs, i.e., the EB acquisition cost, battery replacement cost, charging cost, and repair and maintenance cost are included. Considered from the perspective of the social cost (including impact on the environment), costs that are not paid directly by consumers should also be included in the cost of EBs, i.e., the lead-acid battery scrap processing cost, the cost of pollution caused by wastewater, and the traffic-related costs. Data are obtained from secondary sources and surveys, and calculations demonstrate that in the life cycle of an EB, the consumer cost is 6386.2 CNY, the social cost is 10,771.2 CNY, and the ratio of consumer to social cost is 1:1.69. By comparison, the ratio for motor vehicles is 1:1.06, so that the share of the life cycle cost of EBs that is not borne by the consumer is much higher than that for motor vehicles, which needs to be addressed.
CT-based MCNPX dose calculations for gynecology brachytherapy employing a Henschke applicator
NASA Astrophysics Data System (ADS)
Yu, Pei-Chieh; Nien, Hsin-Hua; Tung, Chuan-Jong; Lee, Hsing-Yi; Lee, Chung-Chi; Wu, Ching-Jung; Chao, Tsi-Chian
2017-11-01
The purpose of this study is to investigate the dose perturbation caused by the metal ovoid structures of a Henschke applicator using Monte Carlo simulation in a realistic phantom. The Henschke applicator has been widely used for gynecologic patients treated by brachytherapy in Taiwan. However, the commercial brachytherapy planning system (BPS) did not properly evaluate the dose perturbation caused by its metal ovoid structures. In this study, Monte Carlo N-Particle Transport Code eXtended (MCNPX) was used to evaluate the brachytherapy dose distribution of a Henschke applicator embedded in a Plastic water phantom and a heterogeneous patient computed tomography (CT) phantom. The dose comparison between the MC simulations and film measurements for a Plastic water phantom with Henschke applicator were in good agreement. However, MC dose with the Henschke applicator showed significant deviation (-80.6%±7.5%) from those without Henschke applicator. Furthermore, the dose discrepancy in the heterogeneous patient CT phantom and Plastic water phantom CT geometries with Henschke applicator showed 0 to -26.7% dose discrepancy (-8.9%±13.8%). This study demonstrates that the metal ovoid structures of Henschke applicator cannot be disregard in brachytherapy dose calculation.
First-principles based calculation of the macroscopic α/β interface in titanium
Li, Dongdong; Key Lab of Nonferrous Materials of Ministry of Education, Central South University, Changsha 410083; Zhu, Lvqi
2016-06-14
The macroscopic α/β interface in titanium and titanium alloys consists of a ledge interface (112){sub β}/(01-10){sub α} and a side interface (11-1){sub β}/(2-1-10){sub α} in a zig-zag arrangement. Here, we report a first-principles study for predicting the atomic structure and the formation energy of the α/β-Ti interface. Both component interfaces were calculated using supercell models within a restrictive relaxation approach, with various staking sequences and high-symmetry parallel translations being considered. The ledge interface energy was predicted as 0.098 J/m{sup 2} and the side interface energy as 0.811 J/m{sup 2}. By projecting the zig-zag interface area onto the macroscopic broad face, the macroscopicmore » α/β interface energy was estimated to be as low as ∼0.12 J/m{sup 2}, which, however, is almost double the ad hoc value used in previous phase-field simulations.« less
2012-08-01
HMMWV for the current given inputs based on the current vehicle speed, acceleration pedal , and brake pedal . From this driver requested power at the...HMMWV engine, b) base HMMWV gear ratios of the 4 speed transmission, c) acceleration and brake pedal pressed for the hybrid vehicle and d) Torque...coefficient. µb: Threshold for detecting brake pedal pressed ? 2 tanE4FGH 2 tanE4 I [K ρ: Air mass density, ρ = ma/Va where ma is mass of air
Semenov, Valentin A; Samultsev, Dmitry O; Krivdin, Leonid B
2018-02-09
15 N NMR chemical shifts in the representative series of Schiff bases together with their protonated forms have been calculated at the density functional theory level in comparison with available experiment. A number of functionals and basis sets have been tested in terms of a better agreement with experiment. Complimentary to gas phase results, 2 solvation models, namely, a classical Tomasi's polarizable continuum model (PCM) and that in combination with an explicit inclusion of one molecule of solvent into calculation space to form supermolecule 1:1 (SM + PCM), were examined. Best results are achieved with PCM and SM + PCM models resulting in mean absolute errors of calculated 15 N NMR chemical shifts in the whole series of neutral and protonated Schiff bases of accordingly 5.2 and 5.8 ppm as compared with 15.2 ppm in gas phase for the range of about 200 ppm. Noticeable protonation effects (exceeding 100 ppm) in protonated Schiff bases are rationalized in terms of a general natural bond orbital approach. Copyright © 2018 John Wiley & Sons, Ltd.
Study of fatigue crack propagation in Ti-1Al-1Mn based on the calculation of cold work evolution
NASA Astrophysics Data System (ADS)
Plekhov, O. A.; Kostina, A. A.
2017-05-01
The work proposes a numerical method for lifetime assessment for metallic materials based on consideration of energy balance at crack tip. This method is based on the evaluation of the stored energy value per loading cycle. To calculate the stored and dissipated parts of deformation energy an elasto-plastic phenomenological model of energy balance in metals under the deformation and failure processes was proposed. The key point of the model is strain-type internal variable describing the stored energy process. This parameter is introduced based of the statistical description of defect evolution in metals as a second-order tensor and has a meaning of an additional strain due to the initiation and growth of the defects. The fatigue crack rate was calculated in a framework of a stationary crack approach (several loading cycles for every crack length was considered to estimate the energy balance at crack tip). The application of the proposed algorithm is illustrated by the calculation of the lifetime of the Ti-1Al-1Mn compact tension specimen under cyclic loading.
SU-E-T-37: A GPU-Based Pencil Beam Algorithm for Dose Calculations in Proton Radiation Therapy
Kalantzis, G; Leventouri, T; Tachibana, H
Purpose: Recent developments in radiation therapy have been focused on applications of charged particles, especially protons. Over the years several dose calculation methods have been proposed in proton therapy. A common characteristic of all these methods is their extensive computational burden. In the current study we present for the first time, to our best knowledge, a GPU-based PBA for proton dose calculations in Matlab. Methods: In the current study we employed an analytical expression for the protons depth dose distribution. The central-axis term is taken from the broad-beam central-axis depth dose in water modified by an inverse square correction whilemore » the distribution of the off-axis term was considered Gaussian. The serial code was implemented in MATLAB and was launched on a desktop with a quad core Intel Xeon X5550 at 2.67GHz with 8 GB of RAM. For the parallelization on the GPU, the parallel computing toolbox was employed and the code was launched on a GTX 770 with Kepler architecture. The performance comparison was established on the speedup factors. Results: The performance of the GPU code was evaluated for three different energies: low (50 MeV), medium (100 MeV) and high (150 MeV). Four square fields were selected for each energy, and the dose calculations were performed with both the serial and parallel codes for a homogeneous water phantom with size 300×300×300 mm3. The resolution of the PBs was set to 1.0 mm. The maximum speedup of ∼127 was achieved for the highest energy and the largest field size. Conclusion: A GPU-based PB algorithm for proton dose calculations in Matlab was presented. A maximum speedup of ∼127 was achieved. Future directions of the current work include extension of our method for dose calculation in heterogeneous phantoms.« less
[Calculation of optic system of superfine medical endoscopes based on gradient elements].
Díakonov, S Iu; Korolev, A V
1994-01-01
The application of gradient optic elements to rigid endoscopes decreases their diameter to 1.5-2.0 mm. The given mathematical dependences determine aperture and field characteristics, focus and focal segments, resolution of the optic systems based on gradient optics. Parameters of the gradient optic systems for superfine medical endoscopes are characterized and their practical application is shown.
Calculating the Entropy of Solid and Liquid Metals, Based on Acoustic Data
NASA Astrophysics Data System (ADS)
Tekuchev, V. V.; Kalinkin, D. P.; Ivanova, I. V.
2018-05-01
The entropies of iron, cobalt, rhodium, and platinum are studied for the first time, based on acoustic data and using the Debye theory and rigid-sphere model, from 298 K up to the boiling point. A formula for the melting entropy of metals is validated. Good agreement between the research results and the literature data is obtained.
NASA Technical Reports Server (NTRS)
Gamayunov, K. V.; Khazanov, G. V.
2007-01-01
We consider the effect of oblique EMIC waves on relativistic electron scattering in the outer radiation belt using simultaneous observations of plasma and wave parameters from CRRES. The main findings can be s ummarized as follows: 1. In 1comparison with field-aligned waves, int ermediate and highly oblique distributions decrease the range of pitc h-angles subject to diffusion, and reduce the local scattering rate b y an order of magnitude at pitch-angles where the principle absolute value of n = 1 resonances operate. Oblique waves allow the absolute va lue of n > 1 resonances to operate, extending the range of local pitc h-angle diffusion down to the loss cone, and increasing the diffusion at lower pitch angles by orders of magnitude; 2. The local diffusion coefficients derived from CRRES data are qualitatively similar to the local results obtained for prescribed plasma/wave parameters. Conseq uently, it is likely that the bounce-averaged diffusion coefficients, if estimated from concurrent data, will exhibit the dependencies similar to those we found for model calculations; 3. In comparison with f ield-aligned waves, intermediate and highly oblique waves decrease th e bounce-averaged scattering rate near the edge of the equatorial lo ss cone by orders of magnitude if the electron energy does not excee d a threshold (approximately equal to 2 - 5 MeV) depending on specified plasma and/or wave parameters; 4. For greater electron energies_ ob lique waves operating the absolute value of n > 1 resonances are more effective and provide the same bounce_averaged diffusion rate near the loss cone as fiel_aligned waves do.
Paudel, Moti R; Kim, Anthony; Sarfehnia, Arman; Ahmad, Sayed B; Beachey, David J; Sahgal, Arjun; Keller, Brian M
2016-11-08
A new GPU-based Monte Carlo dose calculation algorithm (GPUMCD), devel-oped by the vendor Elekta for the Monaco treatment planning system (TPS), is capable of modeling dose for both a standard linear accelerator and an Elekta MRI linear accelerator. We have experimentally evaluated this algorithm for a standard Elekta Agility linear accelerator. A beam model was developed in the Monaco TPS (research version 5.09.06) using the commissioned beam data for a 6 MV Agility linac. A heterogeneous phantom representing several scenarios - tumor-in-lung, lung, and bone-in-tissue - was designed and built. Dose calculations in Monaco were done using both the current clinical Monte Carlo algorithm, XVMC, and the new GPUMCD algorithm. Dose calculations in a Pinnacle TPS were also produced using the collapsed cone convolution (CCC) algorithm with heterogeneity correc-tion. Calculations were compared with the measured doses using an ionization chamber (A1SL) and Gafchromic EBT3 films for 2 × 2 cm2, 5 × 5 cm2, and 10 × 10 cm2 field sizes. The percentage depth doses (PDDs) calculated by XVMC and GPUMCD in a homogeneous solid water phantom were within 2%/2 mm of film measurements and within 1% of ion chamber measurements. For the tumor-in-lung phantom, the calculated doses were within 2.5%/2.5 mm of film measurements for GPUMCD. For the lung phantom, doses calculated by all of the algorithms were within 3%/3 mm of film measurements, except for the 2 × 2 cm2 field size where the CCC algorithm underestimated the depth dose by ~ 5% in a larger extent of the lung region. For the bone phantom, all of the algorithms were equivalent and calculated dose to within 2%/2 mm of film measurements, except at the interfaces. Both GPUMCD and XVMC showed interface effects, which were more pronounced for GPUMCD and were comparable to film measurements, whereas the CCC algorithm showed these effects poorly. © 2016 The Authors.
NASA Astrophysics Data System (ADS)
Kamata, S.
2017-12-01
Solid-state thermal convection plays a major role in the thermal evolution of solid planetary bodies. Solving the equation system for thermal evolution considering convection requires 2-D or 3-D modeling, resulting in large calculation costs. A 1-D calculation scheme based on mixing length theory (MLT) requires a much lower calculation cost and is suitable for parameter studies. A major concern for the MLT scheme is its accuracy due to a lack of detailed comparisons with higher dimensional schemes. In this study, I quantify its accuracy via comparisons of thermal profiles obtained by 1-D MLT and 3-D numerical schemes. To improve the accuracy, I propose a new definition of the mixing length (l), which is a parameter controlling the efficiency of heat transportation due to convection. Adopting this new definition of l, I investigate the thermal evolution of Dione and Enceladus under a wide variety of parameter conditions. Calculation results indicate that each satellite requires several tens of GW of heat to possess a 30-km-thick global subsurface ocean. Dynamical tides may be able to account for such an amount of heat, though their ices need to be highly viscous.
Yin, Delu; Yin, Tao; Yang, Huiming; Xin, Qianqian; Wang, Lihong; Li, Ninyan; Ding, Xiaoyan; Chen, Bowen
2016-12-07
A shortage of community health professionals has been a crucial issue hindering the development of CHS. Various methods have been established to calculate health workforce requirements. This study aimed to use an economic-research-based approach to calculate the number of community health professionals required to provide community health services in the Xicheng District of Beijing and then assess current staffing levels against this ideal. Using questionnaires, we collected relevant data from 14 community health centers in the Xicheng District, including resident population, number of different health services provided, and service volumes. Through 36 interviews with family doctors, nurses, and public health workers, and six focus groups, we were able to calculate the person-time (equivalent value) required for each community health service. Field observations were conducted to verify the duration. In the 14 community health centers in Xicheng District, 1752 health workers were found in our four categories, serving a population of 1.278 million. Total demand for the community health service outstripped supply for doctors, nurses, and public health workers, but not other professionals. The method suggested that to properly serve the study population an additional 64 family doctors, 40 nurses, and 753 public health workers would be required. Our calculations indicate that significant numbers of new health professionals are required to deliver community health services. We established time standards in minutes (equivalent value) for each community health service activity, which could be applied elsewhere in China by government planners and civil society advocates.
Shin, Min-Ho; Kim, Hyo-Jun; Kim, Young-Joo
2017-02-20
We proposed an optical simulation model for the quantum dot (QD) nanophosphor based on the mean free path concept to understand precisely the optical performance of optoelectronic devices. A measurement methodology was also developed to get the desired optical characteristics such as the mean free path and absorption spectra for QD nanophosphors which are to be incorporated into the simulation. The simulation results for QD-based white LED and OLED displays show good agreement with the experimental values from the fabricated devices in terms of spectral power distribution, chromaticity coordinate, CCT, and CRI. The proposed simulation model and measurement methodology can be applied easily to the design of lots of optoelectronics devices using QD nanophosphors to obtain high efficiency and the desired color characteristics.
Monte Carlo dose calculation using a cell processor based PlayStation 3 system
NASA Astrophysics Data System (ADS)
Chow, James C. L.; Lam, Phil; Jaffray, David A.
2012-02-01
This study investigates the performance of the EGSnrc computer code coupled with a Cell-based hardware in Monte Carlo simulation of radiation dose in radiotherapy. Performance evaluations of two processor-intensive functions namely, HOWNEAR and RANMAR_GET in the EGSnrc code were carried out basing on the 20-80 rule (Pareto principle). The execution speeds of the two functions were measured by the profiler gprof specifying the number of executions and total time spent on the functions. A testing architecture designed for Cell processor was implemented in the evaluation using a PlayStation3 (PS3) system. The evaluation results show that the algorithms examined are readily parallelizable on the Cell platform, provided that an architectural change of the EGSnrc was made. However, as the EGSnrc performance was limited by the PowerPC Processing Element in the PS3, PC coupled with graphics processing units or GPCPU may provide a more viable avenue for acceleration.
Ishizawa, Yoshiki; Dobashi, Suguru; Kadoya, Noriyuki; Ito, Kengo; Chiba, Takahito; Takayama, Yoshiki; Sato, Kiyokazu; Takeda, Ken
2018-05-17
An accurate source model of a medical linear accelerator is essential for Monte Carlo (MC) dose calculations. This study aims to propose an analytical photon source model based on particle transport in parameterized accelerator structures, focusing on a more realistic determination of linac photon spectra compared to existing approaches. We designed the primary and secondary photon sources based on the photons attenuated and scattered by a parameterized flattening filter. The primary photons were derived by attenuating bremsstrahlung photons based on the path length in the filter. Conversely, the secondary photons were derived from the decrement of the primary photons in the attenuation process. This design facilitates these sources to share the free parameters of the filter shape and be related to each other through the photon interaction in the filter. We introduced two other parameters of the primary photon source to describe the particle fluence in penumbral regions. All the parameters are optimized based on calculated dose curves in water using the pencil-beam-based algorithm. To verify the modeling accuracy, we compared the proposed model with the phase space data (PSD) of the Varian TrueBeam 6 and 15 MV accelerators in terms of the beam characteristics and the dose distributions. The EGS5 Monte Carlo code was used to calculate the dose distributions associated with the optimized model and reference PSD in a homogeneous water phantom and a heterogeneous lung phantom. We calculated the percentage of points passing 1D and 2D gamma analysis with 1%/1 mm criteria for the dose curves and lateral dose distributions, respectively. The optimized model accurately reproduced the spectral curves of the reference PSD both on- and off-axis. The depth dose and lateral dose profiles of the optimized model also showed good agreement with those of the reference PSD. The passing rates of the 1D gamma analysis with 1%/1 mm criteria between the model and PSD were 100% for 4
Analytical Calculation of Sensing Parameters on Carbon Nanotube Based Gas Sensors
Akbari, Elnaz; Buntat, Zolkafle; Ahmad, Mohd Hafizi; Enzevaee, Aria; Yousof, Rubiyah; Iqbal, Syed Muhammad Zafar; Ahmadi, Mohammad Taghi.; Sidik, Muhammad Abu Bakar; Karimi, Hediyeh
2014-01-01
Carbon Nanotubes (CNTs) are generally nano-scale tubes comprising a network of carbon atoms in a cylindrical setting that compared with silicon counterparts present outstanding characteristics such as high mechanical strength, high sensing capability and large surface-to-volume ratio. These characteristics, in addition to the fact that CNTs experience changes in their electrical conductance when exposed to different gases, make them appropriate candidates for use in sensing/measuring applications such as gas detection devices. In this research, a model for a Field Effect Transistor (FET)-based structure has been developed as a platform for a gas detection sensor in which the CNT conductance change resulting from the chemical reaction between NH3 and CNT has been employed to model the sensing mechanism with proposed sensing parameters. The research implements the same FET-based structure as in the work of Peng et al. on nanotube-based NH3 gas detection. With respect to this conductance change, the I–V characteristic of the CNT is investigated. Finally, a comparative study shows satisfactory agreement between the proposed model and the experimental data from the mentioned research. PMID:24658617
Ga(+) Basicity and Affinity Scales Based on High-Level Ab Initio Calculations.
Brea, Oriana; Mó, Otilia; Yáñez, Manuel
2015-10-26
The structure, relative stability and bonding of complexes formed by the interaction between Ga(+) and a large set of compounds, including hydrocarbons, aromatic systems, and oxygen-, nitrogen-, fluorine and sulfur-containing Lewis bases have been investigated through the use of the high-level composite ab initio Gaussian-4 theory. This allowed us to establish rather accurate Ga(+) cation affinity (GaCA) and Ga(+) cation basicity (GaCB) scales. The bonding analysis of the complexes under scrutiny shows that, even though one of the main ingredients of the Ga(+) -base interaction is electrostatic, it exhibits a non-negligible covalent character triggered by the presence of the low-lying empty 4p orbital of Ga(+) , which favors a charge donation from occupied orbitals of the base to the metal ion. This partial covalent character, also observed in AlCA scales, is behind the dissimilarities observed when GaCA are compared with Li(+) cation affinities, where these covalent contributions are practically nonexistent. Quite unexpectedly, there are some dissimilarities between several Ga(+) -complexes and the corresponding Al(+) -analogues, mainly affecting the relative stability of π-complexes involving aromatic compounds. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Skrzyński, Witold
2014-11-01
The aim of this work was to create a model of a wide-bore Siemens Somatom Sensation Open CT scanner for use with GMCTdospp, which is an EGSnrc-based software tool dedicated for Monte Carlo calculations of dose in CT examinations. The method was based on matching spectrum and filtration to half value layer and dose profile, and thus was similar to the method of Turner et al. (Med. Phys. 36, pp. 2154-2164). Input data on unfiltered beam spectra were taken from two sources: the TASMIP model and IPEM Report 78. Two sources of HVL data were also used, namely measurements and documentation. Dose profile along the fan-beam was measured with Gafchromic RTQA-1010 (QA+) film. Two-component model of filtration was assumed: bow-tie filter made of aluminum with 0.5 mm thickness on central axis, and flat filter made of one of four materials: aluminum, graphite, lead, or titanium. Good agreement between calculations and measurements was obtained for models based on the measured values of HVL. Doses calculated with GMCTdospp differed from the doses measured with pencil ion chamber placed in PMMA phantom by less than 5%, and root mean square difference for four tube potentials and three positions in the phantom did not exceed 2.5%. The differences for models based on HVL values from documentation exceeded 10%. Models based on TASMIP spectra and IPEM78 spectra performed equally well. Copyright © 2014 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Fitzgerald, Alex; Roy, James W.; Smith, James E.
2015-09-01
Elevated levels of nutrients, especially phosphorus, in urban streams can lead to eutrophication and general degradation of stream water quality. Contributions of phosphorus from groundwater have typically been assumed minor, though elevated concentrations have been associated with riparian areas and urban settings. The objective of this study was to investigate the importance of groundwater as a pathway for phosphorus and nitrogen input to a gaining urban stream. The stream at the 28-m study reach was 3-5 m wide and straight, flowing generally eastward, with a relatively smooth bottom of predominantly sand, with some areas of finer sediments and a few boulders. Temperature-based methods were used to estimate the groundwater flux distribution. Detailed concentration distributions in discharging groundwater were mapped using in-stream piezometers and diffusion-based peepers, and showed elevated levels of soluble reactive phosphorus (SRP) and ammonium compared to the stream (while nitrate levels were lower), especially along the south bank, where groundwater fluxes were lower and geochemically reducing conditions dominated. Field evidence suggests the ammonium may originate from nearby landfills, but that local sediments likely contribute the SRP. Ammonium and SRP mass discharges with groundwater were then estimated as the product of the respective concentration distributions and the groundwater flux distribution. These were determined as approximately 9 and 200 g d-1 for SRP and ammonium, respectively, which compares to stream mass discharges over the observed range of base flows of 20-1100 and 270-7600 g d-1, respectively. This suggests that groundwater from this small reach, and any similar areas along Dyment's Creek, has the potential to contribute substantially to the stream nutrient concentrations.
NASA Astrophysics Data System (ADS)
Chan, GuoXuan; Wang, Xin
2018-04-01
We consider two typical approximations that are used in the microscopic calculations of double-quantum dot spin qubits, namely, the Heitler-London (HL) and the Hund-Mulliken (HM) approximations, which use linear combinations of Fock-Darwin states to approximate the two-electron states under the double-well confinement potential. We compared these results to a case in which the solution to a one-dimensional Schr¨odinger equation was exactly known and found that typical microscopic calculations based on Fock-Darwin states substantially underestimate the value of the exchange interaction, which is the key parameter that controls the quantum dot spin qubits. This underestimation originates from the lack of tunneling of Fock-Darwin states, which is accurate only in the case with a single potential well. Our results suggest that the accuracies of the current two-dimensional molecular- orbit-theoretical calculations based on Fock-Darwin states should be revisited since underestimation could only deteriorate in dimensions that are higher than one.
NASA Astrophysics Data System (ADS)
Yoshizawa, Terutaka; Zou, Wenli; Cremer, Dieter
2017-04-01
A new method for calculating nuclear magnetic resonance shielding constants of relativistic atoms based on the two-component (2c), spin-orbit coupling including Dirac-exact NESC (Normalized Elimination of the Small Component) approach is developed where each term of the diamagnetic and paramagnetic contribution to the isotropic shielding constant σi s o is expressed in terms of analytical energy derivatives with regard to the magnetic field B and the nuclear magnetic moment 𝝁 . The picture change caused by renormalization of the wave function is correctly described. 2c-NESC/HF (Hartree-Fock) results for the σiso values of 13 atoms with a closed shell ground state reveal a deviation from 4c-DHF (Dirac-HF) values by 0.01%-0.76%. Since the 2-electron part is effectively calculated using a modified screened nuclear shielding approach, the calculation is efficient and based on a series of matrix manipulations scaling with (2M)3 (M: number of basis functions).
Porphyrin-based polymeric nanostructures for light harvesting applications: Ab initio calculations
NASA Astrophysics Data System (ADS)
Orellana, Walter
The capture and conversion of solar energy into electricity is one of the most important challenges to the sustainable development of mankind. Among the large variety of materials available for this purpose, porphyrins concentrate great attention due to their well-known absorption properties in the visible range. However, extended materials like polymers with similar absorption properties are highly desirable. In this work, we investigate the stability, electronic and optical properties of polymeric nanostructures based on free-base porphyrins and phthalocyanines (H2P, H2Pc), within the framework of the time-dependent density functional perturbation theory. The aim of this work is the stability, electronic, and optical characterization of polymeric sheets and nanotubes obtained from H2P and H2Pc monomers. Our results show that H2P and H2Pc sheets exhibit absorption bands between 350 and 400 nm, slightly different that the isolated molecules. However, the H2P and H2Pc nanotubes exhibit a wide absorption in the visible and near-UV range, with larger peaks at 600 and 700 nm, respectively, suggesting good characteristic for light harvesting. The stability and absorption properties of similar structures obtained from ZnP and ZnPc molecules is also discussed. Departamento de Ciencias Físicas, República 220, 037-0134 Santiago, Chile.
Computer-based training for improving mental calculation in third- and fifth-graders.
Caviola, Sara; Gerotto, Giulia; Mammarella, Irene C
2016-11-01
The literature on intervention programs to improve arithmetical abilities is fragmentary and few studies have examined training on the symbolic representation of numbers (i.e. Arabic digits). In the present research, three groups of 3rd- and 5th-grade schoolchildren were given training on mental additions: 76 were assigned to a computer-based strategic training (ST) group, 73 to a process-based training (PBT) group, and 71 to a passive control (PC) group. Before and after the training, the children were given a criterion task involving complex addition problems, a nearest transfer task on complex subtraction problems, two near transfer tasks on math fluency, and a far transfer task on numerical reasoning. Our results showed developmental differences: 3rd-graders benefited more from the ST, with transfer effects on subtraction problems and math fluency, while 5th-graders benefited more from the PBT, improving their response times in the criterion task. Developmental, clinical and educational implications of these findings are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Lallier-Daniels, Dominic
La conception de ventilateurs est souvent basée sur une méthodologie « essais/erreurs » d'amélioration de géométries existantes ainsi que sur l'expérience de design et les résultats expérimentaux cumulés par les entreprises. Cependant, cette méthodologie peut se révéler coûteuse en cas d'échec; même en cas de succès, des améliorations significatives en performance sont souvent difficiles, voire impossibles à obtenir. Le projet présent propose le développement et la validation d'une méthodologie de conception basée sur l'emploi du calcul méridien pour la conception préliminaire de turbomachines hélico-centrifuges (ou flux-mixte) et l'utilisation du calcul numérique d'écoulement fluides (CFD) pour la conception détaillée. La méthode de calcul méridien à la base du processus de conception proposé est d'abord présentée. Dans un premier temps, le cadre théorique est développé. Le calcul méridien demeurant fondamentalement un processus itératif, le processus de calcul est également présenté, incluant les méthodes numériques de calcul employée pour la résolution des équations fondamentales. Une validation du code méridien écrit dans le cadre du projet de maîtrise face à un algorithme de calcul méridien développé par l'auteur de la méthode ainsi qu'à des résultats de simulation numérique sur un code commercial est également réalisée. La méthodologie de conception de turbomachines développée dans le cadre de l'étude est ensuite présentée sous la forme d'une étude de cas pour un ventilateur hélico-centrifuge basé sur des spécifications fournies par le partenaire industriel Venmar. La méthodologie se divise en trois étapes: le calcul méridien est employé pour le pré-dimensionnement, suivi de simulations 2D de grilles d'aubes pour la conception détaillée des pales et finalement d'une analyse numérique 3D pour la validation et l'optimisation fine de la géométrie. Les résultats de calcul m
Code of Federal Regulations, 2013 CFR
2013-07-01
... fuel economy values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy values from the tests performed using alcohol or natural gas test fuel. (b) If only one equivalent petroleum-based fuel economy value exists for an electric...
Code of Federal Regulations, 2011 CFR
2011-07-01
... exhaust emission values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy and carbon-related exhaust emission values from the tests performed using alcohol or natural gas test fuel. (b) If only one equivalent petroleum-based fuel economy...
Code of Federal Regulations, 2013 CFR
2013-07-01
... values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy, CO2 emissions, and carbon-related exhaust emission values from the tests performed using alcohol or natural gas test fuel. (b) If only one equivalent petroleum-based fuel economy value...
Code of Federal Regulations, 2014 CFR
2014-07-01
... values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy, CO2 emissions, and carbon-related exhaust emission values from the tests performed using alcohol or natural gas test fuel. (b) If only one equivalent petroleum-based fuel economy value...
Code of Federal Regulations, 2011 CFR
2011-07-01
..., highway, and combined fuel economy values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy values from the tests performed using alcohol or natural gas test fuel. (b) If only one equivalent petroleum-based fuel economy value exists for an...
Code of Federal Regulations, 2012 CFR
2012-07-01
... values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy, CO2 emissions, and carbon-related exhaust emission values from the tests performed using alcohol or natural gas test fuel. (b) If only one equivalent petroleum-based fuel economy value...
Code of Federal Regulations, 2012 CFR
2012-07-01
... fuel economy values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy values from the tests performed using alcohol or natural gas test fuel. (b) If only one equivalent petroleum-based fuel economy value exists for an electric...
Quantifying the Economic Value and Quality of Life Impact of Earlier Influenza Vaccination
Lee, Bruce Y.; Bartsch, Sarah M.; Brown, Shawn T.; Cooley, Philip; Wheaton, William D.; Zimmerman, Richard K.
2015-01-01
Background Influenza vaccination is administered throughout the influenza disease season, even as late as March. Given such timing, what is the value of vaccinating the population earlier than currently being practiced? Methods We used real data on when individuals were vaccinated in Allegheny County, Pennsylvania, and the following 2 models to determine the value of vaccinating individuals earlier (by the end of September, October, and November): Framework for Reconstructing Epidemiological Dynamics (FRED), an agent-based model (ABM), and FluEcon, our influenza economic model that translates cases from the ABM to outcomes and costs [health care and lost productivity costs and quality-adjusted life-years (QALYs)]. We varied the reproductive number (R0) from 1.2 to 1.6. Results Applying the current timing of vaccinations averted 223,761 influenza cases, $16.3 million in direct health care costs, $50.0 million in productivity losses, and 804 in QALYs, compared with no vaccination (February peak, R0 1.2). When the population does not have preexisting immunity and the influenza season peaks in February (R0 1.2–1.6), moving individuals who currently received the vaccine after September to the end of September could avert an additional 9634–17,794 influenza cases, $0.6–$1.4 million in direct costs, $2.1–$4.0 million in productivity losses, and 35–64 QALYs. Moving the vaccination of just children to September (R0 1.2–1.6) averted 11,366–1660 influenza cases, $0.6–$0.03 million in direct costs, $2.3–$0.2 million in productivity losses, and 42–8 QALYs. Moving the season peak to December increased these benefits, whereas increasing preexisting immunity reduced these benefits. Conclusion Even though many people are vaccinated well after September/October, they likely are still vaccinated early enough to provide substantial cost-savings. PMID:25590676
Modeling Calculation and Synthesis of Alumina Whiskers Based on the Vapor Deposition Process.
Gong, Wei; Li, Xiangcheng; Zhu, Boquan
2017-10-17
This study simulated the bulk structure and surface energy of Al₂O₃ based on the density of states (DOS) and studied the synthesis and microstructure of one-dimensional Al₂O₃ whiskers. The simulation results indicate that the (001) surface has a higher surface energy than the others. The growth mechanism of Al₂O₃ whiskers follows vapor-solid (VS) growth. For the (001) surface with the higher surface energy, the driving force of crystal growth would be more intense in the (001) plane, and the alumina crystal would tend to grow preferentially along the direction of the (001) plane from the tip of the crystal. The Al₂O₃ grows to the shape of whisker with [001] orientation, which is proved both through modeling and experimentation.
NASA Technical Reports Server (NTRS)
Reddy, C. J.
1998-01-01
An implementation of the Model Based Parameter Estimation (MBPE) technique is presented for obtaining the frequency response of the Radar Cross Section (RCS) of arbitrarily shaped, three-dimensional perfect electric conductor (PEC) bodies. An Electric Field Integral Equation (EFTE) is solved using the Method of Moments (MoM) to compute the RCS. The electric current is expanded in a rational function and the coefficients of the rational function are obtained using the frequency derivatives of the EFIE. Using the rational function, the electric current on the PEC body is obtained over a frequency band. Using the electric current at different frequencies, RCS of the PEC body is obtained over a wide frequency band. Numerical results for a square plate, a cube, and a sphere are presented over a bandwidth. Good agreement between MBPE and the exact solution over the bandwidth is observed.
[Calculating the optimum size of a hemodialysis unit based on infrastructure potential].
Avila-Palomares, Paula; López-Cervantes, Malaquías; Durán-Arenas, Luis
2010-01-01
To estimate the optimum size for hemodialysis units to maximize production given capital constraints. A national study in Mexico was conducted in 2009. Three possible methods for estimating a units optimum size were analyzed: hemodialysis services production under monopolistic market, under a perfect competitive market and production maximization given capital constraints. The third method was considered best based on the assumptions made in this paper; an optimal size unit should have 16 dialyzers (15 active and one back up dialyzer) and a purifier system able to supply all. It also requires one nephrologist, five nurses per shift, considering four shifts per day. Empirical evidence shows serious inefficiencies in the operation of units throughout the country. Most units fail to maximize production due to not fully utilizing equipment and personnel, particularly their water purifier potential which happens to be the most expensive asset for these units.
NASA Technical Reports Server (NTRS)
Bougher, S. W.; Gerard, J. C.; Stewart, A. I. F.; Fesen, C. G.
1990-01-01
The mechanism responsible for the Venus nitric oxide (0,1) delta band nightglow observed in the Pioneer Venus Orbiter UV spectrometer (OUVS) images was investigated using the Venus Thermospheric General Circulation Model (Dickinson et al., 1984), modified to include simple odd nitrogen chemistry. Results obtained for the solar maximum conditions indicate that the recently revised dark-disk average NO intensity at 198.0 nm, based on statistically averaged OUVS measurements, can be reproduced with minor modifications in chemical rate coefficients. The results imply a nightside hemispheric downward N flux of (2.5-3) x 10 to the 9th/sq cm sec, corresponding to the dayside net production of N atoms needed for transport.
Matthews, Holly; Deakin, Jon; Rajab, May; Idris-Usman, Maryam
2017-01-01
The widespread introduction of artemisinin-based combination therapy has contributed to recent reductions in malaria mortality. Combination therapies have a range of advantages, including synergism, toxicity reduction, and delaying the onset of resistance acquisition. Unfortunately, antimalarial combination therapy is limited by the depleting repertoire of effective drugs with distinct target pathways. To fast-track antimalarial drug discovery, we have previously employed drug-repositioning to identify the anti-amoebic drug, emetine dihydrochloride hydrate, as a potential candidate for repositioned use against malaria. Despite its 1000-fold increase in in vitro antimalarial potency (ED50 47 nM) compared with its anti-amoebic potency (ED50 26–32 uM), practical use of the compound has been limited by dose-dependent toxicity (emesis and cardiotoxicity). Identification of a synergistic partner drug would present an opportunity for dose-reduction, thus increasing the therapeutic window. The lack of reliable and standardised methodology to enable the in vitro definition of synergistic potential for antimalarials is a major drawback. Here we use isobologram and combination-index data generated by CalcuSyn software analyses (Biosoft v2.1) to define drug interactivity in an objective, automated manner. The method, based on the median effect principle proposed by Chou and Talalay, was initially validated for antimalarial application using the known synergistic combination (atovaquone-proguanil). The combination was used to further understand the relationship between SYBR Green viability and cytocidal versus cytostatic effects of drugs at higher levels of inhibition. We report here the use of the optimised Chou Talalay method to define synergistic antimalarial drug interactivity between emetine dihydrochloride hydrate and atovaquone. The novel findings present a potential route to harness the nanomolar antimalarial efficacy of this affordable natural product. PMID:28257497
Earlier Snowmelt Changes the Ratio Between Early and Late Season Forest Productivity
NASA Astrophysics Data System (ADS)
Knowles, J. F.; Molotch, N. P.; Trujillo, E.; Litvak, M. E.
2017-12-01
Future projections of declining snowpack and increasing potential evaporation associated with climate warming are predicted to advance the timing of snowmelt in mountain ecosystems globally. This scenario has direct implications for snowmelt-driven forest productivity, but the net effect of temporally shifting moisture dynamics is unknown with respect to the annual carbon balance. Accordingly, this study uses both satellite- and tower-based observations to document the forest productivity response to snowpack and potential evaporation variability between 1989 and 2012 throughout the southern Rocky Mountain ecoregion, USA. These results show that a combination of low snow accumulation and record high potential evaporation in 2012 resulted in the 34-year minimum ecosystem productivity that could be indicative of future conditions. Moreover, early and late season productivity were significantly and inversely related, suggesting that future shifts toward earlier or reduced snowmelt could increase late-season moisture stress to vegetation and thus restrict productivity despite a longer growing season. This relationship was further subject to modification by summer precipitation, and the controls on the early/late season productivity ratio are explored within the context of ecosystem carbon storage in the future. Any perturbation to the carbon cycle at this scale represents a potential feedback to climate change since snow-covered forests represent an important global carbon sink.
Colvin, Daniel C.; Loveless, Mary E.; Does, Mark D.; Yue, Zou; Yankeelov, Thomas E.; Gore, John C.
2011-01-01
An improved method for detecting early changes in tumors in response to treatment, based on a modification of diffusion-weighted magnetic resonance imaging, has been demonstrated in an animal model. Early detection of therapeutic response in tumors is important both clinically and in pre-clinical assessments of novel treatments. Non-invasive imaging methods that can detect and assess tumor response early in the course of treatment, and before frank changes in tumor morphology are evident, are of considerable interest as potential biomarkers of treatment efficacy. Diffusion-weighted magnetic resonance imaging is sensitive to changes in water diffusion rates in tissues that result from structural variations in the local cellular environment, but conventional methods mainly reflect changes in tissue cellularity and do not convey information specific to micro-structural variations at sub-cellular scales. We implemented a modified imaging technique using oscillating gradients of the magnetic field for evaluating water diffusion rates over very short spatial scales that are more specific for detecting changes in intracellular structure that may precede changes in cellularity. Results from a study of orthotopic 9L gliomas in rat brains indicate that this method can detect changes as early as 24 hours following treatment with 1,3-bis(2-chloroethyl)-1-nitrosourea (BCNU), when conventional approaches do not find significant effects. These studies suggest that diffusion imaging using oscillating gradients may be used to obtain an earlier indication of treatment efficacy than previous magnetic resonance imaging methods. PMID:21190804
SU-F-T-428: An Optimization-Based Commissioning Tool for Finite Size Pencil Beam Dose Calculations
Li, Y; Tian, Z; Song, T
Purpose: Finite size pencil beam (FSPB) algorithms are commonly used to pre-calculate the beamlet dose distribution for IMRT treatment planning. FSPB commissioning, which usually requires fine tuning of the FSPB kernel parameters, is crucial to the dose calculation accuracy and hence the plan quality. Yet due to the large number of beamlets, FSPB commissioning could be very tedious. This abstract reports an optimization-based FSPB commissioning tool we have developed in MatLab to facilitate the commissioning. Methods: A FSPB dose kernel generally contains two types of parameters: the profile parameters determining the dose kernel shape, and a 2D scaling factors accountingmore » for the longitudinal and off-axis corrections. The former were fitted using the penumbra of a reference broad beam’s dose profile with Levenberg-Marquardt algorithm. Since the dose distribution of a broad beam is simply a linear superposition of the dose kernel of each beamlet calculated with the fitted profile parameters and scaled using the scaling factors, these factors could be determined by solving an optimization problem which minimizes the discrepancies between the calculated dose of broad beams and the reference dose. Results: We have commissioned a FSPB algorithm for three linac photon beams (6MV, 15MV and 6MVFFF). Dose of four field sizes (6*6cm2, 10*10cm2, 15*15cm2 and 20*20cm2) were calculated and compared with the reference dose exported from Eclipse TPS system. For depth dose curves, the differences are less than 1% of maximum dose after maximum dose depth for most cases. For lateral dose profiles, the differences are less than 2% of central dose at inner-beam regions. The differences of the output factors are within 1% for all the three beams. Conclusion: We have developed an optimization-based commissioning tool for FSPB algorithms to facilitate the commissioning, providing sufficient accuracy of beamlet dose calculation for IMRT optimization.« less
Number of Diverticulitis Episodes Before Resection and Factors Associated With Earlier Interventions
Simianu, Vlad V.; Fichera, Alessandro; Bastawrous, Amir L.; Davidson, Giana H.; Florence, Michael G.; Thirlby, Richard C.; Flum, David R.
2016-01-01
claims only, in 80.5% (2459 of 3054) if counting inpatient and outpatient claims only, and in 56.3% (1720 of 3054) if counting all types of claims. Based on all types of claims, patients having surgery after fewer than 3 episodes were of similar mean age compared with patients having delayed surgery (both 47.7 years, P = .91), were less likely to undergo laparoscopy (65.1% [1120 of 1720] vs 70.8% [944 of 1334], P = .001), and had more time between the last 2 episodes preceding surgery (157 vs 96 days, P < .001). Patients with health maintenance organization or capitated insurance plans had lower rates of early surgery (50.1% [247 of 493] vs 57.4% [1429 of 2490], P = .01) than those with other insurance plan types. CONCLUSIONS AND RELEVANCE After considering all types of diverticulitis claims, 56.3% (1720 of 3054) of elective resections for uncomplicated diverticulitis occurred after fewer than 3 episodes. Earlier surgery was not explained by younger age, laparoscopy, time between the last 2 episodes preceding surgery, or financial risk-bearing for patients. In delivering value-added surgical care, factors driving early, elective resection for diverticulitis need to be determined. PMID:26864286
Kress, Christian; Sadowski, Gabriele; Brandenbusch, Christoph
2016-10-01
The purification of therapeutic proteins is a challenging task with immediate need for optimization. Besides other techniques, aqueous 2-phase extraction (ATPE) of proteins has been shown to be a promising alternative to cost-intensive state-of-the-art chromatographic protein purification. Most likely, to enable a selective extraction, protein partitioning has to be influenced using a displacement agent to isolate the target protein from the impurities. In this work, a new displacement agent (lithium bromide [LiBr]) allowing for the selective separation of the target protein IgG from human serum albumin (represents the impurity) within a citrate-polyethylene glycol (PEG) ATPS is presented. In order to characterize the displacement suitability of LiBr on IgG, the mutual influence of LiBr and the phase formers on the aqueous 2-phase system (ATPS) and partitioning is investigated. Using osmotic virial coefficients (B22 and B23) accessible by composition gradient multiangle light-scattering measurements, the precipitating effect of LiBr on both proteins and an estimation of both protein partition coefficients is estimated. The stabilizing effect of LiBr on both proteins was estimated based on B22 and experimentally validated within the citrate-PEG ATPS. Our approach contributes to an efficient implementation of ATPE within the downstream processing development of therapeutic proteins. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Saravanan, A. V. Sai; Abishek, B.; Anantharaj, R.
2018-04-01
The fundamental natures of the molecular level interaction and charge transfer between specific radioactive elements and ionic liquids of 1-butyl-3-methylimidazolium bis (trifluoromethylsulfonyl) imide ([BMIM]+[NTf2]-), 1-Butyl-3-methylimidazolium ethylsulfate ([BMIM]+[ES]-) and 1-butyl-3-methylimidazolium tetrafluoroborate ([BMIM]+[BF4]-) were investigated utilising HF theory and B3LYP hybrid DFT. The ambiguity in reaction mechanism of the interacting species dictates to employ Effective Core Potential (ECP) basis sets such as UGBS, SDD, and SDDAll to account for the relativistic effects of deep core electrons in the system involving potential, heavy and hazardous radioactive elements present in nuclear waste. The SCF energy convergence of each system validates the characterisation of the molecular orbitals as a linear combination of atomic orbitals utilising fixed MO coefficients and the optimized geometry of each system is visualised based on which Mulliken partial charge analysis is carried out to account for the polarising behaviour of the radioactive element and charge transfer between the IL phase by comparison with the bare IL species.
NASA Astrophysics Data System (ADS)
Colaïtis, A.; Chapman, T.; Strozzi, D.; Divol, L.; Michel, P.
2018-03-01
A three-dimensional laser propagation model for computation of laser-plasma interactions is presented. It is focused on indirect drive geometries in inertial confinement fusion and formulated for use at large temporal and spatial scales. A modified tesselation-based estimator and a relaxation scheme are used to estimate the intensity distribution in plasma from geometrical optics rays. Comparisons with reference solutions show that this approach is well-suited to reproduce realistic 3D intensity field distributions of beams smoothed by phase plates. It is shown that the method requires a reduced number of rays compared to traditional rigid-scale intensity estimation. Using this field estimator, we have implemented laser refraction, inverse-bremsstrahlung absorption, and steady-state crossed-beam energy transfer with a linear kinetic model in the numerical code Vampire. Probe beam amplification and laser spot shapes are compared with experimental results and pf3d paraxial simulations. These results are promising for the efficient and accurate computation of laser intensity distributions in holhraums, which is of importance for determining the capsule implosion shape and risks of laser-plasma instabilities such as hot electron generation and backscatter in multi-beam configurations.
Calculating radiotherapy margins based on Bayesian modelling of patient specific random errors
NASA Astrophysics Data System (ADS)
Herschtal, A.; te Marvelde, L.; Mengersen, K.; Hosseinifard, Z.; Foroudi, F.; Devereux, T.; Pham, D.; Ball, D.; Greer, P. B.; Pichler, P.; Eade, T.; Kneebone, A.; Bell, L.; Caine, H.; Hindson, B.; Kron, T.
2015-02-01
Collected real-life clinical target volume (CTV) displacement data show that some patients undergoing external beam radiotherapy (EBRT) demonstrate significantly more fraction-to-fraction variability in their displacement (‘random error’) than others. This contrasts with the common assumption made by historical recipes for margin estimation for EBRT, that the random error is constant across patients. In this work we present statistical models of CTV displacements in which random errors are characterised by an inverse gamma (IG) distribution in order to assess the impact of random error variability on CTV-to-PTV margin widths, for eight real world patient cohorts from four institutions, and for different sites of malignancy. We considered a variety of clinical treatment requirements and penumbral widths. The eight cohorts consisted of a total of 874 patients and 27 391 treatment sessions. Compared to a traditional margin recipe that assumes constant random errors across patients, for a typical 4 mm penumbral width, the IG based margin model mandates that in order to satisfy the common clinical requirement that 90% of patients receive at least 95% of prescribed RT dose to the entire CTV, margins be increased by a median of 10% (range over the eight cohorts -19% to +35%). This substantially reduces the proportion of patients for whom margins are too small to satisfy clinical requirements.
NASA Astrophysics Data System (ADS)
Rahayuningsih, M.; Kartijono, N. E.; Arifin, M. S.
2018-03-01
Increasing number of staffs and academicians as a result of UNNES's popularity becoming a favourite university in Indonesia has demanded more facilities to support the learning process, student activities and campus operations. This condition has declined forest covered area in the campus, even though. Optimum extent must be prevented to support ecological function in campus areas. This research is conducted to determine the optimum areas of needed campus's forest based on CO2 emissions in the UNNES area in Sekaran sub-district. The results showed that forest need for campus of UNNES in 2017 is 14.25 ha, but the existing area is only 13.103 ha. Campus forest in western campus area is sufficient to absorb CO2 emissions with forest availability is about 8,147 ha while forest requirement is about 4.47 ha. Campus forest in eastern campus area is not sufficient to absorb CO2 emissions. The need of campus forest in eastern campus area is much bigger that is 9,78 ha from campus forest which available is about 4,956 ha. The results of this study can be used as a reference in the development of green space both on campus and in the city of UNNES Semarang.
Molybdenum-99 production calculation analysis of SAMOP reactor based on thorium nitrate fuel
NASA Astrophysics Data System (ADS)
Syarip; Togatorop, E.; Yassar
2018-03-01
SAMOP (Subcritical Assembly for Molybdenum-99 Production) has the potential to use thorium as fuel to produce 99Mo after modifying the design, but the production performance has not been discovered yet. A study needs to be done to obtain the correlation between 99Mo production with the mixed fuel composition of uranium and with SAMOP power on the modified SAMOP design. The study aims to obtain the production of 99Mo based thorium nitrate fuel on SAMOP’s modified designs. Monte Carlo N-Particle eXtended (MCNPX) is required to simulate the operation of the assembly by varying the composition of the uranium-thorium nitrate mixed fuel, geometry and power fraction on the SAMOP modified designs. The burnup command on the MCNPX is used to confirm the 99Mo production result. The assembly is simulated to operate for 6 days with subcritical neutron multiplication factor (keff = 0.97-0.99). The neutron multiplication factor of the modified design (keff) is 0.97, the activity obtained from 99Mo is 18.58 Ci at 1 kW power operation.
Using Ab-Initio Calculations to Appraise Stm-Based - and Kink-Formation Energies
NASA Astrophysics Data System (ADS)
Feibelman, Peter J.
2001-03-01
Ab-initio total energies can and should be used to test the typically model-dependent results of interpreting STM morphologies. The benefits of such tests are illustrated here by ab-initio energies of step- and kink-formation on Pb and Pt(111) which show that the STM-based values of the kink energies must be revised. On Pt(111), the computed kink-energies for (100)- and (111)-microfacet steps are about 0.25 and 0.18 eV. These results imply a specific ratio of formation energies for the two step types, namely 1.14, in excellent agreement with experiment. If kink-formation actually cost the same energy on the two step types, an inference drawn from scanning probe observations of step wandering,(M. Giesen et al., Surf. Sci. 366, 229(1996).) this ratio ought to be 1. In the case of Pb(111), though computed energies to form (100)- and (111)-microfacet steps agree with measurement, the ab-initio kink-formation energies for the two step types, 41 and 60 meV, are 40-50% below experimental values drawn from STM images.(K. Arenhold et al., Surf. Sci. 424, 271(1999).) The discrepancy results from interpreting the images with a step-stiffness vs. kink-energy relation appropriate to (100) but not (111) surfaces. Good agreement is found when proper account of the trigonal symmetry of Pb(111) is taken in reinterpreting the step-stiffness data.
Lam, Marnix G E H; Louie, John D; Abdelmaksoud, Mohamed H K; Fisher, George A; Cho-Phan, Cheryl D; Sze, Daniel Y
2014-07-01
To calculate absorbed radiation doses in patients treated with resin microspheres prescribed by the body surface area (BSA) method and to analyze dose-response and toxicity relationships. A retrospective review was performed of 45 patients with colorectal carcinoma metastases who received single-session whole-liver resin microsphere radioembolization. Prescribed treatment activity was calculated using the BSA method. Liver volumes and whole-liver absorbed doses (D(WL)) were calculated. D(WL) was correlated with toxicity and radiographic and biochemical response. The standard BSA-based administered activity (range, 0.85-2.58 GBq) did not correlate with D(WL) (mean, 50.4 Gy; range, 29.8-74.7 Gy; r = -0.037; P = .809) because liver weight was highly variable (mean, 1.89 kg; range, 0.94-3.42 kg) and strongly correlated with D(WL) (r = -0.724; P < .001) but was not accounted for in the BSA method. Patients with larger livers were relatively underdosed, and patients with smaller livers were relatively overdosed. Patients who received D(WL) > 50 Gy experienced more toxicity and adverse events (> grade 2 liver toxicity, 46% vs 17%; P < .05) but also responded better to the treatment than patients who received D(WL)< 50 Gy (disease control, 88% vs 24%; P < .01). Using the standard BSA formula, the administered activity did not correlate with D(WL). Based on this short-term follow-up after salvage therapy in patients with late stage metastatic colorectal carcinoma, dose-response and dose-toxicity relationships support using a protocol based on liver volume rather than BSA to prescribe the administered activity. Copyright © 2014 SIR. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Jinlong; Feng, Shuo; Wu, Qihui; Zheng, Xueqiang; Xu, Yuhua; Ding, Guoru
2014-12-01
Cognitive radio (CR) is a promising technology that brings about remarkable improvement in spectrum utilization. To tackle the hidden terminal problem, cooperative spectrum sensing (CSS) which benefits from the spatial diversity has been studied extensively. Since CSS is vulnerable to the attacks initiated by malicious secondary users (SUs), several secure CSS schemes based on Dempster-Shafer theory have been proposed. However, the existing works only utilize the current difference of SUs, such as the difference in SNR or similarity degree, to evaluate the trustworthiness of each SU. As the current difference is only one-sided and sometimes inaccurate, the statistical information contained in each SU's historical behavior should not be overlooked. In this article, we propose a robust CSS scheme based on Dempster-Shafer theory and trustworthiness degree calculation. It is carried out in four successive steps, which are basic probability assignment (BPA), trustworthiness degree calculation, selection and adjustment of BPA, and combination by Dempster-Shafer rule, respectively. Our proposed scheme evaluates the trustworthiness degree of SUs from both current difference aspect and historical behavior aspect and exploits Dempster-Shafer theory's potential to establish a `soft update' approach for the reputation value maintenance. It can not only differentiate malicious SUs from honest ones based on their historical behaviors but also reserve the current difference for each SU to achieve a better real-time performance. Abundant simulation results have validated that the proposed scheme outperforms the existing ones under the impact of different attack patterns and different number of malicious SUs.
The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose
Grebe, A.; Leveling, A.; Lu, T.
The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay gamma-quanta by the residuals in the activated structures and scoring the prompt doses of these gamma-quanta at arbitrary distances frommore » those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and showed a good agreement. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.« less
The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose
NASA Astrophysics Data System (ADS)
Grebe, A.; Leveling, A.; Lu, T.; Mokhov, N.; Pronskikh, V.
2018-01-01
The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay γ-quanta by the residuals in the activated structures and scoring the prompt doses of these γ-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and against experimental data from the CERF facility at CERN, and FermiCORD showed reasonable agreement with these. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.
Hu, Long; Xu, Zhiyu; Hu, Boqin; Lu, Zhi John
2017-01-09
Recent genomic studies suggest that novel long non-coding RNAs (lncRNAs) are specifically expressed and far outnumber annotated lncRNA sequences. To identify and characterize novel lncRNAs in RNA sequencing data from new samples, we have developed COME, a coding potential calculation tool based on multiple features. It integrates multiple sequence-derived and experiment-based features using a decompose-compose method, which makes it more accurate and robust than other well-known tools. We also showed that COME was able to substantially improve the consistency of predication results from other coding potential calculators. Moreover, COME annotates and characterizes each predicted lncRNA transcript with multiple lines of supporting evidence, which are not provided by other tools. Remarkably, we found that one subgroup of lncRNAs classified by such supporting features (i.e. conserved local RNA secondary structure) was highly enriched in a well-validated database (lncRNAdb). We further found that the conserved structural domains on lncRNAs had better chance than other RNA regions to interact with RNA binding proteins, based on the recent eCLIP-seq data in human, indicating their potential regulatory roles. Overall, we present COME as an accurate, robust and multiple-feature supported method for the identification and characterization of novel lncRNAs. The software implementation is available at https://github.com/lulab/COME. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
NASA Astrophysics Data System (ADS)
Kouznetsov, A.; Cully, C. M.
2017-12-01
During enhanced magnetic activities, large ejections of energetic electrons from radiation belts are deposited in the upper polar atmosphere where they play important roles in its physical and chemical processes, including VLF signals subionospheric propagation. Electron deposition can affect D-Region ionization, which are estimated based on ionization rates derived from energy depositions. We present a model of D-region ion production caused by an arbitrary (in energy and pitch angle) distribution of fast (10 keV - 1 MeV) electrons. The model relies on a set of pre-calculated results obtained using a general Monte Carlo approach with the latest version of the MCNP6 (Monte Carlo N-Particle) code for the explicit electron tracking in magnetic fields. By expressing those results using the ionization yield functions, the pre-calculated results are extended to cover arbitrary magnetic field inclinations and atmospheric density profiles, allowing ionization rate altitude profile computations in the range of 20 and 200 km at any geographic point of interest and date/time by adopting results from an external atmospheric density model (e.g. NRLMSISE-00). The pre-calculated MCNP6 results are stored in a CDF (Common Data Format) file, and IDL routines library is written to provide an end-user interface to the model.
Neff, Michael; Rauhut, Guntram
2014-02-05
Multidimensional potential energy surfaces obtained from explicitly correlated coupled-cluster calculations and further corrections for high-order correlation contributions, scalar relativistic effects and core-correlation energy contributions were generated in a fully automated fashion for the double-minimum benchmark systems OH3(+) and NH3. The black-box generation of the potentials is based on normal coordinates, which were used in the underlying multimode expansions of the potentials and the μ-tensor within the Watson operator. Normal coordinates are not the optimal choice for describing double-minimum potentials and the question remains if they can be used for accurate calculations at all. However, their unique definition is an appealing feature, which removes remaining errors in truncated potential expansions arising from different choices of curvilinear coordinate systems. Fully automated calculations are presented, which demonstrate, that the proposed scheme allows for the determination of energy levels and tunneling splittings as a routine application. Copyright © 2013 Elsevier B.V. All rights reserved.
Gibbs Sampler-Based λ-Dynamics and Rao-Blackwell Estimator for Alchemical Free Energy Calculation.
Ding, Xinqiang; Vilseck, Jonah Z; Hayes, Ryan L; Brooks, Charles L
2017-06-13
λ-dynamics is a generalized ensemble method for alchemical free energy calculations. In traditional λ-dynamics, the alchemical switch variable λ is treated as a continuous variable ranging from 0 to 1 and an empirical estimator is utilized to approximate the free energy. In the present article, we describe an alternative formulation of λ-dynamics that utilizes the Gibbs sampler framework, which we call Gibbs sampler-based λ-dynamics (GSLD). GSLD, like traditional λ-dynamics, can be readily extended to calculate free energy differences between multiple ligands in one simulation. We also introduce a new free energy estimator, the Rao-Blackwell estimator (RBE), for use in conjunction with GSLD. Compared with the current empirical estimator, the advantage of RBE is that RBE is an unbiased estimator and its variance is usually smaller than the current empirical estimator. We also show that the multistate Bennett acceptance ratio equation or the unbinned weighted histogram analysis method equation can be derived using the RBE. We illustrate the use and performance of this new free energy computational framework by application to a simple harmonic system as well as relevant calculations of small molecule relative free energies of solvation and binding to a protein receptor. Our findings demonstrate consistent and improved performance compared with conventional alchemical free energy methods.
Yoshioka, Akio; Fukuzawa, Kaori; Mochizuki, Yuji; Yamashita, Katsumi; Nakano, Tatsuya; Okiyama, Yoshio; Nobusawa, Eri; Nakajima, Katsuhisa; Tanaka, Shigenori
2011-09-01
Ab initio electronic-state calculations for influenza virus hemagglutinin (HA) trimer complexed with Fab antibody were performed on the basis of the fragment molecular orbital (FMO) method at the second and third-order Møller-Plesset (MP2 and MP3) perturbation levels. For the protein complex containing 2351 residues and 36,160 atoms, the inter-fragment interaction energies (IFIEs) were evaluated to illustrate the effective interactions between all the pairs of amino acid residues. By analyzing the calculated data on the IFIEs, we first discussed the interactions and their fluctuations between multiple domains contained in the trimer complex. Next, by combining the IFIE data between the Fab antibody and each residue in the HA antigen with experimental data on the hemadsorption activity of HA mutants, we proposed a protocol to predict probable mutations in HA. The proposed protocol based on the FMO-MP2.5 calculation can explain the historical facts concerning the actual mutations after the emergence of A/Hong Kong/1/68 influenza virus with subtype H3N2, and thus provides a useful methodology to enumerate those residue sites likely to mutate in the future. Copyright © 2011 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Huang, Shicheng; Tan, Likun; Hu, Nan; Grover, Hannah; Chu, Kevin; Chen, Zi
This reserach introduces a new numerical approach of calculating the post-buckling configuration of a thin rod embedded in elastic media. The theoretical base is the governing ODEs describing the balance of forces and moments, the length conservation, and the physics of bending and twisting by Laudau and Lifschitz. The numerical methods applied in the calculation are continuation method and Newton's method of iteration in combination with spectrum method. To the authors' knowledge, it is the first trial of directly applying the L-L theory to numerically studying the phenomenon of rod buckling in elastic medium. This method accounts for nonlinearity of geometry, thus is capable of calculating large deformation. The stability of this method is another advantage achieved by expressing the governing equations in a set of first-order derivative form. The wave length, amplitude, and decay effect all agree with the experiment without any further assumptions. This program can be applied to different occasions with varying stiffness of the elastic medai and rigidity of the rod.
Large-scale deformed QRPA calculations of the gamma-ray strength function based on a Gogny force
NASA Astrophysics Data System (ADS)
Martini, M.; Goriely, S.; Hilaire, S.; Péru, S.; Minato, F.
2016-01-01
The dipole excitations of nuclei play an important role in nuclear astrophysics processes in connection with the photoabsorption and the radiative neutron capture that take place in stellar environment. We present here the results of a large-scale axially-symmetric deformed QRPA calculation of the γ-ray strength function based on the finite-range Gogny force. The newly determined γ-ray strength is compared with experimental photoabsorption data for spherical as well as deformed nuclei. Predictions of γ-ray strength functions and Maxwellian-averaged neutron capture rates for Sn isotopes are also discussed.
NASA Astrophysics Data System (ADS)
Dimitroulis, Christos; Raptis, Theophanes; Raptis, Vasilios
2015-12-01
We present an application for the calculation of radial distribution functions for molecular centres of mass, based on trajectories generated by molecular simulation methods (Molecular Dynamics, Monte Carlo). When designing this application, the emphasis was placed on ease of use as well as ease of further development. In its current version, the program can read trajectories generated by the well-known DL_POLY package, but it can be easily extended to handle other formats. It is also very easy to 'hack' the program so it can compute intermolecular radial distribution functions for groups of interaction sites rather than whole molecules.
Paratte, J.M.; Pelloni, S.; Grimm, P.
1991-04-01
This paper analyzes the capability of various code systems and JEF-1-based nuclear data libraries to compute light water reactor lattices by comparing calculations with results from thermal reactor benchmark experiments TRX and BAPL and with previously published values. With the JEF-1 evaluation, eigenvalues are generally well predicted within 8 mk (1 mk = 0.001) or less by all code systems, and all methods give reasonable results for the measured reaction rate ratios within, or not too far from, the experimental uncertainty.
Metamorphosis Is Ancestral for Crown Euarthropods, and Evolved in the Cambrian or Earlier.
Wolfe, Joanna M
2017-09-01
Macroevolutionary developmental biology employs fossilized ontogenetic data and phylogenetic comparative methods to probe the evolution of development at ancient nodes. Despite the prevalence of ecologically differentiated larval forms in marine invertebrates, it has been frequently presumed that the ancestors of arthropods were direct developers, and that metamorphosis may not have evolved until the Ordovician or later. Using fossils and new dated phylogenies, I infer that metamorphosis was likely ancestral for crown arthropods, contradicting this assumption. Based on a published morphological dataset encompassing 217 exceptionally preserved fossil and 96 extant taxa, fossils were directly incorporated into both the topology and age estimates, as in "tip dating" analyses. Using data from post-embryonic fossils representing 25 species throughout stem and crown arthropod lineages (as well as most of the 96 extant taxa), characters for metamorphosis were assigned based on inferred ecological changes in development (e.g., changes in habitat and adaptive landscape). Under all phylogenetic hypotheses, metamorphosis was supported as most likely ancestral to both ecdysozoans and euarthropods. Care must be taken to account for potential drastic post-embryonic morphological changes in evolutionary analyses. Many stem group euarthrpods may have had ecologically differentiated larval stages that did not preserve in the fossil record. Moreover, a complex life cycle and planktonic ecology may have evolved in the Ediacaran or earlier, and may have typified the pre-Cambrian explosion "wormworld" prior to the origin of crown group euarthropods. © The Author 2017. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Inoue, N.; Kitada, N.; Irikura, K.
2013-12-01
A probability of surface rupture is important to configure the seismic source, such as area sources or fault models, for a seismic hazard evaluation. In Japan, Takemura (1998) estimated the probability based on the historical earthquake data. Kagawa et al. (2004) evaluated the probability based on a numerical simulation of surface displacements. The estimated probability indicates a sigmoid curve and increases between Mj (the local magnitude defined and calculated by Japan Meteorological Agency) =6.5 and Mj=7.0. The probability of surface rupture is also used in a probabilistic fault displacement analysis (PFDHA). The probability is determined from the collected earthquake catalog, which were classified into two categories: with surface rupture or without surface rupture. The logistic regression is performed for the classified earthquake data. Youngs et al. (2003), Ross and Moss (2011) and Petersen et al. (2011) indicate the logistic curves of the probability of surface rupture by normal, reverse and strike-slip faults, respectively. Takao et al. (2013) shows the logistic curve derived from only Japanese earthquake data. The Japanese probability curve shows the sharply increasing in narrow magnitude range by comparison with other curves. In this study, we estimated the probability of surface rupture applying the logistic analysis to the surface displacement derived from a surface displacement calculation. A source fault was defined in according to the procedure of Kagawa et al. (2004), which determined a seismic moment from a magnitude and estimated the area size of the asperity and the amount of slip. Strike slip and reverse faults were considered as source faults. We applied Wang et al. (2003) for calculations. The surface displacements with defined source faults were calculated by varying the depth of the fault. A threshold value as 5cm of surface displacement was used to evaluate whether a surface rupture reach or do not reach to the surface. We carried out the
Fischer, Michael; Bell, Robert G
2014-10-21
The influence of the nature of the cation on the interaction of the silicoaluminophosphate SAPO-34 with small hydrocarbons (ethane, ethylene, acetylene, propane, propylene) is investigated using periodic density-functional theory calculations including a semi-empirical dispersion correction (DFT-D). Initial calculations are used to evaluate which of the guest-accessible cation sites in the chabazite-type structure is energetically preferred for a set of ten cations, which comprises four alkali metals (Li(+), Na(+), K(+), Rb(+)), three alkaline earth metals (Mg(2+), Ca(2+), Sr(2+)), and three transition metals (Cu(+), Ag(+), Fe(2+)). All eight cations that are likely to be found at the SII site (centre of a six-ring) are then included in the following investigation, which studies the interaction with the hydrocarbon guest molecules. In addition to the interaction energies, some trends and peculiarities regarding the adsorption geometries are analysed, and electron density difference plots obtained from the calculations are used to gain insights into the dominant interaction types. In addition to dispersion interactions, electrostatic and polarisation effects dominate for the main group cations, whereas significant orbital interactions are observed for unsaturated hydrocarbons interacting with transition metal (TM) cations. The differences between the interaction energies obtained for pairs of hydrocarbons of interest (such as ethylene-ethane and propylene-propane) deliver some qualitative insights: if this energy difference is large, it can be expected that the material will exhibit a high selectivity in the adsorption-based separation of alkene-alkane mixtures, which constitutes a problem of considerable industrial relevance. While the calculations show that TM-exchanged SAPO-34 materials are likely to exhibit a very high preference for alkenes over alkanes, the strong interaction may render an application in industrial processes impractical due to the large amount
Biological consequences of earlier snowmelt from desert dust deposition in alpine landscapes.
Steltzer, Heidi; Landry, Chris; Painter, Thomas H; Anderson, Justin; Ayres, Edward
2009-07-14
Dust deposition to mountain snow cover, which has increased since the late 19(th) century, accelerates the rate of snowmelt by increasing the solar radiation absorbed by the snowpack. Snowmelt occurs earlier, but is decoupled from seasonal warming. Climate warming advances the timing of snowmelt and early season phenological events (e.g., the onset of greening and flowering); however, earlier snowmelt without warmer temperatures may have a different effect on phenology. Here, we report the results of a set of snowmelt manipulations in which radiation-absorbing fabric and the addition and removal of dust from the surface of the snowpack advanced or delayed snowmelt in the alpine tundra. These changes in the timing of snowmelt were superimposed on a system where the timing of snowmelt varies with topography and has been affected by increased dust loading. At the community level, phenology exhibited a threshold response to the timing of snowmelt. Greening and flowering were delayed before seasonal warming, after which there was a linear relationship between the date of snowmelt and the timing of phenological events. Consequently, the effects of earlier snowmelt on phenology differed in relation to topography, which resulted in increasing synchronicity in phenology across the alpine landscape with increasingly earlier snowmelt. The consequences of earlier snowmelt from increased dust deposition differ from climate warming and include delayed phenology, leading to synchronized growth and flowering across the landscape and the opportunity for altered species interactions, landscape-scale gene flow via pollination, and nutrient cycling.
Augustine, Chad
Existing methodologies for estimating the electricity generation potential of Enhanced Geothermal Systems (EGS) assume thermal recovery factors of 5% or less, resulting in relatively low volumetric electricity generation potentials for EGS reservoirs. This study proposes and develops a methodology for calculating EGS electricity generation potential based on the Gringarten conceptual model and analytical solution for heat extraction from fractured rock. The electricity generation potential of a cubic kilometer of rock as a function of temperature is calculated assuming limits on the allowed produced water temperature decline and reservoir lifetime based on surface power plant constraints. The resulting estimates of EGSmore » electricity generation potential can be one to nearly two-orders of magnitude larger than those from existing methodologies. The flow per unit fracture surface area from the Gringarten solution is found to be a key term in describing the conceptual reservoir behavior. The methodology can be applied to aid in the design of EGS reservoirs by giving minimum reservoir volume, fracture spacing, number of fractures, and flow requirements for a target reservoir power output. Limitations of the idealized model compared to actual reservoir performance and the implications on reservoir design are discussed.« less
Kletsov, Aleksey A; Glukhovskoy, Evgeny G; Chumakov, Aleksey S; Ortiz, Joseph V
2016-01-01
The conduction properties of DNA molecule, particularly its transverse conductance (electron transfer through nucleotide bridges), represent a point of interest for DNA chemistry community, especially for DNA sequencing. However, there is no fully developed first-principles theory for molecular conductance and current that allows one to analyze the transverse flow of electrical charge through a nucleotide base. We theoretically investigate the transverse electron transport through all four DNA nucleotide bases by implementing an unbiased ab initio theoretical approach, namely, the electron propagator theory. The electrical conductance and current through DNA nucleobases (guanine [G], cytosine [C], adenine [A] and thymine [T]) inserted into a model 1-nm Ag-Ag nanogap are calculated. The magnitudes of the calculated conductance and current are ordered in the following hierarchies: gA>gG>gC>gT and IG>IA>IT>IC correspondingly. The new distinguishing parameter for the nucleobase identification is proposed, namely, the onset bias magnitude. Nucleobases exhibit the following hierarchy with respect to this parameter: Vonset(A)
NASA Astrophysics Data System (ADS)
Fontaine, G.; Dufour, P.; Chayer, P.; Dupuis, J.; Brassard, P.
2015-06-01
The accretion-diffusion picture is the model par excellence for describing the presence of planetary debris polluting the atmospheres of relatively cool white dwarfs. Inferences on the process based on diffusion timescale arguments make the implicit assumption that the concentration gradient of a given metal at the base of the convection zone is negligible. This assumption is, in fact, not rigorously valid, but it allows the decoupling of the surface abundance from the evolving distribution of a given metal in deeper layers. A better approach is a full time-dependent calculation of the evolution of the abundance profile of an accreting-diffusing element. We used the same approach as that developed by Dupuis et al. to model accretion episodes involving many more elements than those considered by these authors. Our calculations incorporate the improvements to diffusion physics mentioned in Paper I. The basic assumption in the Dupuis et al. approach is that the accreted metals are trace elements, i.e, that they have no effects on the background (DA or non-DA) stellar structure. This allows us to consider an arbitrary number of accreting elements.
Zhekova, Hristina R; Seth, Michael; Ziegler, Tom
2011-11-14
We have recently developed a methodology for the calculation of exchange coupling constants J in weakly interacting polynuclear metal clusters. The method is based on unrestricted and restricted second order spin-flip constricted variational density functional theory (SF-CV(2)-DFT) and is here applied to eight binuclear copper systems. Comparison of the SF-CV(2)-DFT results with experiment and with results obtained from other DFT and wave function based methods has been made. Restricted SF-CV(2)-DFT with the BH&HLYP functional yields consistently J values in excellent agreement with experiment. The results acquired from this scheme are comparable in quality to those obtained by accurate multi-reference wave function methodologies such as difference dedicated configuration interaction and the complete active space with second-order perturbation theory. © 2011 American Institute of Physics
Katsuta, Yoshiyuki; Kadoya, Noriyuki; Fujita, Yukio; Shimizu, Eiji; Matsunaga, Kenichi; Matsushita, Haruo; Majima, Kazuhiro; Jingu, Keiichi
2017-10-01
A log file-based method cannot detect dosimetric changes due to linac component miscalibration because log files are insensitive to miscalibration. Herein, clinical impacts of dosimetric changes on a log file-based method were determined. Five head-and-neck and five prostate plans were applied. Miscalibration-simulated log files were generated by inducing a linac component miscalibration into the log file. Miscalibration magnitudes for leaf, gantry, and collimator at the general tolerance level were ±0.5mm, ±1°, and ±1°, respectively, and at a tighter tolerance level achievable on current linac were ±0.3mm, ±0.5°, and ±0.5°, respectively. Re-calculations were performed on patient anatomy using log file data. Changes in tumor control probability/normal tissue complication probability from treatment planning system dose to re-calculated dose at the general tolerance level was 1.8% on planning target volume (PTV) and 2.4% on organs at risk (OARs) in both plans. These changes at the tighter tolerance level were improved to 1.0% on PTV and to 1.5% on OARs, with a statistically significant difference. We determined the clinical impacts of dosimetric changes on a log file-based method using a general tolerance level and a tighter tolerance level for linac miscalibration and found that a tighter tolerance level significantly improved the accuracy of the log file-based method. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Panni, Roheena Z; Ashfaq, Awais; Amanullah, Muhammad M
2011-12-29
Congenital heart disease (CHD) accounts for a major proportion of disease in the pediatric age group. The objective of the study was to estimate the cost of illness associated with CHD pre, intra and postoperatively; among patients referred to a tertiary care hospital in Karachi, Pakistan. This is the first study conducted to estimate the cost of managing CHD in Pakistan. A prevalence based cost of illness study design was used to estimate the cost of cardiac surgery (corrective & palliative) for congenital heart defects in children ≤ 5 years of age from June 2006 to June 2009. A total of 120 patients were enrolled after obtaining an informed consent and the data was collected using a pre-tested questionnaire. The mean age at the time of surgery in group A (1-12 mo age) was 6.08 ± 2.80 months and in group B (1-5 yrs) was 37.10 ± 19.94 months. The cost of surgical admission was found to be significantly higher in the older group, p = 0.001. The total number and cost of post-operative outpatient visits was also higher in group B, p = 0.003. Pre and post operative hospital admissions were not found to be significantly different among the two groups, p = 0.166 and 0.627, respectively. The number of complications were found to be different between the two groups (p = 0.019). Majority of these were contributed by hemorrhage and post-operative seizures. This study concluded that significant expenditure is incurred by people with CHD; with the implication that resources could be saved by earlier detection and awareness campaigns.
Clinical Evidence for the Earlier Initiation of Insulin Therapy in Type 2 Diabetes
2013-01-01
Abstract The natural history of type 2 diabetes mellitus (T2DM) is a relentless progression of β-cell failure and dysregulation of β-cell function with increasing metabolic derangement. Insulin remains the only glucose-lowering therapy that is efficacious throughout this continuum. However, the timing of introduction and the choice of insulin therapy remain contentious because of the heterogeneity of T2DM and the well-recognized behavioral and therapeutic challenges associated with this mode of therapy. Nevertheless, the early initiation of basal insulin has been shown to improve glycemic control and affect long-term outcomes in people with T2DM and is a treatment strategy supported by international guidelines as part of an individualized approach to chronic disease management. The rationale for early initiation of insulin is based on evidence demonstrating multifaceted benefits, including overcoming the glucotoxic effects of hyperglycemia, thereby facilitating “β-cell rest,” and preserving β-cell mass and function, while also improving insulin sensitivity. Independent of its effects on glycemic control, insulin possesses anti-inflammatory and antioxidant properties that may help protect against endothelial dysfunction and damage resulting in vascular disease. Insulin therapy and the achievement of good glycemic control earlier in T2DM provide long-term protection to end organs via “metabolic memory” regardless of subsequent treatments and degree of glycemic control. This is evidenced from long-term observations continuing from trials such as the United Kingdom Prospective Diabetes Study. As such, early initiation of insulin therapy may not only help to avoid the effects of prolonged glycemic burden, but may also positively alter the course of disease progression. PMID:23786228
Kornhuber, H H
1983-01-01
Data supporting the glutamate hypothesis of schizophrenia are presented. The glutamate hypothesis is linked to the dopamine hypothesis by the fact that dopamine synapses inhibit the release of glutamate in the striate and mesolimbic system. The glutamate hypothesis of schizophrenia may open a way to find better drugs for treatment. The concept of schizophrenia I is described. It consists of "negative symptoms" such as disconcentration or reduction of energy. Schizophrenia I precedes and follows schizophrenia II with "positive symptoms," e.g. hallucinations and delusions. Schizophrenia I so far cannot be diagnosed as schizophrenia unless schizophrenia II appears. Chemical, physiological or neuropsychological methods for the diagnosis of schizophrenia I would render an earlier treatment of schizophrenia possible and thus make social and occupational rehabilitation more efficient. An objective diagnosis of schizophrenia I may also elucidate the mode of genetic transmission of schizophrenia. Several neuropsychological methods distinguish schizophrenic patients as a group from normals. Some of them are based on a specific disturbance of long term concentration. The EEG also distinguishes schizophrenics from normals when analyzed during voluntary movement. For schizophrenics it takes more effort to initiate a voluntary movement, and there are several features of the EEG correlated to this. Moreover, the longer motor reaction time of schizophrenics is paralleled by a longer duration of the Bereitschaftspotential in schizophrenia. Furthermore, there is a difference in the theta rhythm between schizophrenic patients and normals in a task which requires concentration. Some of the children of schizophrenic parents show a disturbance of concentration in both reaction time tasks and the d 2 test.(ABSTRACT TRUNCATED AT 250 WORDS)
Later endogenous circadian temperature nadir relative to an earlier wake time in older people
NASA Technical Reports Server (NTRS)
Duffy, J. F.; Dijk, D. J.; Klerman, E. B.; Czeisler, C. A.
1998-01-01
The contribution of the circadian timing system to the age-related advance of sleep-wake timing was investigated in two experiments. In a constant routine protocol, we found that the average wake time and endogenous circadian phase of 44 older subjects were earlier than that of 101 young men. However, the earlier circadian phase of the older subjects actually occurred later relative to their habitual wake time than it did in young men. These results indicate that an age-related advance of circadian phase cannot fully account for the high prevalence of early morning awakening in healthy older people. In a second study, 13 older subjects and 10 young men were scheduled to a 28-h day, such that they were scheduled to sleep at many circadian phases. Self-reported awakening from scheduled sleep episodes and cognitive throughput during the second half of the wake episode varied markedly as a function of circadian phase in both groups. The rising phase of both rhythms was advanced in the older subjects, suggesting an age-related change in the circadian regulation of sleep-wake propensity. We hypothesize that under entrained conditions, these age-related changes in the relationship between circadian phase and wake time are likely associated with self-selected light exposure at an earlier circadian phase. This earlier exposure to light could account for the earlier clock hour to which the endogenous circadian pacemaker is entrained in older people and thereby further increase their propensity to awaken at an even earlier time.
Hattori, Yusuke; Ishibashi, Kohei; Noda, Takashi; Okamura, Hideo; Kanzaki, Hideaki; Anzai, Toshihisa; Yasuda, Satoshi; Kusano, Kengo
2017-09-01
We describe the case of a 37-year-old woman who presented with complete right bundle branch block and right axis deviation. She was admitted to our hospital due to severe heart failure and was dependent on inotropic agents. Cardiac resynchronization therapy was initiated but did not improve her condition. After the optimization of the pacing timing, we performed earlier right ventricular pacing, which led to an improvement of her heart failure. Earlier right ventricular pacing should be considered in patients with complete right bundle branch block and right axis deviation when cardiac resynchronization therapy is not effective.
Hofbauer, Julia; Kirisits, Christian; Resch, Alexandra; Xu, Yingjie; Sturdza, Alina; Pötter, Richard; Nesvacil, Nicole
2016-04-01
To analyze the impact of heterogeneity-corrected dose calculation on dosimetric quality parameters in gynecological and breast brachytherapy using Acuros, a grid-based Boltzmann equation solver (GBBS), and to evaluate the shielding effects of different cervix brachytherapy applicators. Calculations with TG-43 and Acuros were based on computed tomography (CT) retrospectively, for 10 cases of accelerated partial breast irradiation and 9 cervix cancer cases treated with tandem-ring applicators. Phantom CT-scans of different applicators (plastic and titanium) were acquired. For breast cases the V20Gyαβ3 to lung, the D0.1cm(3) , D1cm(3) , D2cm(3) to rib, the D0.1cm(3) , D1cm(3) , D10cm(3) to skin, and Dmax for all structures were reported. For cervix cases, the D0.1cm(3) , D2cm(3) to bladder, rectum and sigmoid, and the D50, D90, D98, V100 for the CTVHR were reported. For the phantom study, surrogates for target and organ at risk were created for a similar dose volume histogram (DVH) analysis. Absorbed dose and equivalent dose to 2 Gy fractionation (EQD2) were used for comparison. Calculations with TG-43 overestimated the dose for all dosimetric indices investigated. For breast, a decrease of ~8% was found for D10cm(3) to the skin and 5% for D2cm(3) to rib, resulting in a difference ~ -1.5 Gy EQD2 for overall treatment. Smaller effects were found for cervix cases with the plastic applicator, with up to -2% (-0.2 Gy EQD2) per fraction for organs at risk and -0.5% (-0.3 Gy EQD2) per fraction for CTVHR. The shielding effect of the titanium applicator resulted in a decrease of 2% for D2cm(3) to the organ at risk versus 0.7% for plastic. Lower doses were reported when calculating with Acuros compared to TG-43. Differences in dose parameters were larger in breast cases. A lower impact on clinical dose parameters was found for the cervix cases. Applicator material causes systematic shielding effects that can be taken into account.
A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC)
NASA Astrophysics Data System (ADS)
Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B.; Jia, Xun
2015-09-01
Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia’s CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE’s random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogenous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by
A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC).
Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun
2015-10-07
Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia's CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE's random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogenous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by
Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A
2017-04-01
In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 Gigabits per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation time predicted matched experimentally observations within 1-5 %. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences, by distributing the work among several processes and implemented optimization strategies. The results showed that a cloud-based computation engine was a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
Gangarapu, Satesh; Marcelis, Antonius T M; Zuilhof, Han
2013-04-02
The pKa of the conjugate acids of alkanolamines, neurotransmitters, alkaloid drugs and nucleotide bases are calculated with density functional methods (B3LYP, M08-HX and M11-L) and ab initio methods (SCS-MP2, G3). Implicit solvent effects are included with a conductor-like polarizable continuum model (CPCM) and universal solvation models (SMD, SM8). G3, SCS-MP2 and M11-L methods coupled with SMD and SM8 solvation models perform well for alkanolamines with mean unsigned errors below 0.20 pKa units, in all cases. Extending this method to the pKa calculation of 35 nitrogen-containing compounds spanning 12 pKa units showed an excellent correlation between experimental and computational pKa values of these 35 amines with the computationally low-cost SM8/M11-L density functional approach. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Lu, M. F.; Zhou, C. P.; Li, Q. Q.; Zhang, C. L.; Shi, H. F.
2018-01-01
In order to improve the photocatalytic activity under visible-light irradiation, we adopted first principle calculations based on density functional theory (DFT) to calculate the electronic structures of B site transition metal element doped InNbO4. The results indicated that the complete hybridization of Nb 4d states and some Ti 3d states contributed to the new conduction band of Ti doped InNbO4, barely changing the position of band edge. For Cr doping, some localized Cr 3d states were introduced into the band gap. Nonetheless, the potential of localized levels was too positive to cause visible-light reaction. When it came to Cu doping, the band gap was almost same with that of InNbO4 as well as some localized Cu 3d states appeared above the top of VB. The introduction of localized energy levels benefited electrons to migrate from valence band (VB) to conduction band (CB) by absorbing lower energy photons, realizing visible-light response.
NASA Astrophysics Data System (ADS)
Xu, Yi; Luo, Wen; Balabanski, Dimiter; Goriely, Stephane; Matei, Catalin; Tesileanu, Ovidiu
2017-09-01
The astrophysical p-process is an important way of nucleosynthesis to produce the stable and proton-rich nuclei beyond Fe which can not be reached by the s- and r-processes. In the present study, the astrophysical reaction rates of (γ,n), (γ,p), and (γ,α) reactions are computed within the modern reaction code TALYS for about 3000 stable and proton-rich nuclei with 12 < Z < 110. The nuclear structure ingredients involved in the calculation are determined from experimental data whenever available and, if not, from global microscopic nuclear models. In particular, both of the Wood-Saxon potential and the double folding potential with density dependent M3Y (DDM3Y) effective interaction are used for the calculations. It is found that the photonuclear reaction rates are very sensitive to the nuclear potential, and the better determination of nuclear potential would be important to reduce the uncertainties of reaction rates. Meanwhile, the Extreme Light Infrastructure-Nuclear Physics (ELI-NP) facility is being developed, which will provide the great opportunity to experimentally study the photonuclear reactions in p-process. Simulations of the experimental setup for the measurements of the photonuclear reactions 96Ru(γ,p) and 96Ru(γ,α) are performed. It is shown that the experiments of photonuclear reactions in p-process based on ELI-NP are quite promising.
Niedz, Randall P.
2016-01-01
ARS-Media for Excel is an ion solution calculator that uses “Microsoft Excel” to generate recipes of salts for complex ion mixtures specified by the user. Generating salt combinations (recipes) that result in pre-specified target ion values is a linear programming problem. Excel’s Solver add-on solves the linear programming equation to generate a recipe. Calculating a mixture of salts to generate exact solutions of complex ionic mixtures is required for at least 2 types of problems– 1) formulating relevant ecological/biological ionic solutions such as those from a specific lake, soil, cell, tissue, or organ and, 2) designing ion confounding-free experiments to determine ion-specific effects where ions are treated as statistical factors. Using ARS-Media for Excel to solve these two problems is illustrated by 1) exactly reconstructing a soil solution representative of a loamy agricultural soil and, 2) constructing an ion-based experiment to determine the effects of substituting Na+ for K+ on the growth of a Valencia sweet orange nonembryogenic cell line. PMID:27812202
Niedz, Randall P
2016-01-01
ARS-Media for Excel is an ion solution calculator that uses "Microsoft Excel" to generate recipes of salts for complex ion mixtures specified by the user. Generating salt combinations (recipes) that result in pre-specified target ion values is a linear programming problem. Excel's Solver add-on solves the linear programming equation to generate a recipe. Calculating a mixture of salts to generate exact solutions of complex ionic mixtures is required for at least 2 types of problems- 1) formulating relevant ecological/biological ionic solutions such as those from a specific lake, soil, cell, tissue, or organ and, 2) designing ion confounding-free experiments to determine ion-specific effects where ions are treated as statistical factors. Using ARS-Media for Excel to solve these two problems is illustrated by 1) exactly reconstructing a soil solution representative of a loamy agricultural soil and, 2) constructing an ion-based experiment to determine the effects of substituting Na+ for K+ on the growth of a Valencia sweet orange nonembryogenic cell line.
NASA Astrophysics Data System (ADS)
Balzani, Daniel; Gandhi, Ashutosh; Tanaka, Masato; Schröder, Jörg
2015-05-01
In this paper a robust approximation scheme for the numerical calculation of tangent stiffness matrices is presented in the context of nonlinear thermo-mechanical finite element problems and its performance is analyzed. The scheme extends the approach proposed in Kim et al. (Comput Methods Appl Mech Eng 200:403-413, 2011) and Tanaka et al. (Comput Methods Appl Mech Eng 269:454-470, 2014 and bases on applying the complex-step-derivative approximation to the linearizations of the weak forms of the balance of linear momentum and the balance of energy. By incorporating consistent perturbations along the imaginary axis to the displacement as well as thermal degrees of freedom, we demonstrate that numerical tangent stiffness matrices can be obtained with accuracy up to computer precision leading to quadratically converging schemes. The main advantage of this approach is that contrary to the classical forward difference scheme no round-off errors due to floating-point arithmetics exist within the calculation of the tangent stiffness. This enables arbitrarily small perturbation values and therefore leads to robust schemes even when choosing small values. An efficient algorithmic treatment is presented which enables a straightforward implementation of the method in any standard finite-element program. By means of thermo-elastic and thermo-elastoplastic boundary value problems at finite strains the performance of the proposed approach is analyzed.
NASA Astrophysics Data System (ADS)
Belal, Arafa A. M.; Zayed, M. A.; El-Desawy, M.; Rakha, Sh. M. A. H.
2015-03-01
Three Schiff's bases AI (2(1-hydrazonoethyl)phenol), AII (2, 4-dibromo 6-(hydrazonomethyl)phenol) and AIII (2(hydrazonomethyl)phenol) were prepared as new hydrazone compounds via condensation reactions with molar ratio (1:1) of reactants. Firstly by reaction of 2-hydroxy acetophenone solution and hydrazine hydrate; it gives AI. Secondly condensation between 3,5-dibromo-salicylaldehyde and hydrazine hydrate gives AII. Thirdly condensation between salicylaldehyde and hydrazine hydrate gives AIII. The structures of AI-AIII were characterized by elemental analysis (EA), mass (MS), FT-IR and 1H NMR spectra, and thermal analyses (TG, DTG, and DTA). The activation thermodynamic parameters, such as, ΔE∗, ΔH∗, ΔS∗ and ΔG∗ were calculated from the TG curves using Coats-Redfern method. It is important to investigate their molecular structures to know the active groups and weak bond responsible for their biological activities. Consequently in the present work, the obtained thermal (TA) and mass (MS) practical results are confirmed by semi-empirical MO-calculations (MOCS) using PM3 procedure. Their biological activities have been tested in vitro against Escherichia coli, Proteus vulgaris, Bacillissubtilies and Staphylococcus aurous bacteria in order to assess their anti-microbial potential.
40 CFR 87.21 - Exhaust emission standards for Tier 4 and earlier engines.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Emissions (New Aircraft Gas Turbine Engines) § 87.21 Exhaust emission standards for Tier 4 and earlier... standards. (a) Exhaust emissions of smoke from each new aircraft gas turbine engine of class T8 manufactured... from each new aircraft gas turbine engine of class TF and of rated output of 129 kilonewtons thrust or...
40 CFR 87.21 - Exhaust emission standards for Tier 4 and earlier engines.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Emissions (New Aircraft Gas Turbine Engines) § 87.21 Exhaust emission standards for Tier 4 and earlier... standards. (a) Exhaust emissions of smoke from each new aircraft gas turbine engine of class T8 manufactured... from each new aircraft gas turbine engine of class TF and of rated output of 129 kilonewtons thrust or...
Reading-Related Skills in Earlier- and Later-Schooled Children
ERIC Educational Resources Information Center
Cunningham, Anna J.; Carroll, Julia M.
2011-01-01
We investigate the effects of age-related factors and formal instruction on the development of reading-related skills in children aged 4 and 7 years. Age effects were determined by comparing two groups of children at the onset of formal schooling; one aged 7 (later-schooled) and one aged 4 (earlier-schooled). Schooling effects were measured by…
ERIC Educational Resources Information Center
Evans, Judy P.; Taylor, Jerome
1995-01-01
Reviews the theory of reasoned action to demonstrate how it can be applied to understanding gang violence, and illustrates its potential applicability to a pilot sample of 30 contemporary and 18 earlier gangs living in a large metropolitan community. Results indicate this theory has been helpful in explaining higher levels of violence in…
Earlier warning: a multi-indicator approach to monitoring trends in the illicit use of medicines.
Mounteney, Jane; Haugland, Siren
2009-03-01
The availability of medicines on the illicit drug market is currently high on the international policy agenda, linked to adverse health consequences including addiction, drug related overdoses and injection related problems. Continuous surveillance of illicit use of medicines allows for earlier identification and reporting of emerging trends and increased possibilities for earlier intervention to prevent spread of use and drug related harm. This paper aims to identify data sources capable of monitoring the illicit use of medicines; present trend findings for Rohypnol and Subutex using a multi-indicator monitoring approach; and consider the relevance of such models for policy makers. Data collection and analysis were undertaken in Bergen, Norway, using the Bergen Earlier Warning System (BEWS), a multi-indicator drug monitoring system. Data were gathered at six monthly intervals from April 2002 to September 2006. Drug indicator data from seizures, treatment, pharmacy sales, helplines, key informants and media monitoring were triangulated and an aggregated differential was used to plot trends. Results for the 4-year period showed a decline in the illicit use of Rohypnol and an increase in the illicit use of Subutex. Multi-indicator surveillance models can play a strategic role in the earlier identification and reporting of emerging trends in illicit use of medicines.
Hope, Kirsty; Durrheim, David N; Muscatello, David; Merritt, Tony; Zheng, Wei; Massey, Peter; Cashman, Patrick; Eastwood, Keith
2008-08-01
To retrospectively review the performance of a near real-time Emergency Department (ED) Syndromic Surveillance System operating in New South Wales for identifying pneumonia outbreaks of public health importance. Retrospective data was obtained from the NSW Emergency Department data collection for a rural hospital that has experienced a cluster of pneumonia diagnoses among teenage males in August 2006. ED standard reports were examined for signals in the overall count for each respiratory syndrome, and for elevated counts in individual subgroups including; age, sex and admission to hospital status. Using the current thresholds, the ED syndromic surveillance system would have trigged a signal for pneumonia syndrome in children aged 5-16 years four days earlier than the notification by a paediatrician and this signal was maintained for 14 days. If the ED syndromic surveillance system had been operating it could have identified the outbreak earlier than the paediatrician's notification. This may have permitted an earlier public health response. By understanding the behaviour of syndromes during outbreaks of public health importance, response protocols could be developed to facilitate earlier implementation of control measures.
Tidal Wave II Revisited: A Review of Earlier Enrollment Projections for California Higher Education.
ERIC Educational Resources Information Center
Hayward, Gerald C.; Breneman, David W.; Estrada, Leobardo F.
This report examined enrollment projections for higher education institutions in California in relation to earlier projections conducted in the mid-1990s that forecasted steep declines in enrollment. It notes that California's remarkable economic recovery over the last several years has allowed it to fund higher education enrollment growth at a…
Smoking is associated with earlier time to revision of total knee arthroplasty.
Lim, Chin Tat; Goodman, Stuart B; Huddleston, James I; Harris, Alex H S; Bhowmick, Subhrojyoti; Maloney, William J; Amanatullah, Derek F
2017-10-01
Smoking is associated with early postoperative complications, increased length of hospital stay, and an increased risk of revision after total knee arthroplasty (TKA). However, the effect of smoking on time to revision TKA is unknown. A total of 619 primary TKAs referred to an academic tertiary center for revision TKA were retrospectively stratified according to the patient smoking status. Smoking status was then analyzed for associations with time to revision TKA using a Chi square test. The association was also analyzed according to the indication for revision TKA. Smokers (37/41, 90%) have an increased risk of earlier revision for any reason compared to non-smokers (274/357, 77%, p=0.031). Smokers (37/41, 90%) have an increased risk of earlier revision for any reason compared to ex-smokers (168/221, 76%, p=0.028). Subgroup analysis did not reveal a difference in indication for revision TKA (p>0.05). Smokers are at increased risk of earlier revision TKA when compared to non-smokers and ex-smokers. The risk for ex-smokers was similar to that of non-smokers. Smoking appears to have an all-or-none effect on earlier revision TKA as patients who smoked more did not have higher risk of early revision TKA. These results highlight the need for clinicians to urge patients not to begin smoking and encourage smokers to quit smoking prior to primary TKA. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Pappas, Eleftherios P.; Zoros, Emmanouil; Moutsatsos, Argyris; Peppa, Vasiliki; Zourari, Kyveli; Karaiskos, Pantelis; Papagiannis, Panagiotis
2017-05-01
There is an acknowledged need for the design and implementation of physical phantoms appropriate for the experimental validation of model-based dose calculation algorithms (MBDCA) introduced recently in 192Ir brachytherapy treatment planning systems (TPS), and this work investigates whether it can be met. A PMMA phantom was prepared to accommodate material inhomogeneities (air and Teflon), four plastic brachytherapy catheters, as well as 84 LiF TLD dosimeters (MTS-100M 1 × 1 × 1 mm3 microcubes), two radiochromic films (Gafchromic EBT3) and a plastic 3D dosimeter (PRESAGE). An irradiation plan consisting of 53 source dwell positions was prepared on phantom CT images using a commercially available TPS and taking into account the calibration dose range of each detector. Irradiation was performed using an 192Ir high dose rate (HDR) source. Dose to medium in medium, Dmm , was calculated using the MBDCA option of the same TPS as well as Monte Carlo (MC) simulation with the MCNP code and a benchmarked methodology. Measured and calculated dose distributions were spatially registered and compared. The total standard (k = 1) spatial uncertainties for TLD, film and PRESAGE were: 0.71, 1.58 and 2.55 mm. Corresponding percentage total dosimetric uncertainties were: 5.4-6.4, 2.5-6.4 and 4.85, owing mainly to the absorbed dose sensitivity correction and the relative energy dependence correction (position dependent) for TLD, the film sensitivity calibration (dose dependent) and the dependencies of PRESAGE sensitivity. Results imply a LiF over-response due to a relative intrinsic energy dependence between 192Ir and megavoltage calibration energies, and a dose rate dependence of PRESAGE sensitivity at low dose rates (<1 Gy min-1). Calculations were experimentally validated within uncertainties except for MBDCA results for points in the phantom periphery and dose levels <20%. Experimental MBDCA validation is laborious, yet feasible. Further
Kessler, Ronald C.; Aguilar-Gaxiola, Sergio; Alonso, Jordi; Bromet, Evelyn J.; Gureje, Oye; Karam, Elie G.; Koenen, Karestan C.; Lee, Sing; Liu, Howard; Pennell, Beth-Ellen; Petukhova, Maria V.; Sampson, Nancy A.; Shahly, Victoria L.; Stein, Dan J.; Atwoli, Lukoye; Borges, Guilherme; Bunting, Brendan; de Girolamo, Giovanni; Gluzman, Semyon; Haro, Josep Maria; Hinkov, Hristo; Kawakami, Norito; Kovess-Masfety, Viviane; Navarro-Mateu, Fernando; Posada-Villa, Jose; Scott, Kate M.; Shalev, Arieh Y.; Have, Margreet ten; Torres, Yolanda; Viana, Maria Carmen; Zaslavsky, Alan M.
2017-01-01
Although earlier trauma exposure is known to predict post-traumatic stress disorder (PTSD) after subsequent traumas, it is unclear if this association is limited to cases where the earlier trauma led to PTSD. Resolution of this uncertainty has important implications for research on pre-trauma vulnerability to PTSD. We examined this issue in the WHO World Mental Health (WMH) Surveys with 34,676 respondents who reported lifetime trauma exposure. One lifetime trauma was selected randomly for each respondent. DSM-IV PTSD due to that trauma was assessed. We reported in a previous paper that four earlier traumas involving interpersonal violence significantly predicted PTSD after subsequent random traumas (OR=1.3–2.5). We also assessed 14 lifetime DSM-IV mood, anxiety, disruptive behavior, and substance disorders prior to random traumas. We show in the current report that only prior anxiety disorders significantly predicted PTSD in a multivariate model (OR=1.5–4.3) and that these disorders interacted significantly with three of the earlier traumas (witnessing atrocities, physical violence victimization, rape). History of witnessing atrocities significantly predicted PTSD after subsequent random traumas only among respondents with prior PTSD (OR=5.6). Histories of physical violence victimization (OR=1.5) and rape after age 17 (OR=17.6) significantly predicted only among respondents with no history of prior anxiety disorders. Although only preliminary due to reliance on retrospective reports, these results suggest that history of anxiety disorders and history of a limited number of earlier traumas might usefully be targeted in future prospective studies as distinct foci of research on individual differences in vulnerability to PTSD after subsequent traumas. PMID:28924183
Kessler, R C; Aguilar-Gaxiola, S; Alonso, J; Bromet, E J; Gureje, O; Karam, E G; Koenen, K C; Lee, S; Liu, H; Pennell, B-E; Petukhova, M V; Sampson, N A; Shahly, V; Stein, D J; Atwoli, L; Borges, G; Bunting, B; de Girolamo, G; Gluzman, S F; Haro, J M; Hinkov, H; Kawakami, N; Kovess-Masfety, V; Navarro-Mateu, F; Posada-Villa, J; Scott, K M; Shalev, A Y; Ten Have, M; Torres, Y; Viana, M C; Zaslavsky, A M
2017-09-19
Although earlier trauma exposure is known to predict posttraumatic stress disorder (PTSD) after subsequent traumas, it is unclear whether this association is limited to cases where the earlier trauma led to PTSD. Resolution of this uncertainty has important implications for research on pretrauma vulnerability to PTSD. We examined this issue in the World Health Organization (WHO) World Mental Health (WMH) Surveys with 34 676 respondents who reported lifetime trauma exposure. One lifetime trauma was selected randomly for each respondent. DSM-IV (Diagnostic and Statistical Manual of Mental Disorders, 4th Edition) PTSD due to that trauma was assessed. We reported in a previous paper that four earlier traumas involving interpersonal violence significantly predicted PTSD after subsequent random traumas (odds ratio (OR)=1.3-2.5). We also assessed 14 lifetime DSM-IV mood, anxiety, disruptive behavior and substance disorders before random traumas. We show in the current report that only prior anxiety disorders significantly predicted PTSD in a multivariate model (OR=1.5-4.3) and that these disorders interacted significantly with three of the earlier traumas (witnessing atrocities, physical violence victimization and rape). History of witnessing atrocities significantly predicted PTSD after subsequent random traumas only among respondents with prior PTSD (OR=5.6). Histories of physical violence victimization (OR=1.5) and rape after age 17 years (OR=17.6) significantly predicted only among respondents with no history of prior anxiety disorders. Although only preliminary due to reliance on retrospective reports, these results suggest that history of anxiety disorders and history of a limited number of earlier traumas might usefully be targeted in future prospective studies as distinct foci of research on individual differences in vulnerability to PTSD after subsequent traumas.Molecular Psychiatry advance online publication, 19 September 2017; doi:10.1038/mp.2017.194.
Cardiac Complications, Earlier Treatment, and Initial Disease Severity in Kawasaki Disease.
Abrams, Joseph Y; Belay, Ermias D; Uehara, Ritei; Maddox, Ryan A; Schonberger, Lawrence B; Nakamura, Yosikazu
2017-09-01
To assess if observed higher observed risks of cardiac complications for patients with Kawasaki disease (KD) treated earlier may reflect bias due to confounding from initial disease severity, as opposed to any negative effect of earlier treatment. We used data from Japanese nationwide KD surveys from 1997 to 2004. Receipt of additional intravenous immunoglobulin (IVIG) (data available all years) or any additional treatment (available for 2003-2004) were assessed as proxies for initial disease severity. We determined associations between earlier or later IVIG treatment (defined as receipt of IVIG on days 1-4 vs days 5-10 of illness) and cardiac complications by stratifying by receipt of additional treatment or by using logistic modeling to control for the effect of receiving additional treatment. A total of 48 310 patients with KD were included in the analysis. In unadjusted analysis, earlier IVIG treatment was associated with a higher risk for 4 categories of cardiac complications, including all major cardiac complications (risk ratio, 1.10; 95% CI, 1.06-1.15). Stratifying by receipt of additional treatment removed this association, and earlier IVIG treatment became protective against all major cardiac complications when controlling for any additional treatment in logistic regressions (OR, 0.90; 95% CI, 0.80-1.00). Observed higher risks of cardiac complications among patients with KD receiving IVIG treatment on days 1-4 of the illness are most likely due to underlying higher initial disease severity, and patients with KD should continue to be treated with IVIG as early as possible. Published by Elsevier Inc.
Interaction of curcumin with Zn(II) and Cu(II) ions based on experiment and theoretical calculation
NASA Astrophysics Data System (ADS)
Zhao, Xue-Zhou; Jiang, Teng; Wang, Long; Yang, Hao; Zhang, Sui; Zhou, Ping
2010-12-01
Curcumin and its complexes with Zn 2+ and Cu 2+ ions were synthesized and characterized by elemental analysis, mass spectroscopy, IR spectroscopy, UV spectroscopy, solution 1H and solid-state 13C NMR spectroscopy, EPR spectroscopy. In addition, the density functional theory (DFT)-based UV and 13C chemical shift calculations were also performed to view insight into those compound structures and properties. The results show that curcumin easily chelate the metal ions, such as Zn 2+ and Cu 2+, and the Cu(II)-curcumin complex has an ability to scavenge free-radicals. We demonstrated the differences between Zn(II)-curcumin and Cu(II)-curcumin complexes in structure and properties, enhancing the comprehensions about the curcumin roles in the Alzhermer's disease treatment.
NASA Astrophysics Data System (ADS)
Kumar, Ajay; Raghuwanshi, Sanjeev Kumar
2016-06-01
The optical switching activity is one of the most essential phenomena in the optical domain. The electro-optic effect-based switching phenomena are applicable to generate some effective combinational and sequential logic circuits. The processing of digital computational technique in the optical domain includes some considerable advantages of optical communication technology, e.g. immunity to electro-magnetic interferences, compact size, signal security, parallel computing and larger bandwidth. The paper describes some efficient technique to implement single bit magnitude comparator and 1's complement calculator using the concepts of electro-optic effect. The proposed techniques are simulated on the MATLAB software. However, the suitability of the techniques is verified using the highly reliable Opti-BPM software. It is interesting to analyze the circuits in order to specify some optimized device parameter in order to optimize some performance affecting parameters, e.g. crosstalk, extinction ratio, signal losses through the curved and straight waveguide sections.
NASA Astrophysics Data System (ADS)
Jiang, Teng; Wang, Long; Zhang, Sui; Sun, Ping-Chuan; Ding, Chuan-Fan; Chu, Yan-Qiu; Zhou, Ping
2011-10-01
Curcumin has been recognized as a potential natural drug to treat the Alzheimer's disease (AD) by chelating baleful metal ions, scavenging radicals and preventing the amyloid β (Aβ) peptides from the aggregation. In this paper, Al(III)-curcumin complexes with Al(III) were synthesized and characterized by liquid-state 1H, 13C and 27Al nuclear magnetic resonance (NMR), mass spectroscopy (MS), ultraviolet spectroscopy (UV) and generalized 2D UV-UV correlation spectroscopy. In addition, the density functional theory (DFT)-based UV and chemical shift calculations were also performed to view insight into the structures and properties of curcumin and its complexes. It was revealed that curcumin could interact strongly with Al(III) ion, and form three types of complexes under different molar ratios of [Al(III)]/[curcumin], which would restrain the interaction of Al(III) with the Aβ peptide, reducing the toxicity effect of Al(III) on the peptide.
NASA Astrophysics Data System (ADS)
Babür, Banu; Seferoğlu, Nurgül; Seferoğlu, Zeynel
2018-06-01
A novel coumarin based fluorescence anion chemosensor (P-1) bearing pyrazolone as a receptoric part was synthesized and characterized by using FT-IR, 1H/13C NMR and HRMS for the purpose of recognition of anions in DMSO. P-1 has four tautomeric structures and the most stable tautomeric form of P-1 was determined experimentally and theoretically. The chemosensor P-1 consists two receptoric parts as free amide Nsbnd H and enamine Nsbnd H which is stabilized intramolecular H-bonding with coumarin carbonyl oxygen. P-1 interacts selectively with fluoride anion via amide Nsbnd H. The selectivity and sensitivity of probe to various anions were determined with spectrophotometric and 1H NMR titration techniques as experimentally and all results were also explained by theoretical calculations.
Doran, Kara S.; Howd, Peter A.; Sallenger,, Asbury H.
2016-01-04
Recent studies, and most of their predecessors, use tide gage data to quantify SL acceleration, ASL(t). In the current study, three techniques were used to calculate acceleration from tide gage data, and of those examined, it was determined that the two techniques based on sliding a regression window through the time series are more robust compared to the technique that fits a single quadratic form to the entire time series, particularly if there is temporal variation in the magnitude of the acceleration. The single-fit quadratic regression method has been the most commonly used technique in determining acceleration in tide gage data. The inability of the single-fit method to account for time-varying acceleration may explain some of the inconsistent findings between investigators. Properly quantifying ASL(t) from field measurements is of particular importance in evaluating numerical models of past, present, and future SLR resulting from anticipated climate change.
Band alignment at the CdS/FeS2 interface based on the first-principles calculation
NASA Astrophysics Data System (ADS)
Ichimura, Masaya; Kawai, Shoichi
2015-03-01
FeS2 is potentially well-suited for the absorber layer of a thin-film solar cell. Since it usually has p-type conductivity, a pn heterojunction cell can be fabricated by combining it with an n-type material. In this work, the band alignment in the heterostructure based on FeS2 is investigated on the basis of the first-principles calculation. CdS, the most popular buffer-layer material for thin-film solar cells, is selected as the partner in the heterostructure. The results indicate that there is a large conduction band offset (0.65 eV) at the interface, which will hinder the flow of photogenerated electrons from FeS2 to CdS. Thus an n-type material with the conduction band minimum positioned lower than that of CdS will be preferable as the partner in the heterostructure.
Yasuda, H., E-mail: yasuda@nict.go.jp; Hosako, I.
2015-03-16
We investigate the performance of terahertz quantum cascade lasers (THz-QCLs) based on Al{sub x}Ga{sub 1−x}As/Al{sub y}Ga{sub 1−y}As and GaSb/AlGaSb material systems to realize higher-temperature operation. Calculations with the non-equilibrium Green's function method reveal that the AlGaAs-well-based THz-QCLs do not show improved performance, mainly because of alloy scattering in the ternary compound semiconductor. The GaSb-based THz-QCLs offer clear advantages over GaAs-based THz-QCLs. Weaker longitudinal optical phonon–electron interaction in GaSb produces higher peaks in the spectral functions of the lasing levels, which enables more electrons to be accumulated in the upper lasing level.
Kapanen, Mika K.; Hyödynmaa, Simo J.; Wigren, Tuija K.; Pitkänen, Maunu A.
2014-01-01
achieved, but 2%/2 mm threshold criteria showed larger discrepancies. The TPS algorithm comparison results showed large dose discrepancies in the PTV mean dose (D50%), nearly 60%, for the PBC algorithm, and differences of nearly 20% for the AAA, occurring also in the small PTV size range. This work suggests the application of independent plan verification, when the AAA or the AXB algorithm are utilized in lung SBRT having PTVs smaller than 20‐25 cc. The calculated data from this study can be used in converting the SBRT protocols based on type ‘a’ and/or type ‘b’ algorithms for the most recent generation type ‘c’ algorithms, such as the AXB algorithm. PACS numbers: 87.55.‐x, 87.55.D‐, 87.55.K‐, 87.55.kd, 87.55.Qr PMID:24710454
Hernandez-Solis, A.; Demaziere, C.; Ekberg, C.
2012-07-01
In this paper, multi-group microscopic cross-section uncertainty is propagated through the DRAGON (Version 4) lattice code, in order to perform uncertainty analysis on k{infinity} and 2-group homogenized macroscopic cross-sections predictions. A statistical methodology is employed for such purposes, where cross-sections of certain isotopes of various elements belonging to the 172 groups DRAGLIB library format, are considered as normal random variables. This library is based on JENDL-4 data, because JENDL-4 contains the largest amount of isotopic covariance matrixes among the different major nuclear data libraries. The aim is to propagate multi-group nuclide uncertainty by running the DRAGONv4 code 500 times, andmore » to assess the output uncertainty of a test case corresponding to a 17 x 17 PWR fuel assembly segment without poison. The chosen sampling strategy for the current study is Latin Hypercube Sampling (LHS). The quasi-random LHS allows a much better coverage of the input uncertainties than simple random sampling (SRS) because it densely stratifies across the range of each input probability distribution. Output uncertainty assessment is based on the tolerance limits concept, where the sample formed by the code calculations infers to cover 95% of the output population with at least a 95% of confidence. This analysis is the first attempt to propagate parameter uncertainties of modern multi-group libraries, which are used to feed advanced lattice codes that perform state of the art resonant self-shielding calculations such as DRAGONv4. (authors)« less
Gómez-Campos, Rossana; Andruske, Cynthia Lee; de Arruda, Miguel; Urra Albornoz, Camilo; Cossio-Bolaños, Marco
2017-01-01
Background The Dual Energy X-Ray Absorptiometry (DXA) is the gold standard for measuring BMD and bone mineral content (BMC). In general, DXA is ideal for pediatric use. However, the development of specific standards for particular geographic regions limits its use and application for certain socio-cultural contexts. Additionally, the anthropometry may be a low cost and easy to use alternative method in epidemiological contexts. The goal of our study was to develop regression equations for predicting bone health of children and adolescents based on anthropometric indicators to propose reference values based on age and sex. Methods 3020 students (1567 males and 1453 females) ranging in ages 4.0 to 18.9 were studied from the Maule Region (Chile). Anthropometric variables evaluated included: weight, standing height, sitting height, forearm length, and femur diameter. A total body scan (without the head) was conducted by means of the Dual Energy X-Ray Absorptiometry. Bone mineral density (BMD) and the bone mineral content (BMC) were also determined. Calcium consumption was controlled for by recording the intake of the three last days prior to the evaluation. Body Mass Index (BMI) was calculated, and somatic maturation was determined by using the years of peak growth rate (APHV). Results Four regression models were generated to calculate bone health: for males BMD = (R2 = 0.79) and BMC = (R2 = 0.84) and for the females BMD = (R2 = 0.76) and BMC = (R2 = 0.83). Percentiles were developed by using the LMS method (p3, p5, p15, p25, p50, p75, p85, p95 and p97). Conclusions Regression equations and reference curves were developed to assess the bone health of Chilean children and adolescents. These instruments help identify children with potential underlying problems in bone mineralization during the growth stage and biological maturation. PMID:28759569
Gómez-Campos, Rossana; Andruske, Cynthia Lee; Arruda, Miguel de; Urra Albornoz, Camilo; Cossio-Bolaños, Marco
2017-01-01
The Dual Energy X-Ray Absorptiometry (DXA) is the gold standard for measuring BMD and bone mineral content (BMC). In general, DXA is ideal for pediatric use. However, the development of specific standards for particular geographic regions limits its use and application for certain socio-cultural contexts. Additionally, the anthropometry may be a low cost and easy to use alternative method in epidemiological contexts. The goal of our study was to develop regression equations for predicting bone health of children and adolescents based on anthropometric indicators to propose reference values based on age and sex. 3020 students (1567 males and 1453 females) ranging in ages 4.0 to 18.9 were studied from the Maule Region (Chile). Anthropometric variables evaluated included: weight, standing height, sitting height, forearm length, and femur diameter. A total body scan (without the head) was conducted by means of the Dual Energy X-Ray Absorptiometry. Bone mineral density (BMD) and the bone mineral content (BMC) were also determined. Calcium consumption was controlled for by recording the intake of the three last days prior to the evaluation. Body Mass Index (BMI) was calculated, and somatic maturation was determined by using the years of peak growth rate (APHV). Four regression models were generated to calculate bone health: for males BMD = (R2 = 0.79) and BMC = (R2 = 0.84) and for the females BMD = (R2 = 0.76) and BMC = (R2 = 0.83). Percentiles were developed by using the LMS method (p3, p5, p15, p25, p50, p75, p85, p95 and p97). Regression equations and reference curves were developed to assess the bone health of Chilean children and adolescents. These instruments help identify children with potential underlying problems in bone mineralization during the growth stage and biological maturation.
Auch, Alexander F; Klenk, Hans-Peter; Göker, Markus
2010-01-28
DNA-DNA hybridization (DDH) is a widely applied wet-lab technique to obtain an estimate of the overall similarity between the genomes of two organisms. To base the species concept for prokaryotes ultimately on DDH was chosen by microbiologists as a pragmatic approach for deciding about the recognition of novel species, but also allowed a relatively high degree of standardization compared to other areas of taxonomy. However, DDH is tedious and error-prone and first and foremost cannot be used to incrementally establish a comparative database. Recent studies have shown that in-silico methods for the comparison of genome sequences can be used to replace DDH. Considering the ongoing rapid technological progress of sequencing methods, genome-based prokaryote taxonomy is coming into reach. However, calculating distances between genomes is dependent on multiple choices for software and program settings. We here provide an overview over the modifications that can be applied to distance methods based in high-scoring segment pairs (HSPs) or maximally unique matches (MUMs) and that need to be documented. General recommendations on determining HSPs using BLAST or other algorithms are also provided. As a reference implementation, we introduce the GGDC web server (http://ggdc.gbdp.org).
NASA Astrophysics Data System (ADS)
Drablia, S.; Boukhris, N.; Boulechfar, R.; Meradji, H.; Ghemid, S.; Ahmed, R.; Omran, S. Bin; El Haj Hassan, F.; Khenata, R.
2017-10-01
The alkaline earth metal chalcogenides are being intensively investigated because of their advanced technological applications, for example in photoluminescent devices. In this study, the structural, electronic, thermodynamic and thermal properties of the BaSe1-x Te x alloys at alloying composition x = 0, 0.25, 0.50, 0.75 and 1 are investigated. The full potential linearized augmented plane wave plus local orbital method designed within the density functional theory was used to perform the total energy calculations. In this research work the effect of the composition on the results of the parameters and bulk modulus as well as on the band gap energy is analyzed. From our results, we found a deviation of the obtained results for the lattice constants from Vegard’s law as well as a deviation of the value of the bulk modulus from the linear concentration dependence. We also carried out a microscopic analysis of the origin of the band gap energy bowing parameter. Furthermore, the thermodynamic stability of the considered alloys was explored through the measurement of the miscibility critical temperature. The quasi-harmonic Debye model, as implemented in the Gibbs code, was used to predict the thermal properties of the BaSe1-x Te x alloys, and these investigations comprise our first theoretical predictions concerning the BaSe1-x Te x alloys.
Park, Justin C; Li, Jonathan G; Arhjoul, Lahcen; Yan, Guanghua; Lu, Bo; Fan, Qiyong; Liu, Chihray
2015-04-01
The use of sophisticated dose calculation procedure in modern radiation therapy treatment planning is inevitable in order to account for complex treatment fields created by multileaf collimators (MLCs). As a consequence, independent volumetric dose verification is time consuming, which affects the efficiency of clinical workflow. In this study, the authors present an efficient adaptive beamlet-based finite-size pencil beam (AB-FSPB) dose calculation algorithm that minimizes the computational procedure while preserving the accuracy. The computational time of finite-size pencil beam (FSPB) algorithm is proportional to the number of infinitesimal and identical beamlets that constitute an arbitrary field shape. In AB-FSPB, dose distribution from each beamlet is mathematically modeled such that the sizes of beamlets to represent an arbitrary field shape no longer need to be infinitesimal nor identical. As a result, it is possible to represent an arbitrary field shape with combinations of different sized and minimal number of beamlets. In addition, the authors included the model parameters to consider MLC for its rounded edge and transmission. Root mean square error (RMSE) between treatment planning system and conventional FSPB on a 10 × 10 cm(2) square field using 10 × 10, 2.5 × 2.5, and 0.5 × 0.5 cm(2) beamlet sizes were 4.90%, 3.19%, and 2.87%, respectively, compared with RMSE of 1.10%, 1.11%, and 1.14% for AB-FSPB. This finding holds true for a larger square field size of 25 × 25 cm(2), where RMSE for 25 × 25, 2.5 × 2.5, and 0.5 × 0.5 cm(2) beamlet sizes were 5.41%, 4.76%, and 3.54% in FSPB, respectively, compared with RMSE of 0.86%, 0.83%, and 0.88% for AB-FSPB. It was found that AB-FSPB could successfully account for the MLC transmissions without major discrepancy. The algorithm was also graphical processing unit (GPU) compatible to maximize its computational speed. For an intensity modulated radiation therapy (∼12 segments) and a volumetric modulated arc
Na, Y; Kapp, D; Kim, Y
2014-06-01
Purpose: To report the first experience on the development of a cloud-based treatment planning system and investigate the performance improvement of dose calculation and treatment plan optimization of the cloud computing platform. Methods: A cloud computing-based radiation treatment planning system (cc-TPS) was developed for clinical treatment planning. Three de-identified clinical head and neck, lung, and prostate cases were used to evaluate the cloud computing platform. The de-identified clinical data were encrypted with 256-bit Advanced Encryption Standard (AES) algorithm. VMAT and IMRT plans were generated for the three de-identified clinical cases to determine the quality of the treatment plans and computationalmore » efficiency. All plans generated from the cc-TPS were compared to those obtained with the PC-based TPS (pc-TPS). The performance evaluation of the cc-TPS was quantified as the speedup factors for Monte Carlo (MC) dose calculations and large-scale plan optimizations, as well as the performance ratios (PRs) of the amount of performance improvement compared to the pc-TPS. Results: Speedup factors were improved up to 14.0-fold dependent on the clinical cases and plan types. The computation times for VMAT and IMRT plans with the cc-TPS were reduced by 91.1% and 89.4%, respectively, on average of the clinical cases compared to those with pc-TPS. The PRs were mostly better for VMAT plans (1.0 ≤ PRs ≤ 10.6 for the head and neck case, 1.2 ≤ PRs ≤ 13.3 for lung case, and 1.0 ≤ PRs ≤ 10.3 for prostate cancer cases) than for IMRT plans. The isodose curves of plans on both cc-TPS and pc-TPS were identical for each of the clinical cases. Conclusion: A cloud-based treatment planning has been setup and our results demonstrate the computation efficiency of treatment planning with the cc-TPS can be dramatically improved while maintaining the same plan quality to that obtained with the pc-TPS. This work was supported in part by the National Cancer
Earlier reperfusion in patients with ST-elevation Myocardial infarction by use of helicopter
2012-01-01
Background In patients with ST-elevation myocardial infarction (STEMI) reperfusion therapy should be initiated as soon as possible. This study evaluated whether use of a helicopter for transportation of patients is associated with earlier initiation of reperfusion therapy. Material and methods A prospective study was conducted, including patients with STEMI and symptom duration less than 12 hours, who had primary percutaneous coronary intervention (PPCI) performed at Aarhus University Hospital in Skejby. Patients with a health care system delay (time from emergency call to first coronary intervention) of more than 360 minutes were excluded. The study period ran from 1.1.2011 until 31.12.2011. A Western Denmark Helicopter Emergency Medical Service (HEMS) project was initiated 1.6.2011 for transportation of patients with time-critical illnesses, including STEMI. Results The study population comprised 398 patients, of whom 376 were transported by ambulance Emergency Medical Service (EMS) and 22 by HEMS. Field-triage directly to the PCI-center was used in 338 of patients. The median system delay was 94 minutes among those field-triaged, and 168 minutes among those initially admitted to a local hospital. Patients transported by EMS and field-triaged were stratified into four groups according to transport distance from the scene of event to the PCI-center: ≤25 km., 26–50 km., 51–75 km. and > 75 km. For these groups, the median system delay was 78, 89, 99, and 141 minutes. Among patients transported by HEMS and field-triaged the estimated median transport distance by ground transportation was 115 km, and the observed system delay was 107 minutes. Based on second order polynomial regression, it was estimated that patients with a transport distance of >60 km to the PCI-center may benefit from helicopter transportation, and that transportation by helicopter is associated with a system delay of less than 120 minutes even at a transport distance up to 150 km
Earlier reperfusion in patients with ST-elevation myocardial infarction by use of helicopter.
Knudsen, Lars; Stengaard, Carsten; Hansen, Troels Martin; Lassen, Jens Flensted; Terkelsen, Christian Juhl
2012-10-04
In patients with ST-elevation myocardial infarction (STEMI) reperfusion therapy should be initiated as soon as possible. This study evaluated whether use of a helicopter for transportation of patients is associated with earlier initiation of reperfusion therapy. A prospective study was conducted, including patients with STEMI and symptom duration less than 12 hours, who had primary percutaneous coronary intervention (PPCI) performed at Aarhus University Hospital in Skejby. Patients with a health care system delay (time from emergency call to first coronary intervention) of more than 360 minutes were excluded. The study period ran from 1.1.2011 until 31.12.2011. A Western Denmark Helicopter Emergency Medical Service (HEMS) project was initiated 1.6.2011 for transportation of patients with time-critical illnesses, including STEMI. The study population comprised 398 patients, of whom 376 were transported by ambulance Emergency Medical Service (EMS) and 22 by HEMS. Field-triage directly to the PCI-center was used in 338 of patients. The median system delay was 94 minutes among those field-triaged, and 168 minutes among those initially admitted to a local hospital. Patients transported by EMS and field-triaged were stratified into four groups according to transport distance from the scene of event to the PCI-center: ≤25 km., 26-50 km., 51-75 km. and > 75 km. For these groups, the median system delay was 78, 89, 99, and 141 minutes. Among patients transported by HEMS and field-triaged the estimated median transport distance by ground transportation was 115 km, and the observed system delay was 107 minutes. Based on second order polynomial regression, it was estimated that patients with a transport distance of >60 km to the PCI-center may benefit from helicopter transportation, and that transportation by helicopter is associated with a system delay of less than 120 minutes even at a transport distance up to 150 km. The present study indicates that use of a
Zhang, Xueli; Gong, Xuedong
2014-08-04
Nitrogen-rich heterocyclic bases and oxygen-rich acids react to produce energetic salts with potential application in the field of composite explosives and propellants. In this study, 12 salts formed by the reaction of the bases 4-amino-1,2,4-trizole (A), 1-amino-1,2,4-trizole (B), and 5-aminotetrazole (C), upon reaction with the acids HNO3 (I), HN(NO2 )2 (II), HClO4 (III), and HC(NO2 )3 (IV), are studied using DFT calculations at the B97-D/6-311++G** level of theory. For the reactions with the same base, those of HClO4 are the most exothermic and spontaneous, and the most negative Δr Gm in the formation reaction also corresponds to the highest decomposition temperature of the resulting salt. The ability of anions and cations to form hydrogen bonds decreases in the order NO3 (-) >N(NO2 )2 (-) >ClO4 (-) >C(NO2 )3 (-) , and C(+) >B(+) >A(+) . In particular, those different cation abilities are mainly due to their different conformations and charge distributions. For the salts with the same anion, the larger total hydrogen-bond energy (EH,tot ) leads to a higher melting point. The order of cations and anions on charge transfer (q), second-order perturbation energy (E2 ), and binding energy (Eb ) are the same to that of EH,tot , so larger q leads to larger E2 , Eb , and EH,tot . All salts have similar frontier orbitals distributions, and their HOMO and LUMO are derived from the anion and the cation, respectively. The molecular orbital shapes are kept as the ions form a salt. To produce energetic salts, 5-aminotetrazole and HClO4 are the preferred base and acid, respectively. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Log file-based patient dose calculations of double-arc VMAT for head-and-neck radiotherapy.
Katsuta, Yoshiyuki; Kadoya, Noriyuki; Fujita, Yukio; Shimizu, Eiji; Majima, Kazuhiro; Matsushita, Haruo; Takeda, Ken; Jingu, Keiichi
2018-04-01
The log file-based method cannot display dosimetric changes due to linac component miscalibration because of the insensitivity of log files to linac component miscalibration. The purpose of this study was to supply dosimetric changes in log file-based patient dose calculations for double-arc volumetric-modulated arc therapy (VMAT) in head-and-neck cases. Fifteen head-and-neck cases participated in this study. For each case, treatment planning system (TPS) doses were produced by double-arc and single-arc VMAT. Miscalibration-simulated log files were generated by inducing a leaf miscalibration of ±0.5 mm into the log files that were acquired during VMAT irradiation. Subsequently, patient doses were estimated using the miscalibration-simulated log files. For double-arc VMAT, regarding planning target volume (PTV), the change from TPS dose to miscalibration-simulated log file dose in D mean was 0.9 Gy and that for tumor control probability was 1.4%. As for organ-at-risks (OARs), the change in D mean was <0.7 Gy and normal tissue complication probability was <1.8%. A comparison between double-arc and single-arc VMAT for PTV showed statistically significant differences in the changes evaluated by D mean and radiobiological metrics (P < 0.01), even though the magnitude of these differences was small. Similarly, for OARs, the magnitude of these changes was found to be small. Using the log file-based method for PTV and OARs, the log file-based method estimate of patient dose using the double-arc VMAT has accuracy comparable to that obtained using the single-arc VMAT. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Kirschstein, Timo; Wolters, Alexander; Lenz, Jan-Hendrik; Fröhlich, Susanne; Hakenberg, Oliver; Kundt, Günther; Darmüntzel, Martin; Hecker, Michael; Altiner, Attila; Müller-Hilke, Brigitte
2016-01-01
The amendment of the Medical Licensing Act (ÄAppO) in Germany in 2002 led to the introduction of graded assessments in the clinical part of medical studies. This, in turn, lent new weight to the importance of written tests, even though the minimum requirements for exam quality are sometimes difficult to reach. Introducing exam quality as a criterion for the award of performance-based allocation of funds is expected to steer the attention of faculty members towards more quality and perpetuate higher standards. However, at present there is a lack of suitable algorithms for calculating exam quality. In the spring of 2014, the students' dean commissioned the "core group" for curricular improvement at the University Medical Center in Rostock to revise the criteria for the allocation of performance-based funds for teaching. In a first approach, we developed an algorithm that was based on the results of the most common type of exam in medical education, multiple choice tests. It included item difficulty and discrimination, reliability as well as the distribution of grades achieved. This algorithm quantitatively describes exam quality of multiple choice exams. However, it can also be applied to exams involving short assay questions and the OSCE. It thus allows for the quantitation of exam quality in the various subjects and - in analogy to impact factors and third party grants - a ranking among faculty. Our algorithm can be applied to all test formats in which item difficulty, the discriminatory power of the individual items, reliability of the exam and the distribution of grades are measured. Even though the content validity of an exam is not considered here, we believe that our algorithm is suitable as a general basis for performance-based allocation of funds.
Ensor, Joie; Burke, Danielle L; Snell, Kym I E; Hemming, Karla; Riley, Richard D
2018-05-18
Researchers and funders should consider the statistical power of planned Individual Participant Data (IPD) meta-analysis projects, as they are often time-consuming and costly. We propose simulation-based power calculations utilising a two-stage framework, and illustrate the approach for a planned IPD meta-analysis of randomised trials with continuous outcomes where the aim is to identify treatment-covariate interactions. The simulation approach has four steps: (i) specify an underlying (data generating) statistical model for trials in the IPD meta-analysis; (ii) use readily available information (e.g. from publications) and prior knowledge (e.g. number of studies promising IPD) to specify model parameter values (e.g. control group mean, intervention effect, treatment-covariate interaction); (iii) simulate an IPD meta-analysis dataset of a particular size from the model, and apply a two-stage IPD meta-analysis to obtain the summary estimate of interest (e.g. interaction effect) and its associated p-value; (iv) repeat the previous step (e.g. thousands of times), then estimate the power to detect a genuine effect by the proportion of summary estimates with a significant p-value. In a planned IPD meta-analysis of lifestyle interventions to reduce weight gain in pregnancy, 14 trials (1183 patients) promised their IPD to examine a treatment-BMI interaction (i.e. whether baseline BMI modifies intervention effect on weight gain). Using our simulation-based approach, a two-stage IPD meta-analysis has < 60% power to detect a reduction of 1 kg weight gain for a 10-unit increase in BMI. Additional IPD from ten other published trials (containing 1761 patients) would improve power to over 80%, but only if a fixed-effect meta-analysis was appropriate. Pre-specified adjustment for prognostic factors would increase power further. Incorrect dichotomisation of BMI would reduce power by over 20%, similar to immediately throwing away IPD from ten trials. Simulation-based power
NASA Astrophysics Data System (ADS)
Wang, Weizong; Berthelot, Antonin; Zhang, Quanzhi; Bogaerts, Annemie
2018-05-01
One of the main issues in plasma chemistry modeling is that the cross sections and rate coefficients are subject to uncertainties, which yields uncertainties in the modeling results and hence hinders the predictive capabilities. In this paper, we reveal the impact of these uncertainties on the model predictions of plasma-based dry reforming in a dielectric barrier discharge. For this purpose, we performed a detailed uncertainty analysis and sensitivity study. 2000 different combinations of rate coefficients, based on the uncertainty from a log-normal distribution, are used to predict the uncertainties in the model output. The uncertainties in the electron density and electron temperature are around 11% and 8% at the maximum of the power deposition for a 70% confidence level. Still, this can have a major effect on the electron impact rates and hence on the calculated conversions of CO2 and CH4, as well as on the selectivities of CO and H2. For the CO2 and CH4 conversion, we obtain uncertainties of 24% and 33%, respectively. For the CO and H2 selectivity, the corresponding uncertainties are 28% and 14%, respectively. We also identify which reactions contribute most to the uncertainty in the model predictions. In order to improve the accuracy and reliability of plasma chemistry models, we recommend using only verified rate coefficients, and we point out the need for dedicated verification experiments.
Flacco, A.; Fairchild, M.; Reiche, S.
2004-12-07
The coherent radiation emitted by electrons in high brightness beam-based experiments is important from the viewpoints of both radiation source development, and the understanding and diagnosing the basic physical processes important in beam manipulations at high intensity. While much theoretical work has been developed to aid in calculating aspects of this class of radiation, these methods do not often produce accurate information concerning the experimentally relevant aspects of the radiation. At UCLA, we are particularly interested in coherent synchrotron radiation and the related phenomena of coherent edge radiation, in the context of a fs-beam chicane compression experiment at the BNLmore » ATF. To analyze this and related problems, we have developed a program that acts as an extension to the Lienard-Wiechert-based 3D simulation code TREDI, termed FieldEye. This program allows the evaluation of electromagnetic fields in the time and frequency domain in an arbitrary 2D detector planar area. We discuss here the implementation of the FieldEye code, and give examples of results relevant to the case of the ATF chicane compressor experiment.« less
Wilkinson, P L
1979-06-01
Assessing and modifying oxygen transport are major parts of ICU patient management. Determination of base excess, blood oxygen saturation and content, dead space ventilation, and P50 helps in this management. A program is described for determining these variables using a T1 59 programmable calculator and PC 100A printer. Each variable can be independently calculated without running the whole program. The calculator-printer's small size, low cost, and hard copy printout make it a valuable and versatile tool for calculating physiological variables. The program is easily entered by an stored on magnetic card, and prompts the user to enter the appropriate variables, making is easy to run by untrained personnel.
NASA Astrophysics Data System (ADS)
Araszkiewicz, Andrzej; Jarosiński, Marek
2013-04-01
In this research we aimed to check if the GPS observations can be used for calculation of a reliable deformation pattern of the intracontinental lithosphere in seismically inactive areas, such as territory of Poland. For this purpose we have used data mainly from the ASG-EUPOS permanent network and the solutions developed by the MUT CAG team (Military University of Technology: Centre of Applied Geomatics). From the 128 analyzed stations almost 100 are mounted on buildings. Daily observations were processed in the Bernese 5.0 software and next the weekly solutions were used to determine the station velocities expressed in ETRF2000. The strain rates were determined for almost 200 triangles with GPS stations in their corners plotted used Delaunay triangulation. The obtained scattered directions of deformations and highly changeable values of strain rates point to insufficient antennas' stabilization as for geodynamical studies. In order to depict badly stabilized stations we carried out a benchmark test to show what might be the effect of one station drift on deformations in contacting triangles. Based on the benchmark results, from our network we have eliminated the stations which showed deformation pattern characteristic for instable station. After several rounds of strain rate calculations and eliminations of dubious points we have reduced the number of stations down to 60. The refined network revealed more consistent deformation pattern across Poland. Deformations compared with the recent stress field of the study area disclosed good correlation in some places and significant discrepancies in the others, which will be the subject of future research.
Banerjee, Amartya S; Lin, Lin; Suryanarayana, Phanish; Yang, Chao; Pask, John E
2018-06-12
We describe a novel iterative strategy for Kohn-Sham density functional theory calculations aimed at large systems (>1,000 electrons), applicable to metals and insulators alike. In lieu of explicit diagonalization of the Kohn-Sham Hamiltonian on every self-consistent field (SCF) iteration, we employ a two-level Chebyshev polynomial filter based complementary subspace strategy to (1) compute a set of vectors that span the occupied subspace of the Hamiltonian; (2) reduce subspace diagonalization to just partially occupied states; and (3) obtain those states in an efficient, scalable manner via an inner Chebyshev filter iteration. By reducing the necessary computation to just partially occupied states and obtaining these through an inner Chebyshev iteration, our approach reduces the cost of large metallic calculations significantly, while eliminating subspace diagonalization for insulating systems altogether. We describe the implementation of the method within the framework of the discontinuous Galerkin (DG) electronic structure method and show that this results in a computational scheme that can effectively tackle bulk and nano systems containing tens of thousands of electrons, with chemical accuracy, within a few minutes or less of wall clock time per SCF iteration on large-scale computing platforms. We anticipate that our method will be instrumental in pushing the envelope of large-scale ab initio molecular dynamics. As a demonstration of this, we simulate a bulk silicon system containing 8,000 atoms at finite temperature, and obtain an average SCF step wall time of 51 s on 34,560 processors; thus allowing us to carry out 1.0 ps of ab initio molecular dynamics in approximately 28 h (of wall time).
Hilton, C; Fisher, W; Lopez, A; Sanders, C
1997-09-01
To design and test a simple, easily modifiable system for calculating faculty productivity in teaching, research, administration, and patient care in which all areas of endeavor would be recognized and high productivity in one area would produce results similar to high productivity in another at the Louisiana State University School of Medicine in New Orleans. A relative-value and time-based system was designed in 1996 so that similar efforts in the four areas would produce similar scores, and a profile reflecting the authors' estimates of high productivity ("super faculty") was developed for each area. The activity profiles of 17 faculty members were used to test the system. "Super-faculty" scores in all areas were similar. The faculty members' mean scores were higher for teaching and research than for administration and patient care, and all four mean scores were substantially lower than the respective totals for the "super faculty". In each category the scores of those faculty members who scored above the mean in that category were used to calculate new mean scores. The mean scores for these faculty members were similar to those for the "super faculty" in teaching and research but were substantially lower for administration and patient care. When the mean total score of the eight faculty members predicted to have total scores below the group mean was compared with the mean total score of the nine faculty members predicted to have total scores above the group mean, the difference was significant (p < .0001). For the former, every score in each category was below the mean, with the exception of one faculty member's score in one category. Of the latter, eight had higher scores in teaching and four had higher scores in teaching and research combined. This system provides a quantitative method for the equal recognition of faculty productivity in a number of areas, and it may be useful as a starting point for other academic units exploring similar issues.
A Hepatocellular Carcinoma Case in a Patient Who had Immunity to Hepatitis B Virus Earlier.
Ates, Ihsan; Kaplan, Mustafa; Demirci, Selim; Altiparmak, Emin
2016-01-01
Hepatocellular carcinoma (HCC) is the most common malignant tumor of the liver. Hepatitis B virus infection is one of the most important etilogical factors of HCC. In this case report, a patient with HCC previously infected and having ongoing immunity against hepatitis B virus will be discussed. Ates I, Kaplan M, Demirci S, Altiparmak E. A Hepatocellular Carcinoma Case in a Patient Who had Immunity to Hepatitis B Virus Earlier. Euroasian J Hepato-Gastroenterol 2016;6(1):82-83.
Paulsen, Jane S.; Nance, Martha; Kim, Ji-In; Carlozzi, Noelle E.; Panegyres, Peter K.; Erwin, Cheryl; Goh, Anita; McCusker, Elizabeth; Williams, Janet K.
2013-01-01
The past decade has witnessed an explosion of evidence suggesting that many neurodegenerative diseases can be detected years, if not decades, earlier than previously thought. To date, these scientific advances have not provoked any parallel translational or clinical improvements. There is an urgency to capitalize on this momentum so earlier detection of disease can be more readily translated into improved health-related quality of life for families at risk for, or suffering with, neurodegenerative diseases. In this review, we discuss health-related quality of life (HRQOL) measurement in neurodegenerative diseases and the importance of these “patient reported outcomes” for all clinical research. Next, we address HRQOL following early identification or predictive genetic testing in some neurodegenerative diseases: Huntington disease, Alzheimer's disease, Parkinson's disease, Dementia with Lewy bodies, frontotemporal dementia, amyotrophic lateral sclerosis, prion diseases, hereditary ataxias, Dentatorubral-pallidoluysian atrophy and Wilson's disease. After a brief report of available direct-to-consumer genetic tests, we address the juxtaposition of earlier disease identification with assumed reluctance towards predictive genetic testing. Forty-one studies examining health related outcomes following predictive genetic testing for neurodegenerative disease suggested that (a) extreme or catastrophic outcomes are rare; (b) consequences commonly include transiently increased anxiety and/or depression; (c) most participants report no regret; (d) many persons report extensive benefits to receiving genetic information; and (e) stigmatization and discrimination for genetic diseases are poorly understood and policy and laws are needed. Caution is appropriate for earlier identification of neurodegenerative diseases but findings suggest further progress is safe, feasible and likely to advance clinical care. PMID:24036231
Diagnosis of varicoceles in men undergoing vasectomy may lead to earlier detection of hypogonadism.
Liu, Joceline S; Jones, Madeline; Casey, Jessica T; Fuchs, Amanda B; Cashy, John; Lin, William W
2014-06-01
To determine the temporal relationship between vasectomy, varicocele, and hypogonadism diagnosis. Many young men undergo their first thorough genitourinary examination in their adult lives at the time of vasectomy consultation, providing a unique opportunity for diagnosis of asymptomatic varicoceles. Varicoceles have recently been implicated as a possible reversible contributor to hypogonadism. Hypogonadism may be associated with significant adverse effect, including decreased libido, impaired cognitive function, and increased cardiovascular events. Early diagnosis and treatment of hypogonadism may prevent these adverse sequelae. Data were collected from the Truven Health Analytics MarketScan database, a large outpatient claims database. We reviewed records between 2003 and 2010 for male patients between the ages of 25 and 50 years with International Classification of Diseases, Ninth Revision codes for hypogonadism, vasectomy, and varicocele, and queried dates of first claim. A total of 15,679 men undergoing vasectomies were matched with 156,790 men with nonvasectomy claims in the same year. Vasectomy patients were diagnosed with varicocele at an earlier age (40.9 vs 42.5 years; P=.009). We identified 224,817 men between the ages of 25 and 50 years with a claim of hypogonadism, of which 5883 (2.6%) also had a claim of varicocele. Men with hypogonadism alone were older at presentation compared with men with an accompanying varicocele (41.3 [standard deviation±6.5] vs 34.9 [standard deviation±6.1]; P<.001). Men undergoing vasectomies are diagnosed with varicoceles at a younger age than age-matched controls. Men with varicoceles present with hypogonadism earlier than men without varicoceles. Earlier diagnosis of varicocele at the time of vasectomy allows for earlier detection of hypogonadism. Copyright © 2014 Elsevier Inc. All rights reserved.
Floodplains within reservoirs promote earlier spawning of white crappies Pomoxis annularis
Miranda, Leandro E.; Dagel, Jonah D.; Kaczka, Levi J.; Mower, Ethan; Wigen, S. L.
2015-01-01
Reservoirs impounded over floodplain rivers are unique because they may include within their upper reaches extensive shallow water stored over preexistent floodplains. Because of their relatively flat topography and riverine origin, floodplains in the upper reaches of reservoirs provide broad expanses of vegetation within a narrow range of reservoir water levels. Elsewhere in the reservoir, topography creates a band of shallow water along the contour of the reservoir where vegetation often does not grow. Thus, as water levels rise, floodplains may be the first vegetated habitats inundated within the reservoir. We hypothesized that shallow water in reservoir floodplains would attract spawning white crappies Pomoxis annularis earlier than reservoir embayments. Crappie relative abundance over five years in floodplains and embayments of four reservoirs increased as spawning season approached, peaked, and decreased as fish exited shallow water. Relative abundance peaked earlier in floodplains than embayments, and the difference was magnified with higher water levels. Early access to suitable spawning habitat promotes earlier spawning and may increase population fitness. Recognition of the importance of reservoir floodplains, an understanding of how reservoir water levels can be managed to provide timely connectivity to floodplains, and conservation of reservoir floodplains may be focal points of environmental management in reservoirs.
NASA Astrophysics Data System (ADS)
Parsaee, Zohreh; Mohammadi, Khosro
2017-06-01
Some new macrocyclic bridged dianilines tetradentate with N4coordination sphere Schiff base ligands and their nickel(II)complexes with general formula [{Ni2LCl4} where L = (C20H14N2X)2, X = SO2, O, CH2] have been synthesized. The compounds have been characterized by FT-IR, 1H and 13C NMR, mass spectroscopy, TGA, elemental analysis, molar conductivity and magnetic moment techniques. Scanning electron microscopy (SEM) shows nano-sized structures under 100 nm for nickel (II) complexes. NiO nanoparticle was achieved via the thermal decomposition method and analyzed by FT-IR, SEM and X-ray powder diffraction which indicates closeaccordance to standard pattern of NiO nanoparticle. All the Schiff bases and their complexes have been detected in vitro both for antibacterial activity against two gram-negative and two gram-positive bacteria. The nickel(II) complexes were found to be more active than the free macrocycle Schiff bases. In addition, computational studies of three ligands have been carried out at the DFT-B3LYP/6-31G+(d,p) level of theory on the spectroscopic properties, including IR, 1HNMR and 13CNMR spectroscopy. The correlation between the theoretical and the experimental vibrational frequencies, 1H NMR and 13C NMR of the ligands were 0.999, 0.930-0.973 and 0.917-0.995, respectively. Also, the energy gap was determined and by using HOMO and LUMO energy values, chemical hardness-softness, electronegativity and electrophilic index were calculated.
NASA Astrophysics Data System (ADS)
Meier, Patrick; Oschetzki, Dominik; Pfeiffer, Florian; Rauhut, Guntram
2015-12-01
Resonating vibrational states cannot consistently be described by single-reference vibrational self-consistent field methods but request the use of multiconfigurational approaches. Strategies are presented to accelerate vibrational multiconfiguration self-consistent field theory and subsequent multireference configuration interaction calculations in order to allow for routine calculations at this enhanced level of theory. State-averaged vibrational complete active space self-consistent field calculations using mode-specific and state-tailored active spaces were found to be very fast and superior to state-specific calculations or calculations with a uniform active space. Benchmark calculations are presented for trans-diazene and bromoform, which show strong resonances in their vibrational spectra.
Meier, Patrick; Oschetzki, Dominik; Pfeiffer, Florian
Resonating vibrational states cannot consistently be described by single-reference vibrational self-consistent field methods but request the use of multiconfigurational approaches. Strategies are presented to accelerate vibrational multiconfiguration self-consistent field theory and subsequent multireference configuration interaction calculations in order to allow for routine calculations at this enhanced level of theory. State-averaged vibrational complete active space self-consistent field calculations using mode-specific and state-tailored active spaces were found to be very fast and superior to state-specific calculations or calculations with a uniform active space. Benchmark calculations are presented for trans-diazene and bromoform, which show strong resonances in their vibrational spectra.
NASA Astrophysics Data System (ADS)
Nomura, Kazuya; Hoshino, Ryota; Hoshiba, Yasuhiro; Danilov, Victor I.; Kurita, Noriyuki
2013-04-01
We investigated transition states (TS) between wobble Guanine-Thymine (wG-T) and tautomeric G-T base-pair as well as Br-containing base-pairs by MP2 and density functional theory (DFT) calculations. The obtained TS between wG-T and G*-T (asterisk is an enol-form of base) is different from TS got by the previous DFT calculation. The activation energy (17.9 kcal/mol) evaluated by our calculation is significantly smaller than that (39.21 kcal/mol) obtained by the previous calculation, indicating that our TS is more preferable. In contrast, the obtained TS and activation energy between wG-T and G-T* are similar to those obtained by the previous DFT calculation. We furthermore found that the activation energy between wG-BrU and tautomeric G-BrU is smaller than that between wG-T and tautomeric G-T. This result elucidates that the replacement of CH3 group of T by Br increases the probability of the transition reaction producing the enol-form G* and T* bases. Because G* prefers to bind to T rather than to C, and T* to G not A, our calculated results reveal that the spontaneous mutation from C to T or from A to G base is accelerated by the introduction of wG-BrU base-pair.
National Institute of Standards and Technology Data Gateway
SRD 166 MEMS Calculator (Web, free access) This MEMS Calculator determines the following thin film properties from data taken with an optical interferometer or comparable instrument: a) residual strain from fixed-fixed beams, b) strain gradient from cantilevers, c) step heights or thicknesses from step-height test structures, and d) in-plane lengths or deflections. Then, residual stress and stress gradient calculations can be made after an optical vibrometer or comparable instrument is used to obtain Young's modulus from resonating cantilevers or fixed-fixed beams. In addition, wafer bond strength is determined from micro-chevron test structures using a material test machine.
ERIC Educational Resources Information Center
Chenery, Gordon
1991-01-01
Uses chaos theory to investigate the nonlinear phenomenon of population growth fluctuation. Illustrates the use of computers and computer programs to make calculations in a nonlinear difference equation system. (MDH)
Ding, Y. H.; Hu, S. X.
Beryllium has been considered a superior ablator material for inertial confinement fusion (ICF) target designs. An accurate equation-of-state (EOS) of beryllium under extreme conditions is essential for reliable ICF designs. Based on density-functional theory (DFT) calculations, we have established a wide-range beryllium EOS table of density ρ = 0.001 to 500 g/cm 3 and temperature T = 2000 to 10 8 K. Our first-principle equation-of-state (FPEOS) table is in better agreement with the widely used SESAME EOS table (SESAME 2023) than the average-atom INFERNO and Purgatorio models. For the principal Hugoniot, our FPEOS prediction shows ~10% stiffer than the lastmore » two models in the maximum compression. Although the existing experimental data (only up to 17 Mbar) cannot distinguish these EOS models, we anticipate that high-pressure experiments at the maximum compression region should differentiate our FPEOS from INFERNO and Purgatorio models. Comparisons between FPEOS and SESAME EOS for off-Hugoniot conditions show that the differences in the pressure and internal energy are within ~20%. By implementing the FPEOS table into the 1-D radiation–hydrodynamic code LILAC, we studied in this paper the EOS effects on beryllium-shell–target implosions. Finally, the FPEOS simulation predicts higher neutron yield (~15%) compared to the simulation using the SESAME 2023 EOS table.« less
Calculation of the overlap factor for scanning LiDAR based on the tridimensional ray-tracing method.
Chen, Ruiqiang; Jiang, Yuesong; Wen, Luhong; Wen, Donghai
2017-06-01
The overlap factor is used to evaluate the LiDAR light collection ability. Ranging LiDAR is mainly determined by the optical configuration. However, scanning LiDAR, equipped with a scanning mechanism to acquire a 3D coordinate points cloud for a specified target, is essential in considering the scanning effect at the same time. Otherwise, scanning LiDAR will reduce the light collection ability and even cannot receive any echo. From this point of view, we propose a scanning LiDAR overlap factor calculation method based on the tridimensional ray-tracing method, which can be applied to scanning LiDAR with any special laser intensity distribution, any type of telescope (reflector, refractor, or mixed), and any shape obstruction (i.e., the reflector of a coaxial optical system). A case study for our LiDAR with a scanning mirror is carried out, and a MATLAB program is written to analyze the laser emission and reception process. Sensitivity analysis is carried out as a function of scanning mirror rotation speed and detector position, and the results guide how to optimize the overlap factor for our LiDAR. The results of this research will have a guiding significance in scanning LiDAR design and assembly.
NASA Astrophysics Data System (ADS)
Li, Sen; Zhong, Zhong
2014-02-01
An improved flux-gradient relationship between momentum φm(ζ) and sensible heat φh(ζ) is obtained by the use of the observational data over an alpine meadow in the eastern Tibet Plateau, in Maqu of China during the period June to August, 2010. The empirical coefficients of Businger—Dyer type function for the cases of unstable and stable stratification are modified. Non-dimensional vertical gradients of wind and potential temperature are calculated by three fitting functions; that is, the log—linear, log—square, and log—cubic functions, respectively. It is found that the von Karman constant approaches 0.4025 and the Prandtl number is about 1.10 based on the measurements in near-neutral conditions, which are within reasonable range proposed in previous studies. The revised flux-gradient profile functions of -1/5 power law for momentum and -1/3 power law for sensible heat are best fitted in unstable stratification conditions. Meanwhile, 2/5 power law, instead of linear functions, is more appropriate in stable stratification cases for momentum and sensible heat. Compared with results from previous studies in which traditional functions are used, the momentum and sensible heat fluxes estimated by the revised profile functions in the current study are much closer to the observations for the unstable and stable stratification conditions.
Balakrishnan, C; Subha, L; Neelakantan, M A; Mariappan, S S
2015-11-05
A propargyl arms containing Schiff base (L) was synthesized by the condensation of 1-[2-hydroxy-4-(prop-2-yn-1-yloxy)phenyl]ethanone with trans-1,2-diaminocyclohexane. The structure of L was characterized by IR, (1)H NMR, (13)C NMR and UV-Vis spectroscopy and by single crystal X-ray diffraction analysis. The UV-Visible spectral behavior of L in different solvents exhibits positive solvatochromism. Density functional calculation of the L in gas phase was performed by using DFT (B3LYP) method with 6-31G basis set. The computed vibrational frequencies and NMR signals of L were compared with the experimental data. Tautomeric stability study inferred that the enolimine is more stable than the ketoamine form. The charge delocalization has been analyzed using natural bond orbital (NBO) analysis. Electronic absorption and emission spectral studies were used to study the binding of L with CT-DNA. The molecular docking was done to identify the interaction of L with A-DNA and B-DNA. Copyright © 2015 Elsevier B.V. All rights reserved.
Ding, Y. H.; Hu, S. X.
2017-06-06
Beryllium has been considered a superior ablator material for inertial confinement fusion (ICF) target designs. An accurate equation-of-state (EOS) of beryllium under extreme conditions is essential for reliable ICF designs. Based on density-functional theory (DFT) calculations, we have established a wide-range beryllium EOS table of density ρ = 0.001 to 500 g/cm 3 and temperature T = 2000 to 10 8 K. Our first-principle equation-of-state (FPEOS) table is in better agreement with the widely used SESAME EOS table (SESAME 2023) than the average-atom INFERNO and Purgatorio models. For the principal Hugoniot, our FPEOS prediction shows ~10% stiffer than the lastmore » two models in the maximum compression. Although the existing experimental data (only up to 17 Mbar) cannot distinguish these EOS models, we anticipate that high-pressure experiments at the maximum compression region should differentiate our FPEOS from INFERNO and Purgatorio models. Comparisons between FPEOS and SESAME EOS for off-Hugoniot conditions show that the differences in the pressure and internal energy are within ~20%. By implementing the FPEOS table into the 1-D radiation–hydrodynamic code LILAC, we studied in this paper the EOS effects on beryllium-shell–target implosions. Finally, the FPEOS simulation predicts higher neutron yield (~15%) compared to the simulation using the SESAME 2023 EOS table.« less
NASA Astrophysics Data System (ADS)
Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.
2016-03-01
Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantages of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis applications. The proposed tool, denoted PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published manual results performed by an expert.
NASA Astrophysics Data System (ADS)
Partovi-Azar, P.; Panahian Jand, S.; Kaghazchi, P.
2018-01-01
Edge termination of graphene nanoribbons is a key factor in determination of their physical and chemical properties. Here, we focus on nitrogen-terminated zigzag graphene nanoribbons resembling polyacrylonitrile-based carbon nanofibers (CNFs) which are widely studied in energy research. In particular, we investigate magnetic, electronic, and transport properties of these CNFs as functions of their widths using density-functional theory calculations together with the nonequilibrium Green's function method. We report on metallic behavior of all the CNFs considered in this study and demonstrate that the narrow CNFs show finite magnetic moments. The spin-polarized electronic states in these fibers exhibit similar spin configurations on both edges and result in spin-dependent transport channels in the narrow CNFs. We show that the partially filled nitrogen dangling-bond bands are mainly responsible for the ferromagnetic spin ordering in the narrow samples. However, the magnetic moment becomes vanishingly small in the case of wide CNFs where the dangling-bond bands fall below the Fermi level and graphenelike transport properties arising from the π orbitals are recovered. The magnetic properties of the CNFs as well as their stability have also been discussed in the presence of water molecules and the hexagonal boron nitride substrate.
2011-01-01
We report a reparameterization of the glycosidic torsion χ of the Cornell et al. AMBER force field for RNA, χOL. The parameters remove destabilization of the anti region found in the ff99 force field and thus prevent formation of spurious ladder-like structural distortions in RNA simulations. They also improve the description of the syn region and the syn–anti balance as well as enhance MD simulations of various RNA structures. Although χOL can be combined with both ff99 and ff99bsc0, we recommend the latter. We do not recommend using χOL for B-DNA because it does not improve upon ff99bsc0 for canonical structures. However, it might be useful in simulations of DNA molecules containing syn nucleotides. Our parametrization is based on high-level QM calculations and differs from conventional parametrization approaches in that it incorporates some previously neglected solvation-related effects (which appear to be essential for obtaining correct anti/high-anti balance). Our χOL force field is compared with several previous glycosidic torsion parametrizations. PMID:21921995
New approach based on tetrahedral-mesh geometry for accurate 4D Monte Carlo patient-dose calculation
NASA Astrophysics Data System (ADS)
Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Kim, Seonghoon; Sohn, Jason W.
2015-02-01
In the present study, to achieve accurate 4D Monte Carlo dose calculation in radiation therapy, we devised a new approach that combines (1) modeling of the patient body using tetrahedral-mesh geometry based on the patient’s 4D CT data, (2) continuous movement/deformation of the tetrahedral patient model by interpolation of deformation vector fields acquired through deformable image registration, and (3) direct transportation of radiation particles during the movement and deformation of the tetrahedral patient model. The results of our feasibility study show that it is certainly possible to construct 4D patient models (= phantoms) with sufficient accuracy using the tetrahedral-mesh geometry and to directly transport radiation particles during continuous movement and deformation of the tetrahedral patient model. This new approach not only produces more accurate dose distribution in the patient but also replaces the current practice of using multiple 3D voxel phantoms and combining multiple dose distributions after Monte Carlo simulations. For routine clinical application of our new approach, the use of fast automatic segmentation algorithms is a must. In order to achieve, simultaneously, both dose accuracy and computation speed, the number of tetrahedrons for the lungs should be optimized. Although the current computation speed of our new 4D Monte Carlo simulation approach is slow (i.e. ~40 times slower than that of the conventional dose accumulation approach), this problem is resolvable by developing, in Geant4, a dedicated navigation class optimized for particle transportation in tetrahedral-mesh geometry.
Wang, Lichun; Cardenas, M Bayani
2015-08-01
The quantitative study of transport through fractured media has continued for many decades, but has often been constrained by observational and computational challenges. Here, we developed an efficient quasi-3D random walk particle tracking (RWPT) algorithm to simulate solute transport through natural fractures based on a 2D flow field generated from the modified local cubic law (MLCL). As a reference, we also modeled the actual breakthrough curves (BTCs) through direct simulations with the 3D advection-diffusion equation (ADE) and Navier-Stokes equations. The RWPT algorithm along with the MLCL accurately reproduced the actual BTCs calculated with the 3D ADE. The BTCs exhibited non-Fickian behavior, including early arrival and long tails. Using the spatial information of particle trajectories, we further analyzed the dynamic dispersion process through moment analysis. From this, asymptotic time scales were determined for solute dispersion to distinguish non-Fickian from Fickian regimes. This analysis illustrates the advantage and benefit of using an efficient combination of flow modeling and RWPT. Copyright © 2015 Elsevier B.V. All rights reserved.
Latham, Andrew J.; Patston, Lucy L. M.; Westermann, Christine; Kirk, Ian J.; Tippett, Lynette J.
2013-01-01
Increasing behavioural evidence suggests that expert video game players (VGPs) show enhanced visual attention and visuospatial abilities, but what underlies these enhancements remains unclear. We administered the Poffenberger paradigm with concurrent electroencephalogram (EEG) recording to assess occipital N1 latencies and interhemispheric transfer time (IHTT) in expert VGPs. Participants comprised 15 right-handed male expert VGPs and 16 non-VGP controls matched for age, handedness, IQ and years of education. Expert VGPs began playing before age 10, had a minimum 8 years experience, and maintained playtime of at least 20 hours per week over the last 6 months. Non-VGPs had little-to-no game play experience (maximum 1.5 years). Participants responded to checkerboard stimuli presented to the left and right visual fields while 128-channel EEG was recorded. Expert VGPs responded significantly more quickly than non-VGPs. Expert VGPs also had significantly earlier occipital N1s in direct visual pathways (the hemisphere contralateral to the visual field in which the stimulus was presented). IHTT was calculated by comparing the latencies of occipital N1 components between hemispheres. No significant between-group differences in electrophysiological estimates of IHTT were found. Shorter N1 latencies may enable expert VGPs to discriminate attended visual stimuli significantly earlier than non-VGPs and contribute to faster responding in visual tasks. As successful video-game play requires precise, time pressured, bimanual motor movements in response to complex visual stimuli, which in this sample began during early childhood, these differences may reflect the experience and training involved during the development of video-game expertise, but training studies are needed to test this prediction. PMID:24058667
Latham, Andrew J; Patston, Lucy L M; Westermann, Christine; Kirk, Ian J; Tippett, Lynette J
2013-01-01
Increasing behavioural evidence suggests that expert video game players (VGPs) show enhanced visual attention and visuospatial abilities, but what underlies these enhancements remains unclear. We administered the Poffenberger paradigm with concurrent electroencephalogram (EEG) recording to assess occipital N1 latencies and interhemispheric transfer time (IHTT) in expert VGPs. Participants comprised 15 right-handed male expert VGPs and 16 non-VGP controls matched for age, handedness, IQ and years of education. Expert VGPs began playing before age 10, had a minimum 8 years experience, and maintained playtime of at least 20 hours per week over the last 6 months. Non-VGPs had little-to-no game play experience (maximum 1.5 years). Participants responded to checkerboard stimuli presented to the left and right visual fields while 128-channel EEG was recorded. Expert VGPs responded significantly more quickly than non-VGPs. Expert VGPs also had significantly earlier occipital N1s in direct visual pathways (the hemisphere contralateral to the visual field in which the stimulus was presented). IHTT was calculated by comparing the latencies of occipital N1 components between hemispheres. No significant between-group differences in electrophysiological estimates of IHTT were found. Shorter N1 latencies may enable expert VGPs to discriminate attended visual stimuli significantly earlier than non-VGPs and contribute to faster responding in visual tasks. As successful video-game play requires precise, time pressured, bimanual motor movements in response to complex visual stimuli, which in this sample began during early childhood, these differences may reflect the experience and training involved during the development of video-game expertise, but training studies are needed to test this prediction.
Perez, Anne E; Haskell, Neal H; Wells, Jeffrey D
2014-08-01
Carrion insect succession patterns have long been used to estimate the postmortem interval (PMI) during a death investigation. However, no published carrion succession study included sufficient replication to calculate a confidence interval about a PMI estimate based on occurrence data. We exposed 53 pig carcasses (16±2.5 kg), near the likely minimum needed for such statistical analysis, at a site in north-central Indiana, USA, over three consecutive summer seasons. Insects and Collembola were sampled daily from each carcass for a total of 14 days, by this time each was skeletonized. The criteria for judging a life stage of a given species to be potentially useful for succession-based PMI estimation were (1) nonreoccurrence (observed during a single period of presence on a corpse), and (2) found in a sufficiently large proportion of carcasses to support a PMI confidence interval. For this data set that proportion threshold is 45/53. Of the 266 species collected and identified, none was nonreoccuring in that each showed at least a gap of one day on a single carcass. If the definition of nonreoccurrence is relaxed to include such a single one-day gap the larval forms of Necrophilaamericana, Fanniascalaris, Cochliomyia macellaria, Phormiaregina, and Luciliaillustris satisfied these two criteria. Adults of Creophilus maxillosus, Necrobiaruficollis, and Necrodessurinamensis were common and showed only a few, single-day gaps in occurrence. C.maxillosus, P.regina, and L.illustris displayed exceptional forensic utility in that they were observed on every carcass. Although these observations were made at a single site during one season of the year, the species we found to be useful have large geographic ranges. We suggest that future carrion insect succession research focus only on a limited set of species with high potential forensic utility so as to reduce sample effort per carcass and thereby enable increased experimental replication. Copyright © 2014 Elsevier Ireland
NASA Astrophysics Data System (ADS)
Townson, Reid W.; Zavgorodni, Sergei
2014-12-01
In GPU-based Monte Carlo simulations for radiotherapy dose calculation, source modelling from a phase-space source can be an efficiency bottleneck. Previously, this has been addressed using phase-space-let (PSL) sources, which provided significant efficiency enhancement. We propose that additional speed-up can be achieved through the use of a hybrid primary photon point source model combined with a secondary PSL source. A novel phase-space derived and histogram-based implementation of this model has been integrated into gDPM v3.0. Additionally, a simple method for approximately deriving target photon source characteristics from a phase-space that does not contain inheritable particle history variables (LATCH) has been demonstrated to succeed in selecting over 99% of the true target photons with only ~0.3% contamination (for a Varian 21EX 18 MV machine). The hybrid source model was tested using an array of open fields for various Varian 21EX and TrueBeam energies, and all cases achieved greater than 97% chi-test agreement (the mean was 99%) above the 2% isodose with 1% / 1 mm criteria. The root mean square deviations (RMSDs) were less than 1%, with a mean of 0.5%, and the source generation time was 4-5 times faster. A seven-field intensity modulated radiation therapy patient treatment achieved 95% chi-test agreement above the 10% isodose with 1% / 1 mm criteria, 99.8% for 2% / 2 mm, a RMSD of 0.8%, and source generation speed-up factor of 2.5. Presented as part of the International Workshop on Monte Carlo Techniques in Medical Physics
Reddy, M Rami; Singh, U C; Erion, Mark D
2004-05-26
Free-energy perturbation (FEP) is considered the most accurate computational method for calculating relative solvation and binding free-energy differences. Despite some success in applying FEP methods to both drug design and lead optimization, FEP calculations are rarely used in the pharmaceutical industry. One factor limiting the use of FEP is its low throughput, which is attributed in part to the dependence of conventional methods on the user's ability to develop accurate molecular mechanics (MM) force field parameters for individual drug candidates and the time required to complete the process. In an attempt to find an FEP method that could eventually be automated, we developed a method that uses quantum mechanics (QM) for treating the solute, MM for treating the solute surroundings, and the FEP method for computing free-energy differences. The thread technique was used in all transformations and proved to be essential for the successful completion of the calculations. Relative solvation free energies for 10 structurally diverse molecular pairs were calculated, and the results were in close agreement with both the calculated results generated by conventional FEP methods and the experimentally derived values. While considerably more CPU demanding than conventional FEP methods, this method (QM/MM-based FEP) alleviates the need for development of molecule-specific MM force field parameters and therefore may enable future automation of FEP-based calculations. Moreover, calculation accuracy should be improved over conventional methods, especially for calculations reliant on MM parameters derived in the absence of experimental data.
NASA Astrophysics Data System (ADS)
Wang, Lilie; Ding, George X.
2014-07-01
The out-of-field dose can be clinically important as it relates to the dose of the organ-at-risk, although the accuracy of its calculation in commercial radiotherapy treatment planning systems (TPSs) receives less attention. This study evaluates the uncertainties of out-of-field dose calculated with a model based dose calculation algorithm, anisotropic analytical algorithm (AAA), implemented in a commercial radiotherapy TPS, Varian Eclipse V10, by using Monte Carlo (MC) simulations, in which the entire accelerator head is modeled including the multi-leaf collimators. The MC calculated out-of-field doses were validated by experimental measurements. The dose calculations were performed in a water phantom as well as CT based patient geometries and both static and highly modulated intensity-modulated radiation therapy (IMRT) fields were evaluated. We compared the calculated out-of-field doses, defined as lower than 5% of the prescription dose, in four H&N cancer patients and two lung cancer patients treated with volumetric modulated arc therapy (VMAT) and IMRT techniques. The results show that the discrepancy of calculated out-of-field dose profiles between AAA and the MC depends on the depth and is generally less than 1% for in water phantom comparisons and in CT based patient dose calculations for static field and IMRT. In cases of VMAT plans, the difference between AAA and MC is <0.5%. The clinical impact resulting from the error on the calculated organ doses were analyzed by using dose-volume histograms. Although the AAA algorithm significantly underestimated the out-of-field doses, the clinical impact on the calculated organ doses in out-of-field regions may not be significant in practice due to very low out-of-field doses relative to the target dose.
Bergstrom, Paul M.; Daly, Thomas P.; Moses, Edward I.; Patterson, Jr., Ralph W.; Schach von Wittenau, Alexis E.; Garrett, Dewey N.; House, Ronald K.; Hartmann-Siantar, Christine L.; Cox, Lawrence J.; Fujino, Donald H.
2000-01-01
A system and method is disclosed for radiation dose calculation within sub-volumes of a particle transport grid. In a first step of the method voxel volumes enclosing a first portion of the target mass are received. A second step in the method defines dosel volumes which enclose a second portion of the target mass and overlap the first portion. A third step in the method calculates common volumes between the dosel volumes and the voxel volumes. A fourth step in the method identifies locations in the target mass of energy deposits. And, a fifth step in the method calculates radiation doses received by the target mass within the dosel volumes. A common volume calculation module inputs voxel volumes enclosing a first portion of the target mass, inputs voxel mass densities corresponding to a density of the target mass within each of the voxel volumes, defines dosel volumes which enclose a second portion of the target mass and overlap the first portion, and calculates common volumes between the dosel volumes and the voxel volumes. A dosel mass module, multiplies the common volumes by corresponding voxel mass densities to obtain incremental dosel masses, and adds the incremental dosel masses corresponding to the dosel volumes to obtain dosel masses. A radiation transport module identifies locations in the target mass of energy deposits. And, a dose calculation module, coupled to the common volume calculation module and the radiation transport module, for calculating radiation doses received by the target mass within the dosel volumes.
Takada, Kenta; Kumada, Hiroaki; Liem, Peng Hong; Sakurai, Hideyuki; Sakae, Takeji
2016-12-01
We simulated the effect of patient displacement on organ doses in boron neutron capture therapy (BNCT). In addition, we developed a faster calculation algorithm (NCT high-speed) to simulate irradiation more efficiently. We simulated dose evaluation for the standard irradiation position (reference position) using a head phantom. Cases were assumed where the patient body is shifted in lateral directions compared to the reference position, as well as in the direction away from the irradiation aperture. For three groups of neutron (thermal, epithermal, and fast), flux distribution using NCT high-speed with a voxelized homogeneous phantom was calculated. The three groups of neutron fluxes were calculated for the same conditions with Monte Carlo code. These calculated results were compared. In the evaluations of body movements, there were no significant differences even with shifting up to 9mm in the lateral directions. However, the dose decreased by about 10% with shifts of 9mm in a direction away from the irradiation aperture. When comparing both calculations in the phantom surface up to 3cm, the maximum differences between the fluxes calculated by NCT high-speed with those calculated by Monte Carlo code for thermal neutrons and epithermal neutrons were 10% and 18%, respectively. The time required for NCT high-speed code was about 1/10th compared to Monte Carlo calculation. In the evaluation, the longitudinal displacement has a considerable effect on the organ doses. We also achieved faster calculation of depth distribution of thermal neutron flux using NCT high-speed calculation code. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Chang, Sanders; Sigel, Keith; Goldstein, Nathan E; Wisnivesky, Juan; Dharmarajan, Kavita V
2018-06-06
The American Society of Clinical Oncology recommends that all patients with metastatic disease receive dedicated palliative care (PC) services early in their illness, ideally via interdisciplinary care teams. We investigated the time trends of specialty palliative care consultations from the date of metastatic cancer diagnosis among patients receiving palliative radiation therapy (PRT). A shorter time interval between metastatic diagnosis and first PC consultation suggests earlier involvement of palliative care in a patient's life with metastatic cancer. In this IRB-approved retrospective analysis, patients treated with PRT for solid tumors (bone and brain) at a single tertiary care hospital between 2010 and 2016 were included. Cohorts were arbitrarily established by metastatic diagnosis within approximately two-year intervals: (1) 1/1/2010-3/27/2012; (2) 3/28/2012-5/21/2014; and (3) 5/22/2014-12/31/2016. Cox-proportional hazards regression modelling was used to compare trends of PC consultation among cohorts. Of 284 patients identified, 184 patients received PC consultation, whereas 15 patients died before receiving a PC consult. Median follow-up time until an event or censor was 257 days (range: 1,900). Patients in the most recent cohort had a shorter median time to first PC consult (57 days) compared to those in the first (374 days) and second (186 days) cohorts. On multivariable analysis, patients in the third cohort were more likely to undergo a PC consultation earlier in their metastatic illness (HR: 1.8, 95% CI: 1.2,2.8). Over a six-year period, palliative care consultation occurred earlier for metastatic patients treated with PRT at our institution. Copyright © 2018. Published by Elsevier Inc.
Earlier Pulmonary Valve Replacement in Down Syndrome Patients Following Tetralogy of Fallot Repair.
Sullivan, Rachel T; Frommelt, Peter C; Hill, Garick D
2017-08-01
The association between Down syndrome and pulmonary hypertension could contribute to more severe pulmonary regurgitation after tetralogy of Fallot repair and possibly earlier pulmonary valve replacement. We compared cardiac magnetic resonance measures of pulmonary regurgitation and right ventricular dilation as well as timing of pulmonary valve replacement between those with and without Down syndrome after tetralogy of Fallot repair. Review of our surgical database from 2000 to 2015 identified patients with tetralogy of Fallot with pulmonary stenosis. Those with Down syndrome were compared to those without. The primary outcome of interest was time from repair to pulmonary valve replacement. Secondary outcomes included pulmonary regurgitation and indexed right ventricular volume on cardiac magnetic resonance imaging. The cohort of 284 patients included 35 (12%) with Down syndrome. Transannular patch repair was performed in 210 (74%). Down syndrome showed greater degree of pulmonary regurgitation (55 ± 14 vs. 37 ± 16%, p = 0.01) without a significantly greater rate of right ventricular dilation (p = 0.09). In multivariable analysis, Down syndrome (HR 2.3, 95% CI 1.2-4.5, p = 0.02) and transannular patch repair (HR 5.5, 95% CI 1.7-17.6, p = 0.004) were significant risk factors for valve replacement. Those with Down syndrome had significantly lower freedom from valve replacement (p = 0.03). Down syndrome is associated with an increased degree of pulmonary regurgitation and earlier pulmonary valve replacement after tetralogy of Fallot repair. These patients require earlier assessment by cardiac magnetic resonance imaging to determine timing of pulmonary valve replacement and evaluation for and treatment of preventable causes of pulmonary hypertension.
Insight Into Illness and Cognition in Schizophrenia in Earlier and Later Life.
Gerretsen, Philip; Voineskos, Aristotle N; Graff-Guerrero, Ariel; Menon, Mahesh; Pollock, Bruce G; Mamo, David C; Mulsant, Benoit H; Rajji, Tarek K
2017-04-01
Impaired insight into illness in schizophrenia is associated with illness severity and deficits in premorbid intellectual function, executive function, and memory. A previous study of patients aged 60 years and older found that illness severity and premorbid intellectual function accounted for variance in insight impairment. As such, we aimed to test whether similar relationships would be observed in earlier life. A retrospective analysis was performed on 1 large sample of participants (n = 171) with a DSM-IV-TR diagnosis of schizophrenia aged 19 to 79 years acquired from 2 studies: (1) a psychosocial intervention trial for older persons with schizophrenia (June 2008 to May 2014) and (2) a diffusion tensor imaging and genetics study of psychosis across the life span (February 2007 to December 2013). We assessed insight into illness using the Positive and Negative Syndrome Scale (PANSS) item G12 and explored its relationship to illness severity (PANSS total modified), premorbid intellectual function (Wechsler Test of Adult Reading [WTAR]), and cognition. Insight impairment was more severe in later life (≥ 60 years) than in earlier years (t = -3.75, P < .001). Across the whole sample, the variance of impaired insight was explained by PANSS total modified (Exp[B] = 1.070, P < .001) and WTAR scores (Exp[B] = 0.970, P = .028). Although age and cognition were correlated with impaired insight, they did not independently contribute to its variance. However, the relationships between impaired insight and illness severity and between impaired insight and cognition, particularly working memory, were stronger in later life than in earlier life. These results suggest an opportunity for intervention may exist with cognitive-enhancing neurostimulation or medications to improve insight into illness in schizophrenia across the life span. Original study registered on ClinicalTrials.gov (identifier: NCT00832845). © Copyright 2017 Physicians Postgraduate Press, Inc.
Gullón, Alejandra; Verdejo, José; de Miguel, Rosa; Gómez, Ana; Sanz, Jesús
2016-10-01
Late diagnosis (LD) of human immunodeficiency virus (HIV) infection continues to be a significant problem that increases disease burden both for patients and for the public health system. Guidelines have been updated in order to facilitate earlier HIV diagnosis, introducing "indicator condition-guided HIV testing". In this study, we analysed the frequency of LD and associated risk factors. We retrospectively identified those cases that could be considered missed opportunities for an earlier diagnosis. All patients newly diagnosed with HIV infection who attended Hospital La Princesa, Madrid (Spain) between 2007 and 2014 were analysed. We collected epidemiological, clinical and immunological data. We also reviewed electronic medical records from the year before the HIV diagnosis to search for medical consultations due to clinical indicators. HIV infection was diagnosed in 354 patients. The median CD4 count at presentation was 352 cells/mm(3). Overall, 158 patients (50%) met the definition of LD, and 97 (30.7%) the diagnosis of advanced disease. LD was associated with older age and was more frequent amongst immigrants. Heterosexual relations and injection drug use were more likely to be the reasons for LD than relations between men who have sex with men. During the year preceding the diagnosis, 46.6% of the patients had sought medical advice owing to the presence of clinical indicators that should have led to HIV testing. Of those, 24 cases (14.5%) were classified as missed opportunities for earlier HIV diagnosis because testing was not performed. According to these results, all health workers should pursue early HIV diagnosis through the proper implementation of HIV testing guidelines. Such an approach would prove directly beneficial to the patient and indirectly beneficial to the general population through the reduction in the risk of transmission.
Earlier time to aerobic exercise is associated with faster recovery following acute sport concussion
Richards, Doug; Comper, Paul; Hutchison, Michael G.
2018-01-01
Objective To determine whether earlier time to initiation of aerobic exercise following acute concussion is associated with time to full return to (1) sport and (2) school or work. Methods A retrospective stratified propensity score survival analysis of acute (≤14 days) concussion was used to determine whether time (days) to initiation of aerobic exercise post-concussion was associated with, both, time (days) to full return to (1) sport and (2) school or work. Results A total of 253 acute concussions [median (IQR) age, 17.0 (15.0–20.0) years; 148 (58.5%) males] were included in this study. Multivariate Cox regression models identified that earlier time to aerobic exercise was associated with faster return to sport and school/work adjusting for other covariates, including quintile