Simulation and analysis of main steam control system based on heat transfer calculation
NASA Astrophysics Data System (ADS)
Huang, Zhenqun; Li, Ruyan; Feng, Zhongbao; Wang, Songhan; Li, Wenbo; Cheng, Jiwei; Jin, Yingai
2018-05-01
In this paper, a 300 MW thermal power plant boiler was studied. A MATLAB program was written to calculate the heat transfer between the main steam and the boiler flue gas, and to compute the amount of spray water needed to keep the main steam at its target temperature. The heat transfer calculation program was then introduced into a Simulink simulation platform for a control system based on multiple-model switching and heat transfer calculation. The results show that the multiple-model switching control system based on heat transfer calculation not only overcomes the large inertia and large hysteresis characteristics of the main steam temperature, but also adapts to changes in boiler load.
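The multiple-model switching idea can be sketched in a few lines: pick controller gains for the linear model matching the current boiler load, then drive a spray-water correction from the temperature error. This is a toy illustration, not the authors' MATLAB/Simulink program; the load bands, PI gains, and first-order plant model are all invented for the example.

```python
# Toy sketch of multiple-model switching control of main steam temperature.
# Load bands, PI gains, and the first-order plant model are hypothetical.

def select_gains(load_mw):
    """Pick PI gains for the linear model matching the current boiler load."""
    if load_mw < 180:
        return 0.8, 0.05   # low-load model
    elif load_mw < 250:
        return 0.5, 0.03   # mid-load model
    return 0.3, 0.02       # near-rated-load model

def simulate(target=540.0, load_mw=300.0, steps=200, dt=1.0):
    """PI spray-water control of a first-order-lag steam temperature plant."""
    temp, integral = 520.0, 0.0      # start below the target temperature
    kp, ki = select_gains(load_mw)
    for _ in range(steps):
        error = temp - target         # positive when the steam is too hot
        integral += error * dt
        spray = kp * error + ki * integral           # spray-water demand
        # flue gas drives temp toward 560 C with a 30 s lag; spray cools it
        temp += dt * ((560.0 - temp) / 30.0 - 0.5 * spray)
    return temp
```

With these (made-up) numbers the loop settles on the 540 C setpoint; switching the gain set with load is what lets one simple controller family cover the whole operating range.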
Uniformity testing: assessment of a centralized web-based uniformity analysis system.
Klempa, Meaghan C
2011-06-01
Uniformity testing is performed daily to ensure adequate camera performance before clinical use. The aim of this study is to assess the reliability of Beth Israel Deaconess Medical Center's locally built, centralized, Web-based uniformity analysis system by examining the differences between manufacturer and Web-based National Electrical Manufacturers Association integral uniformity calculations measured in the useful field of view (FOV) and the central FOV. Manufacturer and Web-based integral uniformity calculations measured in the useful FOV and the central FOV were recorded over a 30-d period for 4 cameras from 3 different manufacturers. These data were then statistically analyzed. The differences between the uniformity calculations were computed, in addition to the means and the SDs of these differences for each head of each camera. There was a correlation between the manufacturer and Web-based integral uniformity calculations in the useful FOV and the central FOV over the 30-d period. The average differences between the manufacturer and Web-based useful FOV calculations ranged from -0.30 to 0.099, with SD ranging from 0.092 to 0.32. For the central FOV calculations, the average differences ranged from -0.163 to 0.055, with SD ranging from 0.074 to 0.24. Most of the uniformity calculations computed by this centralized Web-based uniformity analysis system are comparable to the manufacturers' calculations, suggesting that this system is reasonably reliable and effective. This finding is important because centralized Web-based uniformity analysis systems are advantageous in that they test camera performance in the same manner regardless of the manufacturer.
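The NEMA integral uniformity figure compared in this study is a simple max/min contrast over the field of view. A minimal sketch follows; note the full NEMA procedure also smooths the flood image with a nine-point kernel and excludes edge pixels, which is omitted here.

```python
import numpy as np

def integral_uniformity(image):
    """NEMA integral uniformity in percent:
    IU = 100 * (max - min) / (max + min) over the analyzed field of view.
    (The full NEMA procedure also applies nine-point smoothing and
    pixel-validity rules, omitted here for brevity.)"""
    c = np.asarray(image, dtype=float)
    return 100.0 * (c.max() - c.min()) / (c.max() + c.min())

def central_fov(image, fraction=0.75):
    """Central FOV crop; NEMA defines the CFOV as 75% of the UFOV dimensions."""
    h, w = image.shape
    dh, dw = int(h * (1 - fraction) / 2), int(w * (1 - fraction) / 2)
    return image[dh:h - dh, dw:w - dw]
```

Running both the manufacturer's and a centralized system's version of this calculation on the same daily flood images is what allows the direct difference statistics reported above.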
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; Liu, B; Liang, B
Purpose: The current CyberKnife treatment planning system (TPS) provides two dose calculation algorithms: Ray-tracing and Monte Carlo. The Ray-tracing algorithm is fast but less accurate, and cannot handle the irregular fields produced by the multi-leaf collimator system recently introduced with the CyberKnife M6. The Monte Carlo method has well-known accuracy, but the current version still takes a long time to finish dose calculations. The purpose of this paper is to develop a GPU-based fast collapsed-cone convolution/superposition (C/S) dose engine for the CyberKnife system that achieves both accuracy and efficiency. Methods: The TERMA distribution from a poly-energetic source was calculated in a beam's-eye-view coordinate system, which is GPU friendly and has linear complexity. The dose distribution was then computed by inversely collecting the energy depositions from all TERMA points along 192 collapsed-cone directions. The EGSnrc user code was used to pre-calculate energy deposition kernels (EDKs) for a series of mono-energetic photons. The energy spectrum was reconstructed from the measured tissue maximum ratio (TMR) curve, and the TERMA-averaged cumulative kernels were then calculated. Beam hardening parameters and intensity profiles were optimized based on measurement data from the CyberKnife system. Results: The differences between measured and calculated TMR are less than 1% for all collimators except in the build-up regions. The calculated profiles also showed good agreement with the measured doses, within 1% except in the penumbra regions. The developed C/S dose engine was also used to evaluate four clinical CyberKnife treatment plans; compared against Monte Carlo, it showed better dose calculation accuracy than the Ray-tracing algorithm for heterogeneous cases. Dose calculation takes a few seconds per beam, depending on collimator size and dose calculation grid.
Conclusion: A GPU-based C/S dose engine has been developed for the CyberKnife system, which was proven to be efficient and accurate for clinical purposes, and can be easily implemented in a TPS.
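In one dimension, the TERMA-times-kernel superposition at the heart of a C/S engine reduces to a convolution. The sketch below is a toy illustration of that idea (the kernel values are made up); a real engine performs this spreading in 3D along the ~192 collapsed-cone directions mentioned above.

```python
import numpy as np

def dose_1d(terma, edk):
    """Toy 1D convolution/superposition: each TERMA element spreads its
    released energy to neighbours according to the energy deposition
    kernel (EDK). A real C/S engine does this in 3D along ~192
    collapsed-cone directions rather than by direct convolution."""
    return np.convolve(terma, edk, mode="same")
```

With a unit TERMA impulse, the computed dose simply reproduces the kernel shape, which is a convenient sanity check for any superposition implementation.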
NASA Astrophysics Data System (ADS)
Wisniewski, H.; Gourdain, P.-A.
2017-10-01
APOLLO is an online, Linux-based plasma calculator. Users can input variables that correspond to their specific plasma, such as ion and electron densities, temperatures, and external magnetic fields. The system is based on a webserver where a FastCGI protocol computes key plasma parameters including frequencies, lengths, velocities, and dimensionless numbers. FastCGI was chosen to overcome security problems caused by Java-based plugins, and it also speeds up calculations relative to PHP-based systems. APOLLO is built upon the Wt library, which turns any web browser into a versatile, fast graphical user interface. All values with units are expressed in SI units except temperature, which is in electron-volts. SI units were chosen over cgs units because of the gradual shift toward SI units within the plasma community. APOLLO is intended to be a fast calculator that also provides the user with the equations used to calculate the plasma parameters. It is intended for undergraduates taking plasma courses as well as graduate students and researchers who need a quick reference calculation.
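Two of the standard parameters such a calculator reports can be sketched directly in SI units with temperature in electron-volts, matching the convention described above. This is an independent sketch, not APOLLO's code.

```python
import math

# CODATA physical constants (SI)
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E = 9.1093837015e-31       # electron mass, kg

def debye_length(n_e, T_e_eV):
    """Electron Debye length in m, for density n_e in m^-3 and
    temperature in eV: lambda_D = sqrt(eps0 * k*T_e / (n_e * e^2)),
    where k*T_e = T_e_eV * e."""
    return math.sqrt(EPS0 * T_e_eV / (n_e * E_CHARGE))

def plasma_frequency(n_e):
    """Electron plasma (angular) frequency in rad/s:
    omega_pe = sqrt(n_e * e^2 / (eps0 * m_e))."""
    return math.sqrt(n_e * E_CHARGE**2 / (EPS0 * M_E))
```

For a 10 eV, 10^18 m^-3 plasma these give a Debye length of a few tens of micrometres and a plasma frequency of order 10^10 rad/s, typical quick-reference numbers of the kind the calculator is meant to provide.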
NASA Technical Reports Server (NTRS)
Miller, Robert H. (Inventor); Ribbens, William B. (Inventor)
2003-01-01
A method and system for detecting a failure or performance degradation in a dynamic system having sensors for measuring state variables and providing corresponding output signals in response to one or more system input signals are provided. The method includes calculating estimated gains of a filter and selecting an appropriate linear model for processing the output signals based on the input signals. The step of calculating utilizes one or more models of the dynamic system to obtain estimated signals. The method further includes calculating output error residuals based on the output signals and the estimated signals. The method also includes detecting one or more hypothesized failures or performance degradations of a component or subsystem of the dynamic system based on the error residuals. The step of calculating the estimated values is performed optimally with respect to one or more of: noise, uncertainty of parameters of the models and un-modeled dynamics of the dynamic system which may be a flight vehicle or financial market or modeled financial system.
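A minimal scalar illustration of the residual-based detection described above follows. The plant, observer gain, injected fault, and threshold are all invented for the example; the patented method additionally selects among multiple linear models and computes filter gains optimally with respect to noise and model uncertainty.

```python
import numpy as np

def run_detection(steps=100, fault_at=50, fault_bias=0.5, threshold=0.2):
    """Toy scalar plant x[k+1] = 0.9 x[k] + u with sensor y = x + noise.
    A fixed-gain observer produces estimated outputs; the output error
    residual jumps when a sensor bias fault is injected at step fault_at."""
    rng = np.random.default_rng(0)
    x, xhat, gain = 0.0, 0.0, 0.5
    alarms = []
    for k in range(steps):
        u = 1.0
        x = 0.9 * x + u                      # true system
        xhat = 0.9 * xhat + u                # model-based prediction
        y = x + 0.01 * rng.standard_normal() # noisy sensor reading
        if k >= fault_at:
            y += fault_bias                  # hypothesized sensor failure
        residual = y - xhat                  # output error residual
        alarms.append(abs(residual) > threshold)
        xhat += gain * residual              # correct estimate with residual
    return alarms
```

Before the fault the residual stays at the noise level; at the fault step it jumps by the bias magnitude, which is exactly the signature a hypothesis test on the residuals looks for.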
Nalichowski, Adrian; Burmeister, Jay
2013-07-01
To compare optimization characteristics, plan quality, and treatment delivery efficiency between total marrow irradiation (TMI) plans using the new TomoTherapy graphics processing unit (GPU) based dose engine and the CPU/cluster based dose engine. Five TMI plans created on an anthropomorphic phantom were optimized and calculated with both dose engines. The planning target volume (PTV) included all bones from head to mid femur except the upper extremities. Evaluated organs at risk (OAR) consisted of lung, liver, heart, kidneys, and brain. The following treatment parameters were used to generate the TMI plans: field widths of 2.5 and 5 cm, modulation factors of 2 and 2.5, and pitch of either 0.287 or 0.43. The optimization parameters were chosen based on the PTV and OAR priorities, and the plans were optimized with a fixed number of iterations. The PTV constraint was selected to ensure that at least 95% of the PTV received the prescription dose. The plans were evaluated based on D80 and D50 (dose to 80% and 50% of the OAR volume, respectively) and hotspot volumes within the PTVs. Gamma indices (Γ) were also used to compare planar dose distributions between the two modalities. The optimization and dose calculation times, as well as the treatment delivery times, were compared between the two systems. The results showed very good dosimetric agreement between the GPU and CPU calculated plans for all evaluated planning parameters, indicating that both systems converge on nearly identical plans. All D80 and D50 parameters varied by less than 3% of the prescription dose, with an average difference of 0.8%. A Γ(3%, 3 mm) analysis of each GPU plan against the baseline CPU plan resulted in over 90% of calculated voxels satisfying the Γ < 1 criterion; the average fraction of voxels meeting the criterion across all plans was 97%.
In terms of dose optimization/calculation efficiency, there was a 20-fold reduction in planning time with the new GPU system. The average optimization/dose calculation time utilizing the traditional CPU/cluster based system was 579 vs 26.8 min for the GPU based system. There was no difference in the calculated treatment delivery time per fraction. Beam-on time varied based on field width and pitch and ranged between 15 and 28 min. The TomoTherapy GPU based dose engine is capable of calculating TMI treatment plans with plan quality nearly identical to plans calculated using the traditional CPU/cluster based system, while significantly reducing the time required for optimization and dose calculation.
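The Γ(3%, 3 mm) comparison used above can be sketched in one dimension with global normalization to the reference maximum. This is a simplified illustration of the standard gamma-index definition, not TomoTherapy's implementation.

```python
import numpy as np

def gamma_1d(ref, ev, positions, dose_tol=0.03, dist_tol=3.0):
    """1D gamma index: for each reference point, the minimum over all
    evaluated points of
        sqrt((dose diff / (dose_tol * Dmax))^2 + (distance / dist_tol)^2).
    positions are in mm; dose_tol is fractional (0.03 = 3% global)."""
    ref, ev, pos = (np.asarray(a, float) for a in (ref, ev, positions))
    dmax = ref.max()
    out = np.empty_like(ref)
    for i in range(ref.size):
        dose_term = (ev - ref[i]) / (dose_tol * dmax)
        dist_term = (pos - pos[i]) / dist_tol
        out[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return out
```

The pass rate quoted in the abstract is then simply `(gamma <= 1).mean()` over the compared voxels.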
NASA Astrophysics Data System (ADS)
Abdenov, A. Zh; Trushin, V. A.; Abdenova, G. A.
2018-01-01
The paper considers how to populate the relevant SIEM nodes with calculated objective assessments in order to improve on the reliability of subjective expert assessments. The proposed methodology is intended to make security risk assessment of information systems as accurate as possible, and to support real-time operational information protection in enterprise information systems. Risk calculations are based on objective estimates of the probabilities that adverse events are carried out, and on predictions of the magnitude of damage from information security violations. These objective assessments are needed to increase the reliability of the proposed expert assessments.
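At its core, the objective risk estimate described here is an expected-loss sum over adverse events. A minimal sketch follows; the event list and alert thresholds are invented for illustration, not taken from the paper.

```python
def risk_score(events):
    """Expected loss: sum over adverse events of
    (implementation probability) * (predicted damage magnitude)."""
    return sum(p * damage for p, damage in events)

def alert_level(score, thresholds=(50.0, 500.0)):
    """Map a risk score to a SIEM-style alert level.
    The threshold values are hypothetical."""
    low, high = thresholds
    if score < low:
        return "green"
    return "orange" if score < high else "red"
```

Feeding such scores into SIEM nodes gives the objective baseline against which subjective expert assessments can be checked.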
Preliminary calculations related to the accident at Three Mile Island
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirchner, W.L.; Stevenson, M.G.
This report discusses preliminary studies of the Three Mile Island Unit 2 (TMI-2) accident based on available methods and data. The work reported includes: (1) a TRAC base case calculation out to 3 hours into the accident sequence; (2) TRAC parametric calculations, which are the same as the base case except for a single hypothetical change in the system conditions, such as assuming the high pressure injection (HPI) system operated as designed rather than as in the accident; and (3) estimates of fuel rod cladding failure, cladding oxidation due to zirconium metal-steam reactions, hydrogen release due to cladding oxidation, cladding ballooning, cladding embrittlement, and subsequent cladding breakup, based on TRAC-calculated cladding temperatures and system pressures. Some conclusions of this work are: the TRAC base case accident calculation agrees very well with known system conditions to nearly 3 hours into the accident; the parametric calculations indicate that loss-of-core cooling was most influenced by the throttling of HPI flows, given the accident initiating events and the pressurizer electromagnetic-operated valve (EMOV) failing to close as designed; failure of nearly all the rods and gaseous fission product release from the failed rods is predicted to have occurred at about 2 hours and 30 minutes; and cladding oxidation (zirconium-steam reaction) up to 3 hours resulted in the production of approximately 40 kilograms of hydrogen.
A GPU-accelerated and Monte Carlo-based intensity modulated proton therapy optimization system.
Ma, Jiasen; Beltran, Chris; Seum Wan Chan Tseung, Hok; Herman, Michael G
2014-12-01
Conventional spot scanning intensity modulated proton therapy (IMPT) treatment planning systems (TPSs) optimize proton spot weights based on analytical dose calculations. These analytical dose calculations have been shown to have severe limitations in heterogeneous materials. Monte Carlo (MC) methods do not have these limitations; however, MC-based systems have been of limited clinical use due to the large number of beam spots in IMPT and the extremely long calculation time of traditional MC techniques. In this work, the authors present a clinically applicable IMPT TPS that utilizes a very fast MC calculation. An in-house graphics processing unit (GPU)-based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC generated influence map, a modified least-squares optimization method was used to achieve the desired dose volume histograms (DVHs). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitation issues for the large dose influence maps that resulted from maintaining the intrinsic CT resolution. The effects of tail cutoff and starting condition were studied and minimized in this work. For relatively large and complex three-field head and neck cases, i.e., >100,000 spots with a target volume of ∼1000 cm³ and multiple surrounding critical structures, the optimization together with the initial MC dose influence map calculation was done in a clinically viable time frame (less than 30 min) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The in-house MC TPS plans were comparable to commercial TPS plans based on DVH comparisons. An MC-based treatment planning system was developed. The treatment planning can be performed in a clinically viable time frame on a hardware system costing around 45,000 dollars.
The fast calculation and optimization make the system easily expandable to robust and multicriteria optimization.
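The "modified least-squares" step can be illustrated with a projected-gradient solve of min ||D w − d||² subject to w ≥ 0 for a tiny influence matrix. This is a sketch of the general technique, not the authors' optimizer; their modified formulation and multi-GPU implementation differ in detail.

```python
import numpy as np

def optimize_spot_weights(D, d_presc, iters=500):
    """Projected gradient descent on f(w) = ||D w - d_presc||^2 with w >= 0.
    D is the (voxels x spots) dose influence matrix, e.g. from a MC engine;
    d_presc is the prescribed dose per voxel."""
    step = 1.0 / (2.0 * np.linalg.norm(D, 2) ** 2)   # stable step size
    w = np.ones(D.shape[1])
    for _ in range(iters):
        grad = 2.0 * D.T @ (D @ w - d_presc)
        w = np.maximum(w - step * grad, 0.0)          # project onto w >= 0
    return w
```

The nonnegativity projection is essential: physical spot weights cannot be negative, which is why plain unconstrained least squares is not used directly.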
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-07
... Wind Erosion Prediction System for Soil Erodibility System Calculations for the Natural Resources... Erosion Prediction System (WEPS) for soil erodibility system calculations scheduled for implementation for... computer model is a process-based, daily time-step computer model that predicts soil erosion via simulation...
Development of a web-based CT dose calculator: WAZA-ARI.
Ban, N; Takahashi, F; Sato, K; Endo, A; Ono, K; Hasegawa, T; Yoshitake, T; Katsunuma, Y; Kai, M
2011-09-01
A web-based computed tomography (CT) dose calculation system (WAZA-ARI) is being developed based on the modern techniques for the radiation transport simulation and for software implementation. Dose coefficients were calculated in a voxel-type Japanese adult male phantom (JM phantom), using the Particle and Heavy Ion Transport code System. In the Monte Carlo simulation, the phantom was irradiated with a 5-mm-thick, fan-shaped photon beam rotating in a plane normal to the body axis. The dose coefficients were integrated into the system, which runs as Java servlets within Apache Tomcat. Output of WAZA-ARI for GE LightSpeed 16 was compared with the dose values calculated similarly using MIRD and ICRP Adult Male phantoms. There are some differences due to the phantom configuration, demonstrating the significance of the dose calculation with appropriate phantoms. While the dose coefficients are currently available only for limited CT scanner models and scanning options, WAZA-ARI will be a useful tool in clinical practice when development is finalised.
Creative Uses for Calculator-based Laboratory (CBL) Technology in Chemistry.
ERIC Educational Resources Information Center
Sales, Cynthia L.; Ragan, Nicole M.; Murphy, Maureen Kendrick
1999-01-01
Reviews three projects that use a graphing calculator linked to a calculator-based laboratory device as a portable data-collection system for students in chemistry classes. Projects include Isolation, Purification and Quantification of Buckminsterfullerene from Woodstove Ashes; Determination of the Activation Energy Associated with the…
Programmable calculator as a data system controller
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barth, A.W.; Strasburg, A.C.
Digital data techniques are in common use for analysis of analog information obtained in various tests, and systems have been developed which use a minicomputer as the central controller and data processor. Now, microprocessors allow new design approaches at considerably less cost. This report outlines an approach to system design based on the use of a programmable calculator as the data system controller. A block diagram of the calculator-controlled data system is shown. It was found that the programmable calculator provides a viable alternative to minicomputers or microprocessors for the development laboratory requiring digital data processing. 3 figures. (RWR)
Tree value system: users guide.
J.K. Ayer Sachet; D.G. Briggs; R.D. Fight
1989-01-01
This paper instructs resource analysts on use of the Tree Value System (TREEVAL). TREEVAL is a microcomputer system of programs for calculating tree or stand values and volumes based on predicted product recovery. Designed for analyzing silvicultural decisions, the system can also be used for appraisals and for evaluating log bucking. The system calculates results...
Claimed Versus Calculated Cue-Weighting Systems for Screening Employee Applicants
ERIC Educational Resources Information Center
Blevins, David E.
1975-01-01
This research compares the cue-weighting system which assessors claimed they used with the cue-weighting system one would infer they used based on multiple observations of their assessing behavior. The claimed cue-weighting systems agreed poorly with the empirically calculated cue-weighting systems for all assessors except one who utilized only…
Counter sniper: a localization system based on dual thermal imager
NASA Astrophysics Data System (ADS)
He, Yuqing; Liu, Feihu; Wu, Zheng; Jin, Weiqi; Du, Benfang
2010-11-01
Sniper tactics are widely used in modern warfare, which creates an urgent requirement for counter-sniper detection devices. This paper proposes an anti-sniper detection system based on a dual thermal imaging system. By combining the infrared characteristics of the muzzle flash and the bullet trajectory in the binocular infrared images obtained by the dual infrared imaging system, the exact location of the sniper is analyzed and calculated. The paper mainly focuses on the system design method, including the structure and parameter selection. It also analyzes the location calculation method based on binocular stereo vision and image analysis, and gives the fusion result as the sniper's position.
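The binocular range calculation underlying such a system is the standard stereo triangulation relation for a pinhole camera pair; the numbers in the example are made up.

```python
def stereo_range(focal_px, baseline_m, disparity_px):
    """Binocular stereo range to a target (e.g., a muzzle flash):
    Z = f * B / d, for focal length f (pixels), camera baseline B (m),
    and disparity d (pixels) between the two thermal images."""
    if disparity_px <= 0:
        raise ValueError("target must have positive disparity")
    return focal_px * baseline_m / disparity_px
```

Accuracy degrades quadratically with range because the disparity shrinks, which is why baseline and focal length selection matter in the system design.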
Optimal Redundancy Management in Reconfigurable Control Systems Based on Normalized Nonspecificity
NASA Technical Reports Server (NTRS)
Wu, N.Eva; Klir, George J.
1998-01-01
In this paper the notion of normalized nonspecificity is introduced. The nonspecificity measures the uncertainty of the estimated parameters that reflect impairment in a controlled system. Based on this notion, a quantity called the reconfiguration coverage is calculated. It represents the likelihood of success of a control reconfiguration action. This coverage links the overall system reliability to the achievable and required control performance, as well as diagnostic performance. The coverage, when calculated on-line, is used for managing the redundancy in the system.
NASA Astrophysics Data System (ADS)
Katayama-Yoshida, Hiroshi; Nakanishi, Akitaka; Uede, Hiroki; Takawashi, Yuki; Fukushima, Tetsuya; Sato, Kazunori
2014-03-01
Based upon ab initio electronic structure calculations, I will discuss the general rules of negative effective U systems: (1) exchange-correlation-induced negative effective U, caused by the stability of the exchange-correlation energy in Hund's rule with high-spin ground states of the d5 configuration, and (2) charge-excitation-induced negative effective U, caused by the stability of the chemical bond in the closed shells of the s2, p6, and d10 configurations. I will show calculated results for negative effective U systems such as hole-doped CuAlO2 and CuFeS2. Based on total energy calculations of antiferromagnetic and ferromagnetic states, I will discuss the magnetic phase diagram and superconductivity upon hole doping. I will also discuss a computational materials design method for high-Tc superconductors by ab initio calculation going beyond the LDA, and multi-scale simulations.
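The sign of the effective U follows from the total energies of neighbouring charge states; a one-line sketch of the standard definition used in such analyses:

```python
def effective_u(e_nminus1, e_n, e_nplus1):
    """Effective Coulomb interaction from total energies E(n) of charge
    states: U_eff = E(n+1) + E(n-1) - 2 E(n).
    U_eff < 0 means the 2n -> (n+1) + (n-1) charge disproportionation is
    exothermic, the situation discussed for hole-doped CuAlO2 and CuFeS2."""
    return e_nplus1 + e_nminus1 - 2.0 * e_n
```

Feeding ab initio total energies into this difference is how a calculation decides whether a given configuration realizes negative effective U.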
Zhang, Xu; Jin, Weiqi; Li, Jiakun; Wang, Xia; Li, Shuo
2017-04-01
Thermal imaging technology is an effective means of detecting hazardous gas leaks. Much attention has been paid to evaluation of the performance of gas leak infrared imaging detection systems due to several potential applications. The minimum resolvable temperature difference (MRTD) and the minimum detectable temperature difference (MDTD) are commonly used as the main indicators of thermal imaging system performance. This paper establishes a minimum detectable gas concentration (MDGC) performance evaluation model based on the definition and derivation of MDTD. We proposed the direct calculation and equivalent calculation method of MDGC based on the MDTD measurement system. We build an experimental MDGC measurement system, which indicates the MDGC model can describe the detection performance of a thermal imaging system to typical gases. The direct calculation, equivalent calculation, and direct measurement results are consistent. The MDGC and the minimum resolvable gas concentration (MRGC) model can effectively describe the performance of "detection" and "spatial detail resolution" of thermal imaging systems to gas leak, respectively, and constitute the main performance indicators of gas leak detection systems.
Transmission Loss Calculation using A and B Loss Coefficients in Dynamic Economic Dispatch Problem
NASA Astrophysics Data System (ADS)
Jethmalani, C. H. Ram; Dumpa, Poornima; Simon, Sishaj P.; Sundareswaran, K.
2016-04-01
This paper analyzes the performance of A-loss coefficients in evaluating transmission losses in a Dynamic Economic Dispatch (DED) problem. The performance analysis is carried out by comparing the losses computed using nominal A loss coefficients and nominal B loss coefficients against the load flow solution obtained by the standard Newton-Raphson (NR) method. A density based clustering method based on connected regions with sufficiently high density (DBSCAN) is employed to identify the best regions of the A and B loss coefficients. Based on the results obtained through cluster analysis, a novel approach to improving the accuracy of network loss calculation is proposed: based on the change in per unit load values between load intervals, the loss coefficients are updated for calculating the transmission losses. The proposed algorithm is tested and validated on the IEEE 6 bus, IEEE 14 bus, IEEE 30 bus, and IEEE 118 bus systems. All simulations are carried out using SCILAB 5.4 (www.scilab.org), an open source software package.
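The B-coefficient loss evaluation the paper compares against is Kron's quadratic loss formula; a minimal sketch (the coefficient values in the example are invented):

```python
import numpy as np

def transmission_loss(P, B, B0=None, B00=0.0):
    """Kron's loss formula: P_L = P^T B P + B0^T P + B00,
    with P the vector of generator outputs (MW) and B, B0, B00 the
    quadratic, linear, and constant loss coefficients."""
    P = np.asarray(P, dtype=float)
    loss = float(P @ np.asarray(B, dtype=float) @ P) + B00
    if B0 is not None:
        loss += float(np.asarray(B0, dtype=float) @ P)
    return loss
```

Because the B coefficients are derived for a nominal operating point, re-evaluating them as the load moves between dispatch intervals (as proposed above) keeps this quadratic approximation close to the NR load flow losses.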
NASA Technical Reports Server (NTRS)
Nieten, Joseph L.; Seraphine, Kathleen M.
1991-01-01
Procedural modeling systems, rule based modeling systems, and a method for converting a procedural model to a rule based model are described. Simulation models are used to represent real time engineering systems. A real time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language, so they must be enhanced with a reaction capability. Rule based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule based system can be generated by a knowledge acquisition tool or a source level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule based system. Neural models can provide the high capacity data manipulation required by the most complex real time models.
Nakagawa, Yoshiaki; Takemura, Tadamasa; Yoshihara, Hiroyuki; Nakagawa, Yoshinobu
2011-04-01
A hospital director must estimate the revenues and expenses not only in a hospital but also in each clinical division to determine the proper management strategy. A new prospective payment system based on the Diagnosis Procedure Combination (DPC/PPS) introduced in 2003 has made the attribution of revenues and expenses for each clinical department very complicated because of the intricate involvement between the overall or blanket component and a fee-for service (FFS). Few reports have so far presented a programmatic method for the calculation of medical costs and financial balance. A simple method has been devised, based on personnel cost, for calculating medical costs and financial balance. Using this method, one individual was able to complete the calculations for a hospital which contains 535 beds and 16 clinics, without using the central hospital computer system.
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
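The subset-of-rays idea can be sketched as a scaled Monte Carlo estimate of the full error sum: evaluate the residual on a random fraction of the rays and scale up by the sampling ratio. This is an illustration of the sampling step only, not the patented minimization method.

```python
import numpy as np

def approximate_error(residual_fn, n_rays, subset_frac=0.1, rng=None):
    """Estimate sum_i residual_fn(i) over all n_rays rays from a random
    subset, scaled by n/k. This is the cheap error term one can use
    inside each conjugate gradient step instead of the full-ray sum."""
    rng = rng or np.random.default_rng(0)
    k = max(1, int(subset_frac * n_rays))
    idx = rng.choice(n_rays, size=k, replace=False)
    return (n_rays / k) * sum(residual_fn(i) for i in idx)
```

The estimate is unbiased over the random subset choice, so the conjugate gradient direction search sees the right error on average at a tenth of the cost.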
Introducing GFWED: The Global Fire Weather Database
NASA Technical Reports Server (NTRS)
Field, R. D.; Spessa, A. C.; Aziz, N. A.; Camia, A.; Cantin, A.; Carr, R.; de Groot, W. J.; Dowdy, A. J.; Flannigan, M. D.; Manomaiphiboon, K.;
2015-01-01
The Canadian Forest Fire Weather Index (FWI) System is the most widely used fire danger rating system in the world. We have developed a global database of daily FWI System calculations, beginning in 1980, called the Global Fire WEather Database (GFWED), gridded to a spatial resolution of 0.5° latitude by 2/3° longitude. Input weather data were obtained from the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA), and from two different estimates of daily precipitation from rain gauges over land. FWI System Drought Code (DC) calculations from the gridded data sets were compared to calculations from individual weather station data for a representative set of 48 stations in North, Central and South America, Europe, Russia, Southeast Asia and Australia. Agreement between the gridded and station-based calculations tended to differ most at low latitudes for the strictly MERRA-based calculations. Strong biases could be seen in either direction: the MERRA-based DC over the Mato Grosso in Brazil reached unrealistically high values exceeding 1500 during the dry season, but was too low over Southeast Asia during the dry season. These biases are consistent with those previously identified in MERRA's precipitation, and they reinforce the need to consider alternative sources of precipitation data. GFWED can be used for analyzing historical relationships between fire weather and fire activity at continental and global scales, for identifying large-scale atmosphere-ocean controls on fire weather, and for calibrating FWI-based fire prediction models.
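As an illustration of one FWI System component, the Drought Code's dry-day update is a small daily recursion. The sketch below is simplified: the rain-adjustment phase is omitted, and the monthly day-length factor table is replaced by a single fixed value, which should be treated as an assumption.

```python
def drought_code_dry_day(dc_prev, temp_c, day_length_factor=1.4):
    """One rain-free day's Drought Code update (simplified):
    potential evapotranspiration V = 0.36*(T + 2.8) + Lf, floored at 0,
    then DC increases by V/2. Lf is normally taken from a monthly
    day-length table; a fixed value is assumed here."""
    v = max(0.0, 0.36 * (temp_c + 2.8) + day_length_factor)
    return dc_prev + 0.5 * v
```

Because the DC accumulates day after day, small daily precipitation biases in the driving data compound into the large seasonal DC biases described above.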
NASA Astrophysics Data System (ADS)
Ulutas, E.; Inan, A.; Annunziato, A.
2012-06-01
This study analyzes the response of the Global Disaster Alert and Coordination System (GDACS) in relation to a case study: the Kepulauan Mentawai earthquake and related tsunami, which occurred on 25 October 2010. The GDACS, developed by the European Commission Joint Research Centre, combines existing web-based disaster information management systems with the aim of alerting the international community to major disasters. The tsunami simulation system is an integral part of the GDACS. In more detail, the study aims to assess the tsunami hazard on the Mentawai and Sumatra coasts: the tsunami heights and arrival times have been estimated employing three propagation models based on long wave theory. The analysis was performed in three stages: (1) pre-calculated simulations using the tsunami scenario database for that region, used by the GDACS system to estimate the alert level; (2) near-real-time simulated tsunami forecasts, automatically performed by the GDACS system whenever a new earthquake is detected by the seismological data providers; and (3) post-event tsunami calculations using the GCMT (Global Centroid Moment Tensor) fault mechanism solution proposed by the US Geological Survey (USGS) for this event. The GDACS system estimates the alert level based on the first type of calculations and on that basis sends alert messages to its users; the second type of calculations is available within 30-40 min after the notification of the event but does not change the estimated alert level. The third type of calculations is performed to improve the initial estimations and to obtain a better understanding of the extent of the possible damage. The automatic alert level for the earthquake was set between Green and Orange Alert, which, in the logic of GDACS, means no or moderate need of international humanitarian assistance; however, the earthquake generated 3 to 9 m tsunami run-up along the southwestern coasts of the Pagai Islands, where 431 people died.
The post-event calculations indicated medium-high humanitarian impacts.
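The long wave theory underlying the propagation models gives a simple travel-speed estimate; a sketch (the constant-depth travel time is an idealization, not GDACS's bathymetry-resolved calculation):

```python
import math

def long_wave_speed(depth_m, g=9.81):
    """Shallow-water (long) wave phase speed c = sqrt(g*h), valid when
    the wavelength is much larger than the water depth h - the tsunami
    regime assumed by long-wave propagation models."""
    return math.sqrt(g * depth_m)

def travel_time_minutes(distance_km, depth_m):
    """Rough straight-path travel time over water of constant depth."""
    return distance_km * 1000.0 / long_wave_speed(depth_m) / 60.0
```

Over 4000 m deep ocean the wave moves at roughly 200 m/s (about 700 km/h), which is why near-source coasts like the Pagai Islands leave only minutes of warning time.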
The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grebe, A.; Leveling, A.; Lu, T.
The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay gamma-quanta by the residuals in the activated structures and scoring the prompt doses of these gamma-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and showed a good agreement. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.
The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose
NASA Astrophysics Data System (ADS)
Grebe, A.; Leveling, A.; Lu, T.; Mokhov, N.; Pronskikh, V.
2018-01-01
The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay γ-quanta by the residuals in the activated structures and scoring the prompt doses of these γ-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and against experimental data from the CERF facility at CERN, and FermiCORD showed reasonable agreement with these. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.
DEVELOPMENT OF A WATERSHED-BASED MERCURY POLLUTION CHARACTERIZATION SYSTEM
To investigate total mercury loadings to streams in a watershed, we have developed a watershed-based source quantification model, the Watershed Mercury Characterization System. The system uses grid-based GIS modeling technology to calculate total soil mercury concentrations and ...
Estimation of PV energy production based on satellite data
NASA Astrophysics Data System (ADS)
Mazurek, G.
2015-09-01
Photovoltaic (PV) technology is an attractive source of power for systems without a connection to the power grid. Because of seasonal variations of solar radiation, the design of such a power system requires careful analysis in order to provide the required reliability. In this paper we present the results of three-year measurements of an experimental PV system located in Poland and based on a polycrystalline silicon module. Irradiation values calculated from ground measurements have been compared with data from solar radiation databases derived from satellite observations. A good level of agreement between the two data sources has been shown, especially during summer. When satellite data from the same time period are available, yearly and monthly production of PV energy can be calculated with 2% and 5% accuracy, respectively. However, monthly production during winter appears to be overestimated, especially in January. The results of this work may be helpful in forecasting the performance of similar PV systems in Central Europe and allow more precise forecasts of PV system performance than those based only on tables of long-term averaged values.
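As a rough illustration of how irradiation data of this kind feed a design calculation, the standard engineering estimate multiplies irradiation by module area, module efficiency and a system performance ratio. All numbers below are illustrative assumptions, not values from the study:

```python
def pv_energy_kwh(irradiation_kwh_m2, area_m2, efficiency, performance_ratio):
    """Standard engineering estimate E = H * A * eta * PR."""
    return irradiation_kwh_m2 * area_m2 * efficiency * performance_ratio

# Assumed example: 100 kWh/m^2 of monthly irradiation on a 1.6 m^2 module
# with 15% module efficiency and a system performance ratio of 0.80.
print(round(pv_energy_kwh(100.0, 1.6, 0.15, 0.80), 1))  # -> 19.2
```

The performance ratio lumps together wiring, inverter and temperature losses; the 2-5% accuracy quoted in the abstract then hinges mainly on the accuracy of the irradiation input.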
Initial Assessment of a Rapid Method of Calculating CEV Environmental Heating
NASA Technical Reports Server (NTRS)
Pickney, John T.; Milliken, Andrew H.
2010-01-01
An innovative method for rapidly calculating spacecraft environmental absorbed heats in planetary orbit is described. The method reads a database of pre-calculated orbital absorbed heats and adjusts those heats for the desired orbit parameters. The approach differs from traditional Monte Carlo methods, which are orbit based with a planet-centered coordinate system. The database is based on a spacecraft-centered coordinate system in which the range of all possible sun and planet look angles is evaluated. In an example case, 37,044 orbit configurations were analyzed for average orbital heats on selected spacecraft surfaces. Calculation time was under 2 minutes, while a comparable Monte Carlo evaluation would have taken an estimated 26 hours.
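The database-lookup approach can be pictured as interpolation in a table of pre-computed orbit-average heats. The grid and heat values below are invented for illustration; the actual database spans the full range of sun and planet look angles rather than two orbit parameters:

```python
import numpy as np

betas = np.array([0.0, 30.0, 60.0, 90.0])   # solar beta angle, deg (assumed grid)
alts  = np.array([300.0, 500.0, 800.0])     # altitude, km (assumed grid)
# q[i, j]: pre-computed orbit-average absorbed heat (W/m^2), invented values
q = np.array([[310.0, 295.0, 280.0],
              [330.0, 315.0, 300.0],
              [360.0, 345.0, 330.0],
              [380.0, 365.0, 350.0]])

def lookup_heat(beta, alt):
    """Bilinear interpolation in the pre-computed table."""
    i = min(max(np.searchsorted(betas, beta) - 1, 0), len(betas) - 2)
    j = min(max(np.searchsorted(alts, alt) - 1, 0), len(alts) - 2)
    tb = (beta - betas[i]) / (betas[i + 1] - betas[i])
    ta = (alt - alts[j]) / (alts[j + 1] - alts[j])
    return ((1 - tb) * (1 - ta) * q[i, j] + tb * (1 - ta) * q[i + 1, j]
            + (1 - tb) * ta * q[i, j + 1] + tb * ta * q[i + 1, j + 1])

print(float(lookup_heat(45.0, 400.0)))
```

A table lookup of this kind costs microseconds per orbit case, which is the source of the minutes-versus-hours speedup quoted in the abstract.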
Project Echo: System Calculations
NASA Technical Reports Server (NTRS)
Ruthroff, Clyde L.; Jakes, William C., Jr.
1961-01-01
The primary experimental objective of Project Echo was the transmission of radio communications between points on the earth by reflection from the balloon satellite. This paper describes system calculations made in preparation for the experiment and their adaptation to the problem of interpreting the results. The calculations include path loss computations, expected audio signal-to-noise ratios, and received signal strength based on orbital parameters.
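Path loss for such a passive-reflector link follows the bistatic radar equation, with the balloon's radar cross section that of a large sphere. A sketch with values loosely in the range of the Echo experiment; the antenna gains here are assumptions for illustration, not figures from the paper:

```python
import math

def echo_received_power_dbw(pt_w, gt_db, gr_db, f_hz, r1_m, r2_m, radius_m):
    """Two-hop link budget via a passive spherical reflector
    (bistatic radar equation; sigma = pi*a^2 for a sphere large
    compared with the wavelength)."""
    lam = 3.0e8 / f_hz
    sigma = math.pi * radius_m ** 2
    gt, gr = 10 ** (gt_db / 10), 10 ** (gr_db / 10)
    pr = (pt_w * gt * gr * lam ** 2 * sigma) / ((4 * math.pi) ** 3
                                                * r1_m ** 2 * r2_m ** 2)
    return 10 * math.log10(pr)

# Illustrative values: 10 kW at 960 MHz, ~2000 km slant ranges on both
# hops, a 15 m balloon radius, and assumed 50 dB antenna gains.
print(round(echo_received_power_dbw(1.0e4, 50.0, 50.0, 960.0e6,
                                    2.0e6, 2.0e6, 15.0), 1))
```

The 1/(R1^2 R2^2) dependence, rather than the 1/R^2 of an active repeater, is why such links demand very large antennas and transmitter powers.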
NASA Astrophysics Data System (ADS)
Biryuk, V. V.; Tsapkova, A. B.; Larin, E. A.; Livshiz, M. Y.; Sheludko, L. P.
2018-01-01
A set of mathematical models for calculating the reliability indexes of structurally complex multifunctional combined installations in heat and power supply systems was developed. Reliability of energy supply is considered a required condition for the creation and operation of heat and power supply systems. The optimal value of the power supply system coefficient F is based on an economic assessment of the consumers' losses caused by the under-supply of electric power and the additional system expenses for the creation and operation of an emergency capacity reserve. Rationing of the reliability indexes (RI) of industrial heat supply is based on the concept of a technological safety margin of the production processes. The definition of rationed RI values for the heat supply of communal consumers is based on the air temperature level inside the heated premises. The complex allows solving a number of practical tasks for ensuring the reliability of heat supply for consumers. A probabilistic model is developed for calculating the reliability indexes of combined multipurpose heat and power plants in heat and power supply systems. The complex of models and calculation programs can be used to solve a wide range of specific tasks of optimizing the schemes and parameters of combined heat and power plants and systems, as well as determining the efficiency of various redundancy methods to ensure the specified reliability of power supply.
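A textbook-style example of one such probabilistic reliability index is the loss-of-load probability of a set of independent generating units; this is a deliberately simplified stand-in for the paper's models, with made-up unit data:

```python
from itertools import product

def loss_of_load_probability(units, load_mw):
    """Probability that available capacity falls below load, for
    independent units given as (capacity_MW, availability) pairs.
    Brute-force state enumeration, fine for small unit counts."""
    lolp = 0.0
    for states in product([0, 1], repeat=len(units)):
        p, cap = 1.0, 0.0
        for up, (c, a) in zip(states, units):
            p *= a if up else (1.0 - a)
            cap += c if up else 0.0
        if cap < load_mw:
            lolp += p
    return lolp

# Three hypothetical 50 MW units, each 95% available, serving a 100 MW load.
print(round(loss_of_load_probability([(50, 0.95)] * 3, 100.0), 6))
```

Comparing this index with and without an extra reserve unit is the kind of trade-off, against the cost of that reserve, that the abstract's economic assessment formalizes.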
NASA Astrophysics Data System (ADS)
Valdman, V. V.; Gridnev, S. O.
2017-10-01
The article examines the vital issues of measuring and calculating raw stock volumes in covered storehouses at mining and processing plants. The authors present two state-of-the-art solutions: (1) using a ground-based laser scanning system (the method is reasonably accurate and dependable, but costly and time consuming; it also requires stoppage of work in the storehouse); (2) using a fundamentally new computerized stocktaking system for mine surveying that calculates ore mineral volumes based on profile digital images. These images are obtained via vertical projection of a laser plane onto the surface of the stored raw materials.
Modeling the long-term evolution of space debris
Nikolaev, Sergei; De Vries, Willem H.; Henderson, John R.; Horsley, Matthew A.; Jiang, Ming; Levatin, Joanne L.; Olivier, Scot S.; Pertica, Alexander J.; Phillion, Donald W.; Springer, Harry K.
2017-03-07
A space object modeling system that models the evolution of space debris is provided. The modeling system simulates interaction of space objects at simulation times throughout a simulation period. The modeling system includes a propagator that calculates the position of each object at each simulation time based on orbital parameters. The modeling system also includes a collision detector that, for each pair of objects at each simulation time, performs a collision analysis. When the distance between objects satisfies a conjunction criterion, the modeling system calculates a local minimum distance between the pair of objects based on a curve fitting to identify a time of closest approach at the simulation times and calculating the position of the objects at the identified time. When the local minimum distance satisfies a collision criterion, the modeling system models the debris created by the collision of the pair of objects.
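The curve-fitting step described in the abstract can be sketched by fitting a parabola to sampled inter-object separations near a conjunction and taking its vertex as the time of closest approach. The synthetic separation data below are invented for illustration:

```python
import numpy as np

def time_of_closest_approach(times, distances):
    """Fit d(t) ~ a*t^2 + b*t + c to sampled separations and return
    (t_min, d_min) from the parabola's vertex."""
    a, b, c = np.polyfit(times, distances, 2)
    t_min = -b / (2.0 * a)
    d_min = np.polyval([a, b, c], t_min)
    return t_min, d_min

# Synthetic example: true separation d(t) = 2*(t - 5)^2 + 3 (km).
t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
d = 2.0 * (t - 5.0) ** 2 + 3.0
t_min, d_min = time_of_closest_approach(t, d)
print(round(float(t_min), 3), round(float(d_min), 3))  # -> 5.0 3.0
```

In the full system this refinement runs only for pairs that already satisfy the coarse conjunction criterion, which keeps the all-pairs screening cheap.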
Gas flow calculation method of a ramjet engine
NASA Astrophysics Data System (ADS)
Kostyushin, Kirill; Kagenov, Anuar; Eremin, Ivan; Zhiltsov, Konstantin; Shuvarikov, Vladimir
2017-11-01
In the present study, a calculation methodology for the gas dynamics equations of a ramjet engine is presented. The algorithm is based on Godunov's scheme. To implement the calculation algorithm, a data storage scheme is proposed that does not depend on mesh topology and allows the use of computational meshes with an arbitrary number of cell faces. An algorithm for building a block-structured grid is given. The calculation algorithm is implemented in the software package "FlashFlow". The software package is verified on calculations of simple air intake configurations and scramjet models.
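The core of a Godunov-type finite-volume scheme can be illustrated on the 1D linear advection equation, for which the exact Riemann solution reduces to upwinding. This is only the basic idea; the solver described in the abstract treats the full compressible gas-dynamics equations on block-structured meshes:

```python
import numpy as np

a = 1.0                       # advection speed (u_t + a*u_x = 0, a > 0)
nx, L, cfl = 100, 1.0, 0.9
dx = L / nx
dt = cfl * dx / a
x = (np.arange(nx) + 0.5) * dx
u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)    # square pulse, periodic domain

def godunov_step(u):
    # For a > 0 the Godunov interface flux reduces to the upwind value,
    # F_{i+1/2} = a*u_i, giving the conservative update below.
    return u - a * dt / dx * (u - np.roll(u, 1))

for _ in range(int(round(L / (a * dt)))):        # ~one trip around the domain
    u = godunov_step(u)
print(round(float(u.sum() * dx), 6))             # total "mass" is conserved  -> 0.2
```

The conservative flux-difference form is what carries over unchanged to unstructured meshes with arbitrary numbers of cell faces: each face contributes one flux to the two cells it separates.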
Ertl, P
1998-02-01
Easy to use, interactive, and platform-independent WWW-based tools are ideal for development of chemical applications. By using the newly emerging Web technologies such as Java applets and sophisticated scripting, it is possible to deliver powerful molecular processing capabilities directly to the desk of synthetic organic chemists. In Novartis Crop Protection in Basel, a Web-based molecular modelling system has been in use since 1995. In this article two new modules of this system are presented: a program for interactive calculation of important hydrophobic, electronic, and steric properties of organic substituents, and a module for substituent similarity searches enabling the identification of bioisosteric functional groups. Various possible applications of calculated substituent parameters are also discussed, including automatic design of molecules with the desired properties and creation of targeted virtual combinatorial libraries.
Neural computing thermal comfort index PMV for the indoor environment intelligent control system
NASA Astrophysics Data System (ADS)
Liu, Chang; Chen, Yifei
2013-03-01
Providing indoor thermal comfort and saving energy are the two main goals of an indoor environmental control system. An intelligent comfort control system combining intelligent control and minimum-power control strategies for the indoor environment is presented in this paper. In the system, the predicted mean vote (PMV) is chosen as the control goal, and corrected PMV formulas are used to optimize the indoor comfort level by considering six comfort-related variables. In addition, an RBF neural network based on a genetic algorithm is designed to calculate PMV, both for better performance and to overcome the nonlinearity of the PMV calculation. Formulas are given for calculating the expected output values from the input samples, and the RBF network model is trained on the input samples and the expected output values. The simulation results show that the design of the intelligent calculation method is valid. Moreover, the method achieves high precision, fast dynamic response and good overall system performance, and can be used in practice within the required calculation error.
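The idea of approximating an expensive nonlinear mapping such as PMV with a Gaussian RBF network can be sketched as follows: the training inputs serve as centres and the output weights come from solving one linear system. The target function here is a toy stand-in for the PMV equations, and no genetic-algorithm tuning of the network is performed:

```python
import numpy as np

g = np.linspace(-1.0, 1.0, 6)
X = np.array([(a, b) for a in g for b in g])      # 36 training inputs (2 vars)
y = np.sin(np.pi * X[:, 0]) * X[:, 1]             # toy target, stand-in for PMV
beta = 10.0                                       # Gaussian kernel width

def kernel(A, B):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-beta * d2)

w = np.linalg.solve(kernel(X, X), y)              # exact RBF interpolation

def rbf_predict(Xq):
    return kernel(Xq, X) @ w

# The network reproduces its training data essentially exactly.
print(float(np.max(np.abs(rbf_predict(X) - y))) < 1e-6)  # -> True
```

A real PMV surrogate would use all six comfort variables as inputs and select the centres and widths (e.g. by the genetic algorithm the paper describes) rather than interpolating every sample.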
High-precision positioning system of four-quadrant detector based on the database query
NASA Astrophysics Data System (ADS)
Zhang, Xin; Deng, Xiao-guo; Su, Xiu-qin; Zheng, Xiao-qiang
2015-02-01
The fine pointing mechanism of the Acquisition, Pointing and Tracking (APT) system in free-space laser communication usually uses a four-quadrant detector (QD) to point and track the laser beam accurately. The positioning precision of the QD is one of the key factors in the pointing accuracy of the APT system. A positioning system based on an FPGA and a DSP is designed in this paper, which realizes the A/D sampling, the positioning algorithm and the control of the fast steering mirror. Starting from the working principle of the QD, we analyze the positioning error of the spot center calculated by the universal algorithm when the spot energy obeys a Gaussian distribution. A database is built by calculation and simulation with MATLAB, in which the spot center calculated by the universal algorithm is mapped to the true center of the Gaussian beam; the database is stored in two E2PROM chips serving as the external memory of the DSP. The true center of the Gaussian beam is then looked up in the database on the basis of the spot center calculated by the universal algorithm in the DSP. The experimental results show that the positioning accuracy of this high-precision positioning system is much better than the positioning accuracy of the universal algorithm alone.
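The "universal algorithm" referred to above is the standard sum-and-difference estimate from the four quadrant signals. A minimal sketch (quadrants A-D numbered counter-clockwise from upper-right); the database correction for the Gaussian-spot nonlinearity described in the abstract is not reproduced here:

```python
def qd_position(a, b, c, d):
    """Standard four-quadrant-detector position estimate from the four
    quadrant signals; output is a normalized offset in [-1, 1]."""
    s = a + b + c + d
    x = ((a + d) - (b + c)) / s   # right half minus left half
    y = ((a + b) - (c + d)) / s   # top half minus bottom half
    return x, y

# A spot centered on the detector gives (0, 0); more light in the
# right-hand quadrants (A, D) shifts the estimate toward +x.
print(qd_position(1.0, 1.0, 1.0, 1.0))   # -> (0.0, 0.0)
print(qd_position(1.2, 0.8, 0.8, 1.2))
```

Because this estimate saturates and is nonlinear in the true offset for a Gaussian spot, a pre-computed lookup table mapping estimated to true center, as in the paper, can recover accuracy without a costly on-line model.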
Professional Growth & Support Spending Calculator
ERIC Educational Resources Information Center
Education Resource Strategies, 2013
2013-01-01
This "Professional Growth & Support Spending Calculator" helps school systems quantify all current spending aimed at improving teaching effectiveness. Part I provides worksheets to analyze total investment. Part II provides a system for evaluating investments based on purpose, target group, and delivery. In this Spending Calculator…
Cellular-based preemption system
NASA Technical Reports Server (NTRS)
Bachelder, Aaron D. (Inventor)
2011-01-01
A cellular-based preemption system that uses existing cellular infrastructure to transmit preemption related data to allow safe passage of emergency vehicles through one or more intersections. A cellular unit in an emergency vehicle is used to generate position reports that are transmitted to the one or more intersections during an emergency response. Based on this position data, the one or more intersections calculate an estimated time of arrival (ETA) of the emergency vehicle, and transmit preemption commands to traffic signals at the intersections based on the calculated ETA. Additional techniques may be used for refining the position reports, ETA calculations, and the like. Such techniques include, without limitation, statistical preemption, map-matching, dead-reckoning, augmented navigation, and/or preemption optimization techniques, all of which are described in further detail in the above-referenced patent applications.
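The core ETA computation can be sketched as distance over speed from the latest position report, here with an equirectangular distance approximation. This is a deliberately simplified illustration with made-up coordinates; the patented system additionally applies map-matching, dead-reckoning and statistical refinement:

```python
import math

def eta_seconds(vehicle_lat, vehicle_lon, speed_mps, int_lat, int_lon):
    """Straight-line ETA from a position report to an intersection,
    using an equirectangular small-distance approximation."""
    r = 6371000.0  # mean Earth radius, m
    dlat = math.radians(int_lat - vehicle_lat)
    dlon = math.radians(int_lon - vehicle_lon)
    mean_lat = math.radians((int_lat + vehicle_lat) / 2.0)
    dist = r * math.hypot(dlat, dlon * math.cos(mean_lat))
    return dist / speed_mps

# Hypothetical example: ~1.1 km short of the intersection at 20 m/s,
# giving an ETA of just under a minute.
print(round(eta_seconds(34.0500, -118.2500, 20.0, 34.0600, -118.2500), 1))
```

Straight-line ETAs systematically underestimate road distance, which is one reason the patent layers map-matching on top of the raw position reports.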
Comparative PV LCOE calculator | Photovoltaic Research | NREL
Use the Comparative Photovoltaic Levelized Cost of Energy Calculator (Comparative PV LCOE Calculator) to calculate the levelized cost of energy (LCOE) for photovoltaic (PV) systems, examine a technology's cost effect on LCOE to determine whether a proposed technology is cost-effective, and perform trade-off analysis.
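The quantity behind such a tool is the textbook LCOE: discounted lifetime cost divided by discounted lifetime energy. A minimal sketch with illustrative inputs, not NREL figures; the NREL calculator includes further terms (financing, taxes, replacements, reliability effects):

```python
def lcoe(capital_cost, annual_om, annual_kwh, discount_rate, lifetime_years,
         degradation=0.005):
    """Textbook LCOE in $/kWh: PV of costs / PV of energy, with simple
    annual output degradation."""
    cost, energy = capital_cost, 0.0
    for year in range(1, lifetime_years + 1):
        df = (1.0 + discount_rate) ** year
        cost += annual_om / df
        energy += annual_kwh * (1.0 - degradation) ** (year - 1) / df
    return cost / energy

# Assumed example: $12,000 installed system, $120/yr O&M, 9,000 kWh/yr
# initial output, 6% discount rate, 25-year life.
print(round(lcoe(12000.0, 120.0, 9000.0, 0.06, 25), 3))
```

Discounting the energy as well as the costs is the standard convention; it makes LCOE directly comparable to a per-kWh electricity price.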
Proposed software system for atomic-structure calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fischer, C.F.
1981-07-01
Atomic structure calculations are understood well enough that, at a routine level, an atomic structure software package can be developed. At the Atomic Physics Conference in Riga in 1978, L.V. Chernysheva and M.Y. Amusia of Leningrad University presented a paper on software for atomic calculations. Their system, called ATOM, is based on the Hartree-Fock approximation, and correlation is included within the framework of the RPAE. Energy levels, transition probabilities, photo-ionization cross sections, and electron scattering cross sections are some of the physical properties that can be evaluated by their system. The MCHF method, together with CI techniques and the Breit-Pauli approximation, also provides a sound theoretical basis for atomic structure calculations.
Lahham, Adnan; Alkbash, Jehad Abu; ALMasri, Hussien
2017-04-20
Theoretical assessments of power density in far-field conditions were used to evaluate the levels of environmental electromagnetic emissions from selected GSM900 macrocell base stations in the West Bank and Gaza Strip. Assessments were based on calculating the power densities using commercially available software (RF-Map from Telstra Research Laboratories, Australia). Calculations were carried out for single base stations with multiantenna systems and also for multiple base stations with multiantenna systems at 1.7 m above ground level. More than 100 power density levels were calculated at different locations around the investigated base stations. These locations include areas accessible to the general public (schools, parks, residential areas, streets and areas around kindergartens). The maximum calculated electromagnetic emission level resulting from a single site was 0.413 μW cm-2, found at Hizma town near Jerusalem. The average maximum power density from all single sites was 0.16 μW cm-2. The power density levels calculated in 100 locations distributed over the West Bank and Gaza were nearly normally distributed, with a peak value of ~0.01% of the International Commission on Non-Ionizing Radiation Protection's limit recommended for the general public. Comparison between the calculated and experimentally measured maximum power density from a base station showed that the calculations overestimate the actual measured power density by ~27%.
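The standard formula behind such far-field assessments is the on-axis power density S = P·G/(4πd²). A sketch with illustrative GSM900-like values, not the study's site data:

```python
import math

def power_density_uw_cm2(power_w, gain_dbi, distance_m):
    """On-axis far-field power density S = P*G/(4*pi*d^2),
    converted from W/m^2 to uW/cm^2 (1 W/m^2 = 100 uW/cm^2)."""
    g = 10 ** (gain_dbi / 10.0)
    s_w_m2 = power_w * g / (4.0 * math.pi * distance_m ** 2)
    return s_w_m2 * 100.0

# Assumed example: 20 W into a 17 dBi GSM900 panel antenna, evaluated
# 50 m away on the main beam axis.
print(round(power_density_uw_cm2(20.0, 17.0, 50.0), 3))
```

Real ground-level exposures are usually far below this on-axis value because the antenna's main beam passes well above nearby publicly accessible locations, which is consistent with the sub-μW cm-2 levels the study reports.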
Development of a Global Fire Weather Database
NASA Technical Reports Server (NTRS)
Field, R. D.; Spessa, A. C.; Aziz, N. A.; Camia, A.; Cantin, A.; Carr, R.; de Groot, W. J.; Dowdy, A. J.; Flannigan, M. D.; Manomaiphiboon, K.;
2015-01-01
The Canadian Forest Fire Weather Index (FWI) System is the most widely used fire danger rating system in the world. We have developed a global database of daily FWI System calculations, beginning in 1980, called the Global Fire WEather Database (GFWED), gridded to a spatial resolution of 0.5° latitude by 2/3° longitude. Input weather data were obtained from the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA), along with two different estimates of daily precipitation from rain gauges over land. FWI System Drought Code (DC) calculations from the gridded data sets were compared to calculations from individual weather station data for a representative set of 48 stations in North, Central and South America, Europe, Russia, Southeast Asia and Australia. Differences between the gridded and station-based calculations tended to be largest at low latitudes for strictly MERRA-based calculations, and strong biases could be seen in either direction: MERRA-based DC over the Mato Grosso in Brazil reached unrealistically high values exceeding DC = 1500 during the dry season, whereas it was too low over Southeast Asia during the dry season. These biases are consistent with those previously identified in MERRA's precipitation, and they reinforce the need to consider alternative sources of precipitation data. GFWED can be used for analyzing historical relationships between fire weather and fire activity at continental and global scales, for identifying large-scale atmosphere-ocean controls on fire weather, and for calibrating FWI-based fire prediction models.
Modulation transfer function of a fish-eye lens based on the sixth-order wave aberration theory.
Jia, Han; Lu, Lijun; Cao, Yiqing
2018-01-10
A calculation program of the modulation transfer function (MTF) of a fish-eye lens is developed with the autocorrelation method, in which the sixth-order wave aberration theory of ultra-wide-angle optical systems is used to simulate the wave aberration distribution at the exit pupil of the optical systems. The autocorrelation integral is processed with the Gauss-Legendre integral, and the magnification chromatic aberration is discussed to calculate polychromatic MTF. The MTF calculation results of a given example are then compared with those previously obtained based on the fourth-order wave aberration theory of plane-symmetrical optical systems and with those from the Zemax program. The study shows that MTF based on the sixth-order wave aberration theory has satisfactory calculation accuracy even for a fish-eye lens with a large acceptance aperture. And the impacts of different types of aberrations on the MTF of a fish-eye lens are analyzed. Finally, we apply the self-adaptive and normalized real-coded genetic algorithm and the MTF developed in the paper to optimize the Nikon F/2.8 fish-eye lens; consequently, the optimized system shows better MTF performances than those of the original design.
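A useful baseline against which aberration-based MTF calculations are judged is the diffraction-limited MTF of an aberration-free circular pupil in incoherent light. This is only that textbook baseline, not the paper's sixth-order wave-aberration method:

```python
import math

def diffraction_mtf(nu, wavelength_mm, f_number):
    """Diffraction-limited incoherent MTF of a circular pupil;
    nu in cycles/mm, cutoff nu_c = 1/(lambda * F#)."""
    nu_c = 1.0 / (wavelength_mm * f_number)
    if nu >= nu_c:
        return 0.0
    x = nu / nu_c
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

# F/2.8 lens at 550 nm: cutoff is about 649 cycles/mm.
print(round(diffraction_mtf(0.0, 550e-6, 2.8), 3))    # -> 1.0
print(round(diffraction_mtf(100.0, 550e-6, 2.8), 3))
```

Any aberrated MTF, such as the fish-eye results in the paper, must lie on or below this curve; the gap between the two quantifies how much the aberrations cost.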
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klüter, Sebastian, E-mail: sebastian.klueter@med.uni-heidelberg.de; Schubert, Kai; Lissner, Steffen
Purpose: The dosimetric verification of treatment plans in helical tomotherapy is usually carried out via verification measurements. In this study, a method for independent dose calculation of tomotherapy treatment plans is presented that uses a conventional treatment planning system with a pencil-kernel dose calculation algorithm to generate verification dose distributions based on patient CT data. Methods: A pencil beam algorithm that directly uses measured beam data was configured for dose calculation for a tomotherapy machine. Tomotherapy treatment plans were converted into a format readable by an in-house treatment planning system by assigning each projection to one static treatment field and shifting the calculation isocenter for each field in order to account for the couch movement. The modulation of the fluence for each projection is read out of the delivery sinogram, and with the kernel-based dose calculation, this information can directly be used for dose calculation without the need for decomposition of the sinogram. The sinogram values are only corrected for leaf output and leaf latency. Using the converted treatment plans, dose was recalculated with the independent treatment planning system. Multiple treatment plans ranging from simple static fields to real patient treatment plans were calculated using the new approach and either compared to actual measurements or to the 3D dose distribution calculated by the tomotherapy treatment planning system. In addition, dose-volume histograms were calculated for the patient plans. Results: Except for minor deviations at the maximum field size, the pencil beam dose calculation for static beams agreed with measurements in a water tank within 2%/2 mm. A mean deviation from point dose measurements in the cheese phantom of 0.89% ± 0.81% was found for unmodulated helical plans.
A mean voxel-based deviation of −0.67% ± 1.11% for all voxels in the respective high dose region (dose values >80%), and a mean local voxel-based deviation of −2.41% ± 0.75% for all voxels with dose values >20% were found for 11 modulated plans in the cheese phantom. Averaged over nine patient plans, the deviations amounted to −0.14% ± 1.97% (voxels >80%) and −0.95% ± 2.27% (>20%, local deviations). For a lung case, mean voxel-based deviations of more than 4% were found, while for all other patient plans, all mean voxel-based deviations were within ±2.4%. Conclusions: The presented method is suitable for independent dose calculation for helical tomotherapy within the known limitations of the pencil beam algorithm. It can serve as verification of the primary dose calculation and thereby reduce the need for time-consuming measurements. By using the patient anatomy and generating full 3D dose data, and combined with measurements of additional machine parameters, it can substantially contribute to overall patient safety.
NASA Astrophysics Data System (ADS)
Seo, Won-Gap; Matsuura, Hiroyuki; Tsukihashi, Fumitaka
2006-04-01
Recently, molecular dynamics (MD) simulation has been widely employed as a very useful method for the calculation of various physicochemical properties of molten slags and fluxes. In this study, MD simulation has been applied to calculate the structural, transport, and thermodynamic properties of the FeCl2, PbCl2, and ZnCl2 systems using the Born-Mayer-Huggins type pairwise potential with partial ionic charges. The interatomic potential parameters were determined by fitting the physicochemical properties of the iron chloride, lead chloride, and zinc chloride systems to experimentally measured results. The calculated structural, transport, and thermodynamic properties of pure FeCl2, PbCl2, and ZnCl2 showed the same tendency as observed results. In particular, the calculated structural properties of molten ZnCl2 and FeCl2 show the possibility of formation of polymeric network structures based on the ionic complexes ZnCl4(2-), ZnCl3(-), FeCl4(2-), and FeCl3(-), and these calculations successfully reproduced the measured results. The enthalpy, entropy, and Gibbs energy of mixing for the PbCl2-ZnCl2, FeCl2-PbCl2, and FeCl2-ZnCl2 systems were calculated based on the thermodynamic and structural parameters of each binary system obtained from MD simulation. The phase diagrams of the PbCl2-ZnCl2, FeCl2-PbCl2, and FeCl2-ZnCl2 systems estimated using the calculated Gibbs energies of mixing reproduced the experimentally measured ones reasonably well.
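The Born-Mayer-Huggins pairwise form used in such simulations combines a Coulomb term for the (partial) ionic charges, an exponential repulsion, and dispersion terms. A minimal sketch of the functional form; the parameter values in the example are placeholders, not the fitted chloride parameters from the study:

```python
import math

def bmh_potential_ev(r_nm, qi, qj, a_ev, rho_nm, sigma_nm, c6=0.0, c8=0.0):
    """Born-Mayer-Huggins pair potential, energies in eV, distances in nm:
    U(r) = qi*qj*e^2/(4*pi*eps0*r) + A*exp((sigma - r)/rho) - C/r^6 - D/r^8."""
    ke = 1.439964  # e^2/(4*pi*eps0) in eV*nm
    coulomb = ke * qi * qj / r_nm
    repulsion = a_ev * math.exp((sigma_nm - r_nm) / rho_nm)
    dispersion = -c6 / r_nm ** 6 - c8 / r_nm ** 8
    return coulomb + repulsion + dispersion

# Sanity check with repulsion and dispersion switched off: the value is
# the bare Coulomb attraction of opposite unit charges at 0.5 nm.
print(round(bmh_potential_ev(0.5, 1.0, -1.0, 0.0, 0.03, 0.1), 4))  # -> -2.8799
```

In practice the charges qi, qj are the fitted partial charges and the A, rho, sigma, C, D parameters are tuned, as in the paper, so that the simulated structure and thermodynamics match measurements.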
Patel, Ronak Y; Shah, Neethu; Jackson, Andrew R; Ghosh, Rajarshi; Pawliczek, Piotr; Paithankar, Sameer; Baker, Aaron; Riehle, Kevin; Chen, Hailin; Milosavljevic, Sofia; Bizon, Chris; Rynearson, Shawn; Nelson, Tristan; Jarvik, Gail P; Rehm, Heidi L; Harrison, Steven M; Azzariti, Danielle; Powell, Bradford; Babb, Larry; Plon, Sharon E; Milosavljevic, Aleksandar
2017-01-12
The success of the clinical use of sequencing based tests (from single gene to genomes) depends on the accuracy and consistency of variant interpretation. Aiming to improve the interpretation process through practice guidelines, the American College of Medical Genetics and Genomics (ACMG) and the Association for Molecular Pathology (AMP) have published standards and guidelines for the interpretation of sequence variants. However, manual application of the guidelines is tedious and prone to human error. Web-based tools and software systems may not only address this problem but also document reasoning and supporting evidence, thus enabling transparency of evidence-based reasoning and resolution of discordant interpretations. In this report, we describe the design, implementation, and initial testing of the Clinical Genome Resource (ClinGen) Pathogenicity Calculator, a configurable system and web service for the assessment of pathogenicity of Mendelian germline sequence variants. The system allows users to enter the applicable ACMG/AMP-style evidence tags for a specific allele with links to supporting data for each tag and generate guideline-based pathogenicity assessment for the allele. Through automation and comprehensive documentation of evidence codes, the system facilitates more accurate application of the ACMG/AMP guidelines, improves standardization in variant classification, and facilitates collaborative resolution of discordances. The rules of reasoning are configurable with gene-specific or disease-specific guideline variations (e.g. cardiomyopathy-specific frequency thresholds and functional assays). The software is modular, equipped with robust application program interfaces (APIs), and available under a free open source license and as a cloud-hosted web service, thus facilitating both stand-alone use and integration with existing variant curation and interpretation systems. 
The Pathogenicity Calculator is accessible at http://calculator.clinicalgenome.org. By enabling evidence-based reasoning about the pathogenicity of genetic variants and by documenting supporting evidence, the Calculator contributes toward the creation of a knowledge commons and more accurate interpretation of sequence variants in research and clinical care.
NASA Astrophysics Data System (ADS)
Patra Yosandha, Fiet; Adi, Kusworo; Edi Widodo, Catur
2017-06-01
In this research, the lung cancer target volume was calculated based on computed tomography (CT) thorax images. The target volume calculation was done for the purpose of treatment planning in radiotherapy. The calculation of the target volume covers the gross tumor volume (GTV), clinical target volume (CTV), planning target volume (PTV) and organs at risk (OAR). The target volume was calculated by adding the target areas on each slice and then multiplying the result by the slice thickness. The areas were calculated using digital image processing techniques with an active contour segmentation method; this segmentation provides the contours from which the target volume is obtained. The calculated volumes are 577.2 cm3 for GTV, 769.9 cm3 for CTV, 877.8 cm3 for PTV, 618.7 cm3 for OAR 1, 1,162 cm3 for OAR 2 right, and 1,597 cm3 for OAR 2 left. These values indicate that the image processing techniques developed can be implemented to calculate the lung cancer target volume based on CT thorax images. This research is expected to help doctors and medical physicists determine and contour the target volume quickly and precisely.
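The summation described above, total volume as the sum of per-slice contoured areas times the slice thickness, can be sketched directly. The areas below are made-up numbers, not the study's contours:

```python
def volume_cm3(slice_areas_cm2, slice_thickness_cm):
    """Target volume as (sum of contoured areas per slice) * slice thickness."""
    return sum(slice_areas_cm2) * slice_thickness_cm

# Five hypothetical slices of a contoured target at 0.5 cm slice spacing.
areas = [10.2, 14.8, 16.1, 13.5, 9.4]
print(round(volume_cm3(areas, 0.5), 1))  # -> 32.0
```

This is a piecewise-constant (slab) approximation; its error shrinks with slice thickness, which is one reason thin-slice CT matters for small targets.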
NASA Astrophysics Data System (ADS)
Kumar, Rohit; Puri, Rajeev K.
2018-03-01
Employing the quantum molecular dynamics (QMD) approach for nucleus-nucleus collisions, we test the predictive power of the energy-based clusterization algorithm, i.e., the simulated annealing clusterization algorithm (SACA), to describe the experimental data on charge distributions and various event-by-event correlations among fragments. The calculations are constrained to the Fermi-energy domain and/or mildly excited nuclear matter. Our detailed study spans different system masses and system-mass asymmetries of the colliding partners, and shows the importance of the energy-based clusterization algorithm for understanding multifragmentation. The present calculations are also compared with other available calculations, which use one-body models, statistical models, and/or hybrid models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Junjian; Wang, Jianhui; Liu, Hui
In this paper, nonlinear model reduction for power systems is performed by balancing the empirical controllability and observability covariances that are calculated around the operating region. Unlike existing model reduction methods, the external system does not need to be linearized but is directly dealt with as a nonlinear system. A transformation is found that balances the controllability and observability covariances in order to determine which states have the greatest contribution to the input-output behavior. The original system model is then reduced by Galerkin projection based on this transformation. The proposed method is tested and validated on a system comprised of a 16-machine 68-bus system and an IEEE 50-machine 145-bus system. The results show that by using the proposed model reduction the calculation efficiency can be greatly improved; at the same time, the obtained state trajectories are close to those for directly simulating the whole system or partitioning the system while not performing reduction. Compared with the balanced truncation method based on a linearized model, the proposed nonlinear model reduction method can guarantee higher accuracy and similar calculation efficiency. It is shown that the proposed method is not sensitive to the choice of the matrices for calculating the empirical covariances.
Real-Time Aircraft Engine-Life Monitoring
NASA Technical Reports Server (NTRS)
Klein, Richard
2014-01-01
This project developed an in-service life-monitoring system capable of predicting the remaining component and system life of aircraft engines. The embedded system provides real-time, in-flight monitoring of the engine's thrust, exhaust gas temperature, efficiency, and the speed and time of operation. Based upon these data, the life-estimation algorithm calculates the remaining life of the engine components and uses this result to predict the remaining life of the engine. The calculations are based on the statistical life distribution of the engine components and their relationship to load, speed, temperature, and time.
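A common textbook scheme for this kind of life-usage accounting is linear (Miner's-rule style) damage accumulation: time spent at each operating condition consumes a fraction of the component's life at that condition. This is a simplified stand-in, with invented numbers, for the statistical algorithm the abstract describes:

```python
def remaining_life_hours(hours_at_condition, life_at_condition):
    """Linear damage accumulation: damage = sum(t_i / L_i). The unused
    damage fraction is converted back to hours at the first (reference)
    condition."""
    damage = sum(t / life for t, life in zip(hours_at_condition, life_at_condition))
    if damage >= 1.0:
        return 0.0
    return (1.0 - damage) * life_at_condition[0]

# Hypothetical usage history: 500 h cruise (10,000 h life), 50 h max
# thrust (1,000 h life), 20 h hot-day takeoff (400 h life).
print(round(remaining_life_hours([500.0, 50.0, 20.0],
                                 [10000.0, 1000.0, 400.0])))  # -> 8500
```

Expressing remaining life in reference-condition hours makes the number directly comparable across engines with different mission mixes.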
Harmonics analysis of the ITER poloidal field converter based on a piecewise method
NASA Astrophysics Data System (ADS)
Xudong, WANG; Liuwei, XU; Peng, FU; Ji, LI; Yanan, WU
2017-12-01
Poloidal field (PF) converters provide controlled DC voltage and current to PF coils. The many harmonics generated by the PF converter flow into the power grid and seriously affect power systems and electric equipment. Due to the complexity of the system, the traditional integral operation in Fourier analysis is complicated and inaccurate. This paper presents a piecewise method to calculate the harmonics of the ITER PF converter. The relationship between the grid input current and the DC output current of the ITER PF converter is deduced, and the grid current is decomposed into a sum of simple functions. By calculating the harmonics of these simple functions with the piecewise method, the harmonics of the PF converter under different operation modes are obtained. To examine the validity of the method, a simulation model was established in Matlab/Simulink and an experiment was carried out on the ITER PF integration test platform. The calculated results are consistent with both simulation and experiment, showing that the piecewise method is correct and valid for calculating the system harmonics.
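The idea of decomposing the grid current into simple piecewise-defined segments and summing their harmonic contributions can be illustrated with an idealized six-pulse converter line current (a quasi-square wave); the waveform and midpoint integration used here are textbook simplifications, not the ITER PF converter model:

```python
import math

def harmonic_amplitude(n, i_d=1.0, steps=100000):
    """Amplitude of the n-th harmonic of an idealized six-pulse converter
    line current, integrated piecewise over one period (midpoint rule).
    The current is +i_d on (30 deg, 150 deg), -i_d on (210 deg, 330 deg),
    and zero elsewhere."""
    s = c = 0.0
    for k in range(steps):
        th = 2.0 * math.pi * (k + 0.5) / steps
        if math.pi / 6 < th < 5 * math.pi / 6:
            i = i_d
        elif 7 * math.pi / 6 < th < 11 * math.pi / 6:
            i = -i_d
        else:
            i = 0.0
        s += i * math.sin(n * th)  # accumulate sine (b_n) component
        c += i * math.cos(n * th)  # accumulate cosine (a_n) component
    d_th = 2.0 * math.pi / steps
    return math.hypot(s * d_th / math.pi, c * d_th / math.pi)
```

For this waveform the characteristic harmonics appear at orders 6k±1 with amplitudes falling as 1/n, while triplen harmonics vanish.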
The development of android - based children's nutritional status monitoring system
NASA Astrophysics Data System (ADS)
Suryanto, Agus; Paramita, Octavianti; Pribadi, Feddy Setio
2017-03-01
The calculation of BMI (Body Mass Index) is one method of determining a person's nutritional status, but the calculation is not yet widely understood or known by the public. In addition, it is important to track a child's nutritional development every month. Therefore, an Android-based application to determine the nutritional status of children was developed in this study, restricted to children aged 0-60 months. Because smartphones and tablet PCs running the Android operating system have developed rapidly and are owned and used by many people, the application can run on any such device. The aim of this study was to produce an Android app that calculates the nutritional status of children. This was a Research and Development (R & D) study, with a design approach using experimental studies. The steps included analyzing the Body Mass Index (BMI) formula and developing the initial application, including the design and construction of the interface, using the Eclipse software. The study resulted in an Android application that can be used to calculate the nutritional status of children aged 0-60 months. The MSE of the application's calculations against the BMI formula was 0, and the MAPE was 0%, showing that the application introduces no error relative to the BMI formula; smaller MSE and MAPE values indicate higher accuracy.
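The underlying index is simply weight divided by height squared; a minimal sketch follows (the cutoff values are illustrative adult-style thresholds, whereas classification for children aged 0-60 months actually requires age- and sex-specific growth-standard tables):

```python
def bmi(weight_kg, height_m):
    """Body Mass Index: weight (kg) divided by the square of height (m)."""
    return weight_kg / height_m ** 2

def classify(b):
    """Illustrative thresholds only; real child nutritional status uses
    age- and sex-specific reference tables, not fixed cutoffs."""
    if b < 18.5:
        return "underweight"
    if b < 25.0:
        return "normal"
    return "overweight"
```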
CELSS scenario analysis: Breakeven calculations
NASA Technical Reports Server (NTRS)
Mason, R. M.
1980-01-01
A model of the relative mass requirements of food production components in a controlled ecological life support system (CELSS) based on regenerative concepts is described. Included are a discussion of model scope, structure, and example calculations. Computer programs for cultivar and breakeven calculations are also included.
NASA Astrophysics Data System (ADS)
Wang, Lilie; Ding, George X.
2014-07-01
The out-of-field dose can be clinically important as it relates to the dose to organs at risk, although the accuracy of its calculation in commercial radiotherapy treatment planning systems (TPSs) receives less attention. This study evaluates the uncertainties of the out-of-field dose calculated with a model-based dose calculation algorithm, the anisotropic analytical algorithm (AAA), implemented in a commercial radiotherapy TPS, Varian Eclipse V10, by using Monte Carlo (MC) simulations in which the entire accelerator head, including the multi-leaf collimators, is modeled. The MC-calculated out-of-field doses were validated by experimental measurements. The dose calculations were performed in a water phantom as well as in CT-based patient geometries, and both static and highly modulated intensity-modulated radiation therapy (IMRT) fields were evaluated. We compared the calculated out-of-field doses, defined as lower than 5% of the prescription dose, in four H&N cancer patients and two lung cancer patients treated with volumetric modulated arc therapy (VMAT) and IMRT techniques. The results show that the discrepancy between out-of-field dose profiles calculated with AAA and with MC depends on depth and is generally less than 1% for comparisons in the water phantom and in CT-based patient dose calculations for static fields and IMRT. For VMAT plans, the difference between AAA and MC is <0.5%. The clinical impact of the error on the calculated organ doses was analyzed using dose-volume histograms. Although the AAA algorithm significantly underestimated the out-of-field doses, the clinical impact on the calculated organ doses in out-of-field regions may not be significant in practice due to the very low out-of-field doses relative to the target dose.
Physics-based statistical model and simulation method of RF propagation in urban environments
Pao, Hsueh-Yuan; Dvorak, Steven L.
2010-09-14
A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.
Greenhouse Gas Emissions Calculator for Grain and Biofuel Farming Systems
ERIC Educational Resources Information Center
McSwiney, Claire P.; Bohm, Sven; Grace, Peter R.; Robertson, G. Philip
2010-01-01
Opportunities for farmers to participate in greenhouse gas (GHG) credit markets require that growers, students, extension educators, offset aggregators, and other stakeholders understand the impact of agricultural practices on GHG emissions. The Farming Systems Greenhouse Gas Emissions Calculator, a web-based tool linked to the SOCRATES soil…
Earth Observation System Flight Dynamics System Covariance Realism
NASA Technical Reports Server (NTRS)
Zaidi, Waqar H.; Tracewell, David
2016-01-01
This presentation applies a covariance realism technique to the National Aeronautics and Space Administration (NASA) Earth Observation System (EOS) Aqua and Aura spacecraft based on inferential statistics. The technique consists of three parts: calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics.
USDA-ARS?s Scientific Manuscript database
Objective: To develop and evaluate a method for calculating the Healthy Eating Index-2005 (HEI-2005) with the widely used Nutrition Data System for Research (NDSR) based on the method developed for use with the US Department of Agriculture’s (USDA) Food and Nutrient Dietary Data System (FNDDS) and M...
Three-dimensional assessment of scoliosis based on ultrasound data
NASA Astrophysics Data System (ADS)
Zhang, Junhua; Li, Hongjian; Yu, Bo
2015-12-01
In this study, an approach was proposed to assess the 3D scoliotic deformity based on ultrasound data. The 3D spine model was reconstructed by using a freehand 3D ultrasound imaging system. The geometric torsion was then calculated from the reconstructed spine model. A thoracic spine phantom set at a given pose was used in the experiment. The geometric torsion of the spine phantom calculated from the freehand ultrasound imaging system was 0.041 mm-1 which was close to that calculated from the biplanar radiographs (0.025 mm-1). Therefore, ultrasound is a promising technique for the 3D assessment of scoliosis.
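Geometric torsion measures how sharply the reconstructed spinal curve twists out of its osculating plane; a minimal finite-difference sketch on a uniformly sampled 3D curve follows (the discretization scheme is an assumption for illustration, not the paper's method):

```python
import math

def torsion_at(points, i, h):
    """Torsion tau = ((r' x r'') . r''') / |r' x r''|^2 at sample i of a
    3D curve sampled at uniform parameter step h, via central differences."""
    def deriv(order):
        if order == 1:
            return [(points[i+1][k] - points[i-1][k]) / (2.0 * h) for k in range(3)]
        if order == 2:
            return [(points[i+1][k] - 2.0 * points[i][k] + points[i-1][k]) / h**2
                    for k in range(3)]
        return [(points[i+2][k] - 2.0 * points[i+1][k]
                 + 2.0 * points[i-1][k] - points[i-2][k]) / (2.0 * h**3)
                for k in range(3)]
    r1, r2, r3 = deriv(1), deriv(2), deriv(3)
    cx = [r1[1]*r2[2] - r1[2]*r2[1],   # r' x r''
          r1[2]*r2[0] - r1[0]*r2[2],
          r1[0]*r2[1] - r1[1]*r2[0]]
    return sum(cx[k] * r3[k] for k in range(3)) / sum(c * c for c in cx)

# Sanity check on a helix (a cos t, a sin t, b t): exact torsion b/(a^2 + b^2)
a, b, h = 2.0, 1.0, 0.01
helix = [(a * math.cos(j * h), a * math.sin(j * h), b * j * h) for j in range(-5, 6)]
tau = torsion_at(helix, 5, h)
```

The torsion formula is parametrization-invariant, so uniform sampling in any curve parameter suffices.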
Sub-pixel accuracy thickness calculation of poultry fillets from scattered laser profiles
NASA Astrophysics Data System (ADS)
Jing, Hansong; Chen, Xin; Tao, Yang; Zhu, Bin; Jin, Fenghua
2005-11-01
A laser range imaging system based on the triangulation method was designed and implemented for online high-resolution thickness calculation of poultry fillets. A laser pattern was projected onto the surface of the chicken fillet for calculation of the thickness of the meat. Because chicken fillets are a relatively loosely structured material, laser light easily penetrates the meat, and scattering occurs both at and under the surface. Laser light scattered under the surface is reflected back and further blurs the sharpness of the laser line. To accurately calculate the thickness of the object, this light transport has to be considered. In the system, the Bidirectional Scattering-Surface Reflectance Distribution Function (BSSRDF) was used to model the light transport and the light pattern reflected into the cameras. The BSSRDF gives the reflectance of a target as a function of illumination geometry and viewing geometry. Based on this function, an empirical method was developed and shown to accurately calculate the thickness of the object from a scattered laser profile. The laser range system is designed as a sub-system that complements an X-ray bone inspection system for non-invasive detection of hazardous materials in boneless poultry meat with irregular thickness.
Calculator Programming Engages Visual and Kinesthetic Learners
ERIC Educational Resources Information Center
Tabor, Catherine
2014-01-01
Inclusion and differentiation--hallmarks of the current educational system--require a paradigm shift in the way that educators run their classrooms. This article enumerates the need for techno-kinesthetic, visually based activities and offers an example of a calculator-based programming activity that addresses that need. After discussing the use…
An X-ray fluorescence spectrometer and its applications in materials studies
NASA Technical Reports Server (NTRS)
Singh, J. J.; Han, K. S.
1977-01-01
An X-ray fluorescence system based on a Co(57) gamma-ray source has been developed. The system was used to calculate the atomic percentages of iron implanted in titanium targets. Measured intensities of Fe (k-alpha + k-beta) and Ti (k-alpha + k-beta) X-rays from the Fe-Ti targets are in good agreement with the calculated values based on photoelectric cross sections of Ti and Fe for the Co(57) gamma rays.
COMPUTER PROGRAM FOR CALCULATING THE COST OF DRINKING WATER TREATMENT SYSTEMS
This FORTRAN computer program calculates the construction and operation/maintenance costs for 45 centralized unit treatment processes for water supply. The calculated costs are based on various design parameters and raw water quality. These cost data are applicable to small size ...
NASA Astrophysics Data System (ADS)
Bakhvalov, Yu A.; Grechikhin, V. V.; Yufanova, A. L.
2016-04-01
The article describes the calculation of magnetic fields in diagnostic problems of technical systems based on full-scale modeling experiments. Using the gridless method of fundamental solutions and its variants, in combination with grid methods (finite differences and finite elements), considerably reduces the dimensionality of the field-calculation task and hence the calculation time. The method is implemented using fictitious magnetic charges. Much attention is given to calculation accuracy: errors arise when the distance between the charges is chosen poorly. The authors propose using vector magnetic dipoles to improve the accuracy of the magnetic field calculations, and examples of this approach are given. The research results support recommending this approach, within the method of fundamental solutions, for full-scale modeling tests of technical systems.
Introducing the Global Fire WEather Database (GFWED)
NASA Astrophysics Data System (ADS)
Field, R. D.
2015-12-01
The Canadian Fire Weather Index (FWI) System is the most widely used fire danger rating system in the world. We have developed a global database of daily FWI System calculations beginning in 1980, called the Global Fire WEather Database (GFWED), gridded at a spatial resolution of 0.5° latitude by 2/3° longitude. Input weather data were obtained from the NASA Modern Era Retrospective-Analysis for Research (MERRA), together with two different estimates of daily precipitation from rain gauges over land. FWI System Drought Code (DC) calculations from the gridded datasets were compared to calculations from individual weather station data for a representative set of 48 stations in North, Central and South America, Europe, Russia, Southeast Asia and Australia. Gridded and station-based calculations differed most at low latitudes for the strictly MERRA-based calculations, with strong biases in either direction: during the dry season, MERRA-based DC over the Mato Grosso in Brazil reached unrealistically high values exceeding DC=1500, whereas it was too low over Southeast Asia. These biases are consistent with those previously identified in MERRA's precipitation and reinforce the need to consider alternative sources of precipitation data. GFWED is being used by researchers around the world for analyzing historical relationships between fire weather and fire activity at large scales, for identifying large-scale atmosphere-ocean controls on fire weather, and for calibrating FWI-based fire prediction models. These applications will be discussed. More information on GFWED can be found at http://data.giss.nasa.gov/impacts/gfwed/
A new radiation infrastructure for the Modular Earth Submodel System (MESSy, based on version 2.51)
NASA Astrophysics Data System (ADS)
Dietmüller, Simone; Jöckel, Patrick; Tost, Holger; Kunze, Markus; Gellhorn, Catrin; Brinkop, Sabine; Frömming, Christine; Ponater, Michael; Steil, Benedikt; Lauer, Axel; Hendricks, Johannes
2016-06-01
The Modular Earth Submodel System (MESSy) provides an interface to couple submodels to a base model via a highly flexible data management facility (Jöckel et al., 2010). Here we present four new radiation-related submodels: RAD, AEROPT, CLOUDOPT, and ORBIT. The submodel RAD (including the shortwave radiation scheme RAD_FUBRAD) simulates the radiative transfer, the submodel AEROPT calculates the aerosol optical properties, the submodel CLOUDOPT calculates the cloud optical properties, and the submodel ORBIT is responsible for Earth orbit calculations. These submodels are coupled via the standard MESSy infrastructure and are largely based on the original radiation scheme of the general circulation model ECHAM5, but are expanded with additional features. These features comprise, among others, user-friendly and flexibly controllable (by namelists) online radiative forcing calculations through multiple diagnostic calls of the radiation routines. With this, it is now possible to calculate the radiative forcing (instantaneous as well as stratosphere-adjusted) of various greenhouse gases simultaneously in a single simulation, as well as the radiative forcing of cloud perturbations. Examples of online radiative forcing calculations in the ECHAM/MESSy Atmospheric Chemistry (EMAC) model are presented.
Boeker, Peter; Leppert, Jan; Mysliwietz, Bodo; Lammers, Peter Schulze
2013-10-01
The Deans' switch is an effluent switching device based on controlling carrier gas flows instead of placing mechanical valves in the analytical flow path. This technique offers high inertness and wear-free operation. Recently, new monolithic microfluidic devices have become available in which the whole flow system is integrated into a small metal device with low thermal mass and leak-tight connections. In contrast to a mechanical valve-based system, a flow-controlled system is more difficult to calculate. Usually the Deans' switch is used to switch one inlet to one of two outlets by means of two auxiliary flows, but it can also deliver the GC effluent with a specific split ratio to both outlets. The calculation of the split ratio of the inlet flow to the two outlets is challenging because of the asymmetries of the flow resistances. This is especially the case if one of the outlets is a vacuum device, such as a mass spectrometer, and the other an atmospheric detector, e.g. a flame ionization detector (FID) or an olfactory (sniffing) port. Capillary flows in gas chromatography are calculated with the Hagen-Poiseuille equation for laminar, isothermal, compressible flow in circular tubes; the flow resistances in the new microfluidic devices have to be calculated with the corresponding equation for rectangular-cross-section microchannels. The Hagen-Poiseuille equation underestimates the flow to a vacuum outlet, and a corrected equation originating from the theory of rarefied flows is presented. The pressures and flows of a Deans' switch based chromatographic system are calculated by solving mass balances. A specific challenge is the antidiffusion restrictor between the two auxiliary gas lines of the Deans' switch; a full solution for the calculation of the Deans' switch including this restrictor is presented.
Results from validation measurements are in good agreement with the developed theory. A spreadsheet-based flow calculator is included in the Supporting Information.
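The basic building block of such a flow calculator is the Hagen-Poiseuille relation for compressible laminar flow in a circular capillary; a minimal sketch follows (the column dimensions and helium viscosity below are illustrative values, and the rarefied-flow and rectangular-channel corrections discussed in the abstract are omitted):

```python
import math

def capillary_flow_ml_min(d_m, length_m, p_in_pa, p_out_pa,
                          eta_pa_s, p_ref_pa=101325.0):
    """Volumetric flow, referred to pressure p_ref, for compressible laminar
    flow of an ideal gas through a circular capillary (Hagen-Poiseuille)."""
    q_m3_s = (math.pi * d_m**4 * (p_in_pa**2 - p_out_pa**2)
              / (256.0 * eta_pa_s * length_m * p_ref_pa))
    return q_m3_s * 1e6 * 60.0  # m^3/s -> mL/min

# Illustrative 30 m x 0.25 mm column, helium at ~2e-5 Pa s, 2 atm absolute inlet
flow = capillary_flow_ml_min(2.5e-4, 30.0, 2 * 101325.0, 101325.0, 2.0e-5)
```

With these assumed inputs the formula yields a plausible GC carrier flow of about 1.5 mL/min.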
ERIC Educational Resources Information Center
Barber, Betsy; Ball, Rhonda
This project description is designed to show how graphing calculators and calculator-based laboratories (CBLs) can be used to explore topics in physics and health sciences. The activities address such topics as respiration, heart rate, and the circulatory system. Teaching notes and calculator instructions are included as are blackline masters. (MM)
Pocket calculator for local fire-danger ratings
Richard J. Barney; William C. Fischer
1967-01-01
In 1964, Stockstad and Barney published tables that provided conversion factors for calculating local fire danger in the Intermountain area according to fuel types, locations, steepness of terrain, aspects, and times of day. These tables were based on the National Fire-Danger Rating System published earlier that year. This system was adopted for operational use in...
NASA Astrophysics Data System (ADS)
Kwak, G.; Kim, K.; Park, Y.
2014-02-01
As maritime boundary delimitation matters both for securing marine resources and for maritime security, interest in it is increasing worldwide because of the national benefits at stake. In Korea, too, the practical importance of maritime boundary delimitation with neighbouring countries is growing. The quantity of marine resources obtainable under a given boundary is an important factor in delimitation, so a method is needed to calculate the quantity of marine resources a country can obtain under a given maritime boundary. This study calculates the obtainable marine resources under the various maritime boundary scenarios asserted by the countries concerned. Its main aim is to develop a GIS-based automation system to support decision making in maritime boundary delimitation. To this end, one module was designed that uses spatial analysis techniques to automatically calculate the waters area gained or lost by each country under a given boundary, and another module that estimates the economic profits and losses for each country from the calculated waters area and marine-resource price information. By linking the two modules, an automatic economic profit-and-loss calculation system for GIS-based maritime boundary delimitation was implemented. The system automatically calculates the quantity of obtainable marine resources of a country for any maritime boundary added in the future, and is thus expected to support decision making by maritime boundary negotiators.
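The core of the profit-and-loss module reduces to computing polygon areas in a projected coordinate system and pricing them; a minimal sketch under assumed resource density and price inputs (the function names are hypothetical, not taken from the system described):

```python
def polygon_area_km2(vertices_km):
    """Shoelace formula for the area of a simple polygon whose vertices are
    given in projected coordinates (km)."""
    area2 = 0.0
    n = len(vertices_km)
    for i in range(n):
        x1, y1 = vertices_km[i]
        x2, y2 = vertices_km[(i + 1) % n]
        area2 += x1 * y2 - x2 * y1  # twice the signed area contribution
    return abs(area2) / 2.0

def resource_value(area_km2, density_per_km2, unit_price):
    """Economic value of the waters: area times resource density times price."""
    return area_km2 * density_per_km2 * unit_price
```

In practice the boundary polygons would come from the GIS layer after projection; the shoelace step is the same.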
Cost estimate of electricity produced by TPV
NASA Astrophysics Data System (ADS)
Palfinger, Günther; Bitnar, Bernd; Durisch, Wilhelm; Mayor, Jean-Claude; Grützmacher, Detlev; Gobrecht, Jens
2003-05-01
A crucial parameter for the market penetration of TPV is its electricity production cost. In this work a detailed cost estimate is performed for a Si-photocell-based TPV system, which was developed for electrically self-powered operation of a domestic heating system. The results are compared to a rough estimate of the cost of electricity for a projected GaSb-based system. For the calculation of the price of electricity, a lifetime of 20 years, an interest rate of 4.25% per year and maintenance costs of 1% of the investment are assumed. To determine the production cost of TPV systems with a power of 12-20 kW, the costs of the TPV components plus 100 EUR per kW(el,peak) for assembly and miscellaneous were estimated. Alternatively, the system cost for the GaSb system was derived from the cost of the photocells and the assumption that they account for 35% of the total system cost. The calculation was done for four TPV scenarios: a Si-based prototype system with existing technology (η_sys = 1.0%), leading to 3000 EUR per kW(el,peak); an optimized Si-based system using conventional, available technology (η_sys = 1.5%), leading to 900 EUR per kW(el,peak); a further improved system with future technology (η_sys = 5%), leading to 340 EUR per kW(el,peak); and a GaSb-based system (η_sys = 12.3% with recuperator), leading to 1900 EUR per kW(el,peak). From these, electricity prices of 6 to 25 EUR cents per kWh(el) (including gas at about 3.5 EUR cents per kWh) were calculated and compared with those of fuel cells (31 EUR cents per kWh) and gas engines (23 EUR cents per kWh).
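The financing assumptions in the abstract (20-year lifetime, 4.25% interest, maintenance at 1% of the investment, gas at about 3.5 EUR cents per kWh) translate into a levelized electricity cost via a standard annuity factor; the annual full-load hours below are an assumed input for illustration, not a figure from the abstract:

```python
def electricity_cost_cents_per_kwh(capex_eur_per_kw, full_load_hours,
                                   rate=0.0425, years=20,
                                   maint_frac=0.01, fuel_cents_per_kwh=3.5):
    """Levelized cost of electricity sketch: annualize the investment with an
    annuity factor, add maintenance, divide by annual energy, add fuel cost."""
    annuity = rate / (1.0 - (1.0 + rate) ** -years)
    annual_eur_per_kw = capex_eur_per_kw * (annuity + maint_frac)
    return annual_eur_per_kw / full_load_hours * 100.0 + fuel_cents_per_kwh

# Prototype scenario: 3000 EUR per kW at an assumed 5000 full-load hours/year
cost = electricity_cost_cents_per_kwh(3000.0, 5000.0)
```

Lower capital cost scenarios drop the price accordingly, consistent with the abstract's ordering of scenarios.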
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, Bin; Li, Yongbao; Liu, Bo
Purpose: The CyberKnife system was initially equipped with fixed circular cones for stereotactic radiosurgery, and two dose calculation algorithms, Ray-Tracing and Monte Carlo, are available in the supplied treatment planning system. A multileaf collimator system, capable of producing arbitrarily shaped treatment fields, was recently introduced in the latest generation of the system. The purpose of this study is to develop a model-based dose calculation algorithm to better handle the lateral scatter in irregularly shaped small fields for the CyberKnife system. Methods: A pencil beam dose calculation algorithm widely used in linac-based treatment planning systems was modified. The kernel parameters and intensity profile were systematically determined by fitting to the commissioning data. The model was tuned using only a subset of measured data (4 out of 12 cones) and applied to all fixed circular cones for evaluation. The root mean square (RMS) of the difference between the measured and calculated tissue-phantom ratios (TPRs) and off-center ratios (OCRs) was compared. Three cone size correction techniques were developed to better fit the OCRs in the penumbra region; these were further evaluated against the output factors (OFs). The pencil beam model was further validated against measurement data on the variable dodecagon-shaped Iris collimators and a half-beam-blocked field. Comparison with the Ray-Tracing and Monte Carlo methods was also performed on a lung SBRT case. Results: The RMS between the measured and calculated TPRs is 0.7% averaged over all cones, with the descending region at 0.5%. The RMSs of the OCR in the infield and outfield regions are both 0.5%. The distance to agreement (DTA) in the OCR penumbra region is 0.2 mm. All three cone size correction models achieve the same improvement in OCR agreement; the effective source shift model (SSM) is preferred due to its ability to predict more accurately the OF variations with the source-to-axis distance (SAD).
In the noncircular field validation, the pencil beam results agreed well with film measurements of both the Iris collimators and the half-beam-blocked field, and fared much better than the Ray-Tracing calculation. Conclusions: The authors have developed a pencil beam dose calculation model for the CyberKnife system. Its dose calculation accuracy is better than that of the standard linac-based system because the model parameters and geometry correction factors were specifically tuned to the CyberKnife system. The model handles the lateral scatter better and has the potential to be used for irregularly shaped fields; comprehensive validation on MLC-equipped systems is necessary for its clinical implementation. It is fast enough to be used during plan optimization.
NASA Astrophysics Data System (ADS)
Gallup, G. A.; Gerratt, J.
1985-09-01
The van der Waals energy between the two parts of a system is a very small fraction of the total electronic energy. In such cases, calculations have been based on perturbation theory, but this approach involves certain difficulties. For this reason, van der Waals energies have also been calculated directly from total energies; this method, however, has definite limitations on the size of systems that can be treated, and recently ab initio calculations have been combined with damped semiempirical long-range dispersion potentials to treat larger systems. In this procedure, large basis set superposition errors occur, which must be removed by the counterpoise method. The present investigation is concerned with an approach intermediate between the previously considered procedures. The first step in the new approach is a variational calculation based upon valence bond functions. The procedure also includes the optimization of excited orbitals and an approximation of atomic integrals and Hamiltonian matrix elements.
Tree value system: description and assumptions.
D.G. Briggs
1989-01-01
TREEVAL is a microcomputer model that calculates tree or stand values and volumes based on product prices, manufacturing costs, and predicted product recovery. It was designed as an aid in evaluating management regimes. TREEVAL calculates values in either of two ways, one based on optimized tree bucking using dynamic programming and one simulating the results of user-...
Kim, Myoung Soo
2012-08-01
The purpose of this cross-sectional study was to examine the current status of IT-based medication error prevention system construction and the relationships among system construction, medication error management climate, and perceptions of system use. The participants were 124 patient safety chief managers working at 124 hospitals with over 300 beds in Korea. The characteristics of the participants, the construction status and perception of the systems (electronic pharmacopoeia, electronic drug dosage calculation system, computer-based patient safety reporting and bar-code system), and the medication error management climate were measured. The data were collected between June and August 2011; descriptive statistics, partial Pearson correlation and MANCOVA were used for data analysis. Electronic pharmacopoeias had been constructed in 67.7% of participating hospitals, computer-based patient safety reporting systems in 50.8%, and electronic drug dosage calculation systems in 32.3%; bar-code systems showed the lowest construction rate, at 16.1% of Korean hospitals. Higher rates of construction of IT-based medication error prevention systems were associated with greater safety and a more positive error management climate. Supportive strategies for improving perceptions of IT-based system use would encourage further system construction, and a positive error management climate would be more easily promoted.
NASA Astrophysics Data System (ADS)
Sharan, A. M.; Sankar, S.; Sankar, T. S.
1982-08-01
A new approach for the calculation of response spectral density for a linear stationary random multidegree of freedom system is presented. The method is based on modifying the stochastic dynamic equations of the system by using a set of auxiliary variables. The response spectral density matrix obtained by using this new approach contains the spectral densities and the cross-spectral densities of the system generalized displacements and velocities. The new method requires significantly less computation time as compared to the conventional method for calculating response spectral densities. Two numerical examples are presented to compare quantitatively the computation time.
Dual-band plasmonic resonator based on Jerusalem cross-shaped nanoapertures
NASA Astrophysics Data System (ADS)
Cetin, Arif E.; Kaya, Sabri; Mertiri, Alket; Aslan, Ekin; Erramilli, Shyamsunder; Altug, Hatice; Turkmen, Mustafa
2015-06-01
In this paper, we both experimentally and numerically introduce a dual-resonant metamaterial based on subwavelength Jerusalem cross-shaped apertures. We numerically investigate the physical origin of the dual-resonant behavior, originating from the constituting aperture elements, through finite difference time domain calculations. Our numerical calculations show that at the dual-resonances, the aperture system supports large and easily accessible local electromagnetic fields. In order to experimentally realize the aperture system, we utilize a high-precision and lift-off free fabrication method based on electron-beam lithography. We also introduce a fine-tuning mechanism for controlling the dual-resonant spectral response through geometrical device parameters. Finally, we show the aperture system's highly advantageous far- and near-field characteristics through numerical calculations on refractive index sensitivity. The quantitative analyses on the availability of the local fields supported by the aperture system are employed to explain the grounds behind the sensitivity of each spectral feature within the dual-resonant behavior. Possessing dual-resonances with large and accessible electromagnetic fields, Jerusalem cross-shaped apertures can be highly advantageous for a wide range of applications demanding multiple spectral features with strong near-field characteristics.
Calculating the habitable zones of multiple star systems with a new interactive Web site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Müller, Tobias W. A.; Haghighipour, Nader
We have developed a comprehensive methodology and an interactive Web site for calculating the habitable zone (HZ) of multiple star systems. Using the concept of spectral weight factor, as introduced in our previous studies of the calculations of HZ in and around binary star systems, we calculate the contribution of each star (based on its spectral energy distribution) to the total flux received at the top of the atmosphere of an Earth-like planet, and use the models of the HZ of the Sun to determine the boundaries of the HZ in multiple star systems. Our interactive Web site for carrying out these calculations is publicly available at http://astro.twam.info/hz. We discuss the details of our methodology and present its application to some of the multiple star systems detected by the Kepler space telescope. We also present the instructions for using our interactive Web site, and demonstrate its capabilities by calculating the HZ for two interesting analytical solutions of the three-body problem.
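The flux-combination step behind such a calculator is straightforward: each star's luminosity is scaled by a spectral weight and an inverse-square distance factor, and the sum is compared against HZ flux limits; the weight values and limits below are illustrative assumptions, not those used by the Web site:

```python
def effective_flux(luminosities_lsun, distances_au, weights=None):
    """Combined flux at the planet, in units of the solar constant: each
    star contributes w * L / d^2 (L in solar luminosities, d in AU),
    with w a spectral weight factor (1.0 = Sun-like)."""
    if weights is None:
        weights = [1.0] * len(luminosities_lsun)
    return sum(w * lum / d ** 2
               for w, lum, d in zip(weights, luminosities_lsun, distances_au))

def in_habitable_zone(s_eff, s_inner=1.1, s_outer=0.36):
    """Illustrative HZ flux limits in solar constants (assumed values)."""
    return s_outer <= s_eff <= s_inner
```

A Sun-like star at 1 AU gives an effective flux of exactly 1 solar constant; adding a second star at 2 AU raises it by a quarter.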
MR Imaging Based Treatment Planning for Radiotherapy of Prostate Cancer
2008-02-01
Keywords: radiotherapy, MR-based treatment planning, dosimetry, Monte Carlo dose verification, prostate cancer, MRI-based DRRs. The AcQPlan treatment planning system (Version 5), which is capable of performing dose calculation on both CT and MRI, was used for the study, with four-field 3D conformal planning. Stated aims include prostate motion studies for 3DCRT and IMRT of prostate cancer, and investigating and improving the accuracy of MRI-based treatment planning dose calculation.
Density-matrix based determination of low-energy model Hamiltonians from ab initio wavefunctions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Changlani, Hitesh J.; Zheng, Huihuo; Wagner, Lucas K.
2015-09-14
We propose a way of obtaining effective low-energy Hubbard-like model Hamiltonians from ab initio quantum Monte Carlo calculations for molecular and extended systems. The Hamiltonian parameters are fit to best match the ab initio two-body density matrices and energies of the ground and excited states, and thus we refer to the method as ab initio density matrix based downfolding. For benzene (a finite system), we find good agreement with experimentally available energy gaps without using any experimental inputs. For graphene, a two-dimensional solid (extended system) with periodic boundary conditions, we find the effective on-site Hubbard U*/t to be 1.3 ± 0.2, comparable to a recent estimate based on the constrained random phase approximation. For molecules, such parameterizations enable calculation of excited states that are usually not accessible within ground state approaches. For solids, the effective Hamiltonian enables large-scale calculations using techniques designed for lattice models.
NASA Astrophysics Data System (ADS)
Nitschke, Naomi; Atkovska, Kalina; Hub, Jochen S.
2016-09-01
Molecular dynamics simulations are capable of predicting the permeability of lipid membranes for drug-like solutes, but the calculations have remained prohibitively expensive for high-throughput studies. Here, we analyze simple measures for accelerating potential of mean force (PMF) calculations of membrane permeation, namely, (i) using smaller simulation systems, (ii) simulating multiple solutes per system, and (iii) using shorter cutoffs for the Lennard-Jones interactions. We find that PMFs for membrane permeation are remarkably robust against alterations of such parameters, suggesting that accurate PMF calculations are possible at strongly reduced computational cost. In addition, we evaluated the influence of the definition of the membrane center of mass (COM), used to define the transmembrane reaction coordinate. Membrane-COM definitions based on all lipid atoms lead to artifacts due to undulations and, consequently, to PMFs dependent on membrane size. In contrast, COM definitions based on a cylinder around the solute lead to size-independent PMFs, down to systems of only 16 lipids per monolayer. In summary, compared to popular setups that simulate a single solute in a membrane of 128 lipids with a Lennard-Jones cutoff of 1.2 nm, the measures applied here yield a speedup in sampling by a factor of ~40, without reducing the accuracy of the calculated PMF.
NASA Astrophysics Data System (ADS)
Zhu, Jun
Ru and Pt are candidate alloying additions for improving the high-temperature properties of Ni-base superalloys. A thermodynamic description of the Ni-Al-Cr-Ru-Pt system, serving as an essential knowledge base for better alloy design and processing control, was developed in the present study by means of thermodynamic modeling coupled with experimental investigations of phase equilibria. To deal with the order/disorder transition occurring in Ni-base superalloys, a physically sound model, the Cluster/Site Approximation (CSA), was used to describe the fcc phases. The CSA offers computational advantages, without loss of accuracy, over the Cluster Variation Method (CVM) in the calculation of multicomponent phase diagrams. It has been successfully applied to fcc phases in calculating the technologically important Ni-Al-Cr phase diagrams. Our effort in this study focused on the two key ternary systems: Ni-Al-Ru and Ni-Al-Pt. The CSA-calculated Ni-Al-Ru ternary phase diagrams are in good agreement with the experimental results in the literature and from the current study. A thermodynamic description of the quaternary Ni-Al-Cr-Ru system was obtained based on the descriptions of the lower-order systems, and the calculated results agree with experimental data available in the literature and in the current study. The Ni-Al-Pt system was thermodynamically modeled based on the limited experimental data available in the literature and obtained from the current study. With the help of the preliminary description, a number of alloy compositions were selected for further investigation. The information obtained was used to improve the current modeling. A thermodynamic description of the Ni-Al-Cr-Pt quaternary was then obtained via extrapolation from its constituent lower-order systems. The thermodynamic description for Ni-base superalloys containing Al, Cr, Ru, and Pt was obtained via extrapolation. It is believed to be reliable and useful for guiding alloy design and further experimental investigation.
Multidomain approach for calculating compressible flows
NASA Technical Reports Server (NTRS)
Cambier, L.; Chazzi, W.; Veuillot, J. P.; Viviand, H.
1982-01-01
A multidomain approach for calculating compressible flows by using unsteady or pseudo-unsteady methods is presented. This approach is based on a general technique of connecting together two domains in which hyperbolic systems (that may differ) are solved with the aid of compatibility relations associated with these systems. Some examples of this approach's application to calculating transonic flows in ideal fluids are shown, particularly the adjustment of shock waves. The approach is then applied to treating a shock/boundary layer interaction problem in a transonic channel.
Bench-Scale Silicone Process for Low-Cost CO2 Capture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vipperla, Ravikumar; Yee, Michael; Steele, Ray
This report presents system and economic analysis for a carbon capture unit which uses an amino-silicone solvent for CO2 capture and sequestration (CCS) in a pulverized coal (PC) boiler. The amino-silicone solvent is based on GAP-1 with tri-ethylene glycol (TEG) as a co-solvent. The report also shows results for a CCS unit based on a conventional approach using monoethanolamine (MEA). Models were developed for both processes and used to calculate mass and energy balances. Capital costs and the energy penalty were calculated for both systems, as well as the increase in the cost of electricity. The amino-silicone-solvent-based system demonstrates significant advantages compared to the MEA system.
NASA Astrophysics Data System (ADS)
Kuang, Ye; Zhao, Chun Sheng; Zhao, Gang; Tao, Jiang Chuan; Xu, Wanyun; Ma, Nan; Bian, Yu Xuan
2018-05-01
Water condensed on ambient aerosol particles plays significant roles in the atmospheric environment, atmospheric chemistry, and climate. Until now, no instruments have been available for real-time monitoring of ambient aerosol liquid water contents (ALWCs). In this paper, a novel method is proposed to calculate ambient ALWC based on measurements of a three-wavelength humidified nephelometer system, which measures aerosol light scattering coefficients and backscattering coefficients at three wavelengths under dry state and different relative humidity (RH) conditions, providing measurements of the light scattering enhancement factor f(RH). The proposed ALWC calculation method includes two steps. The first step is the estimation of the dry-state total volume concentration of ambient aerosol particles, Va(dry), with a machine learning method called the random forest model, based on measurements of the dry nephelometer. The estimated Va(dry) agrees well with the measured one. The second step is the estimation of the volume growth factor Vg(RH) of ambient aerosol particles due to water uptake, using f(RH) and the Ångström exponent. The ALWC is calculated from the estimated Va(dry) and Vg(RH). To validate the new method, the ambient ALWC calculated from measurements of the humidified nephelometer system during the Gucheng campaign was compared with ambient ALWC calculated from the ISORROPIA thermodynamic model using aerosol chemistry data. A good agreement was achieved, with a slope and intercept of 1.14 and -8.6 µm3 cm-3 (r2 = 0.92), respectively. The advantage of this new method is that the ambient ALWC can be obtained solely from measurements of a three-wavelength humidified nephelometer system, facilitating real-time monitoring of ambient ALWC and promoting the study of aerosol liquid water and its role in atmospheric chemistry, secondary aerosol formation, and climate change.
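The final combination step is straightforward: once Va(dry) and Vg(RH) are estimated, the liquid water volume is the grown volume minus the dry volume. A minimal sketch, assuming ALWC = Va(dry)·(Vg(RH) − 1), which follows directly from the definition of the volume growth factor:

```python
def alwc(va_dry, vg_rh):
    """Aerosol liquid water content, in the same units as va_dry
    (e.g. um^3 cm^-3): the water volume is the volume gained on
    humidification beyond the dry particle volume."""
    return va_dry * (vg_rh - 1.0)

# A dry volume of 20 um^3 cm^-3 that doubles in volume at high RH:
water = alwc(20.0, 2.0)  # -> 20.0 um^3 cm^-3 of liquid water
```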
A holistic calibration method with iterative distortion compensation for stereo deflectometry
NASA Astrophysics Data System (ADS)
Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian
2018-07-01
This paper presents a novel holistic calibration method for a stereo deflectometry system to improve the system measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface. The calculation accuracy of the normal data is strongly influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error into the system due to an inaccurate imaging model and distortion elimination. The proposed calibration method compensates for system distortion with an iterative algorithm instead of the conventional mathematical distortion model. Initial values of the system parameters are calculated from the fringe patterns displayed on the system's LCD screen through reflection off a markless flat mirror. An iterative algorithm is proposed to compensate for system distortion and to optimize the camera imaging parameters and the system geometrical relation parameters based on a cost function. Both simulation work and experimental results show that the proposed calibration method can significantly improve the calibration and measurement accuracy of a stereo deflectometry system. The PV (peak-to-valley) measurement error for a flat mirror can be reduced from 282 nm, obtained with the conventional calibration approach, to 69.7 nm by applying the proposed method.
Holmes, Lisa; Landsverk, John; Ward, Harriet; Rolls-Reutz, Jennifer; Saldana, Lisa; Wulczyn, Fred; Chamberlain, Patricia
2014-04-01
Estimating costs in child welfare services is critical as new service models are incorporated into routine practice. This paper describes a unit costing estimation system developed in England (the cost calculator), together with a pilot test of its utility in the United States, where unit costs are routinely available for health services but not for child welfare services. The cost calculator approach uses a unified conceptual model that focuses on eight core child welfare processes. Comparison of these core processes in England and in four counties in the United States suggests that the underlying child welfare processes generated from England were perceived as very similar by child welfare staff in California county systems, with some exceptions in the review and legal processes. Overall, the adaptation of the cost calculator for use in United States child welfare systems appears promising. The paper also compares the cost calculator approach to the workload approach widely used in the United States and concludes that there are distinct differences between the two approaches, with some possible advantages to the cost calculator approach, especially for estimating child welfare costs as evidence-based interventions are incorporated into routine practice.
Cullings, Harry M
2012-03-01
The Radiation Effects Research Foundation (RERF) uses a dosimetry system to calculate radiation doses received by the Japanese atomic bomb survivors based on their reported location and shielding at the time of exposure. The current system, DS02, completed in 2003, calculates detailed doses to 15 particular organs of the body from neutrons and gamma rays, using new source terms and transport calculations as well as some other improvements in the calculation of terrain and structural shielding, but continues to use methods from an older system, DS86, to account for body self-shielding. Although recent developments in models of the human body from medical imaging, along with contemporary computer speed and software, allow for improvement of the calculated organ doses, before undertaking changes to the organ dose calculations, it is important to evaluate the improvements that can be made and their potential contribution to RERF's research. The analysis provided here suggests that the most important improvements can be made by providing calculations for more organs or tissues and by providing a larger series of age- and sex-specific models of the human body from birth to adulthood, as well as fetal models.
The Mayak Worker Dosimetry System (MWDS-2013): Implementation of the Dose Calculations.
Zhdanov, A; Vostrotin, V; Efimov, A; Birchall, A; Puncher, M
2016-07-15
The calculation of internal doses for the Mayak Worker Dosimetry System (MWDS-2013) involved extensive computational resources due to the complexity and sheer number of calculations required. The required output consisted of a set of 1000 hyper-realizations: each hyper-realization consists of a set (1 for each worker) of probability distributions of organ doses. This report describes the hardware components and computational approaches required to make the calculation tractable. Together with the software, this system is referred to here as the 'PANDORA system'. It is based on a commercial SQL server database in a series of six work stations. A complete run of the entire Mayak worker cohort entailed a huge amount of calculations in PANDORA and due to the relatively slow speed of writing the data into the SQL server, each run took about 47 days. Quality control was monitored by comparing doses calculated in PANDORA with those in a specially modified version of the commercial software 'IMBA Professional Plus'. Suggestions are also made for increasing calculation and storage efficiency for future dosimetry calculations using PANDORA.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM BANK HOLDING COMPANIES AND CHANGE IN BANK CONTROL... Supervisors' Committee) and endorsed by the Group of Ten Central Bank Governors. The framework is described in...-weighted assets, calculate market risk equivalent assets, and calculate risk-based capital ratios adjusted...
Code of Federal Regulations, 2012 CFR
2012-01-01
...) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM BANK HOLDING COMPANIES AND CHANGE IN BANK CONTROL... Supervisors' Committee) and endorsed by the Group of Ten Central Bank Governors. The framework is described in...-weighted assets, calculate market risk equivalent assets, and calculate risk-based capital ratios adjusted...
Code of Federal Regulations, 2010 CFR
2010-01-01
...) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM BANK HOLDING COMPANIES AND CHANGE IN BANK CONTROL... Supervisors' Committee) and endorsed by the Group of Ten Central Bank Governors. The framework is described in...-weighted assets, calculate market risk equivalent assets, and calculate risk-based capital ratios adjusted...
Muto, Hiroshi; Tani, Yuji; Suzuki, Shigemasa; Yokooka, Yuki; Abe, Tamotsu; Sase, Yuji; Terashita, Takayoshi; Ogasawara, Katsuhiko
2011-09-30
Since the shift from a radiographic film-based system to a filmless system, the change in radiographic examination costs and cost structure has remained undetermined. The activity-based costing (ABC) method measures the cost and performance of activities, resources, and cost objects. The purpose of this study is to identify the cost structure of a radiographic examination, comparing a filmless system to a film-based system using the ABC method. We calculated the costs of radiographic examinations for both a filmless and a film-based system, and assessed the costs or cost components by simulating radiographic examinations in a health clinic. The cost objects of the radiographic examinations included lumbar (six views), knee (three views), wrist (two views), and other. Indirect costs were allocated to cost objects using the ABC method. The costs of a radiographic examination using a filmless system are as follows: lumbar, 2,085 yen; knee, 1,599 yen; wrist, 1,165 yen; and other, 1,641 yen. The costs for a film-based system are: lumbar, 3,407 yen; knee, 2,257 yen; wrist, 1,602 yen; and other, 2,521 yen. The primary activities were "calling patient," "explanation of scan," "take photographs," and "aftercare" for both filmless and film-based systems. The cost of these activities represented 36.0% of the total cost for the filmless system and 23.6% for the film-based system. The costs of radiographic examinations using a filmless system and a film-based system were calculated using the ABC method. Our results provide clear evidence that the filmless system is more effective than the film-based system in providing greater-value services directly to patients.
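The core ABC allocation step described above (indirect costs distributed to cost objects via activity drivers) can be sketched as follows. The activity costs and driver shares are hypothetical placeholders, not figures from the study:

```python
def abc_allocate(indirect_costs, drivers):
    """Activity-based costing allocation.
    indirect_costs: {activity: total cost of that activity}
    drivers: {activity: {cost_object: share of the driver}}, where each
    activity's shares sum to 1. Returns {cost_object: allocated cost}."""
    out = {}
    for activity, cost in indirect_costs.items():
        for obj, share in drivers[activity].items():
            out[obj] = out.get(obj, 0.0) + cost * share
    return out

# Hypothetical numbers for two activities and two examination types:
costs = {"calling patient": 300.0, "take photographs": 700.0}
drivers = {
    "calling patient": {"lumbar": 0.5, "knee": 0.5},
    "take photographs": {"lumbar": 0.6, "knee": 0.4},
}
alloc = abc_allocate(costs, drivers)  # ~{'lumbar': 570.0, 'knee': 430.0}
```

The total allocated cost always equals the total indirect cost, which is the accounting identity ABC preserves while redistributing overhead by activity consumption.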
Research on capability of detecting ballistic missile by near space infrared system
NASA Astrophysics Data System (ADS)
Lu, Li; Sheng, Wen; Jiang, Wei; Jiang, Feng
2018-01-01
Infrared detection of ballistic missiles from a near-space platform can effectively compensate for the high cost of traditional early-warning satellites and the earth-curvature limitation of ground-based early-warning radar. In terms of target detection capability, the conventional contrast-performance formula for detection range ignores the background emissivity in the calculation process and is valid only for monochromatic light; an improved contrast-performance formula for the detection range is therefore proposed. The parameters of the near-space infrared imaging system are introduced, and the expression of the contrast-based detection range formula for target detection from a near-space platform is derived. The detection ranges of the near-space infrared system for the boost-phase ballistic missile skin, tail nozzle, and tail flame are calculated. The simulation results show that the near-space infrared system detects tail-flame radiation most effectively.
Favazza, Christopher P; Fetterly, Kenneth A; Hangiandreou, Nicholas J; Leng, Shuai; Schueler, Beth A
2015-01-01
Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain image quality metrics, such as contrast-to-noise ratio and spatial resolution, or by restricted access to the data needed to calculate Fourier-domain measurements, such as modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and are able to comprehensively evaluate medical-imaging systems. We present a spatial-domain-based channelized Hotelling observer model to calculate the detectability index (DI) of disks of different sizes and compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks.
Software-Based Visual Loan Calculator For Banking Industry
NASA Astrophysics Data System (ADS)
Isizoh, A. N.; Anazia, A. E.; Okide, S. O.; Onyeyili, T. I.; Okwaraoka, C. A. P.
2012-03-01
A loan calculator for the banking industry is a necessity in the modern banking system, which employs many design techniques for security reasons. This paper thus presents the software-based design and implementation of a visual loan calculator for the banking industry using Visual Basic .NET (VB.NET). The fundamental approach is to develop a Graphical User Interface (GUI) using VB.NET operating tools, and then to develop a working program which calculates the interest on any loan obtained. The VB.NET program was written and implemented, and the software proved satisfactory.
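The interest arithmetic such a GUI wraps is standard; a minimal sketch is shown in Python rather than VB.NET for brevity (the abstract does not specify which interest formula the tool uses, so both a flat-rate and an amortizing variant are illustrated as assumptions):

```python
def simple_interest(principal, annual_rate, years):
    """Interest accrued at a flat (simple) annual rate."""
    return principal * annual_rate * years

def monthly_payment(principal, annual_rate, years):
    """Level monthly payment on a fully amortizing loan."""
    r = annual_rate / 12.0
    n = years * 12
    if r == 0:
        return principal / n  # zero-rate loan: principal spread evenly
    return principal * r / (1.0 - (1.0 + r) ** -n)

interest = simple_interest(10_000, 0.05, 2)  # ~1000.0 over two years
```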
Detection and quantification system for monitoring instruments
Dzenitis, John M [Danville, CA; Hertzog, Claudia K [Houston, TX; Makarewicz, Anthony J [Livermore, CA; Henderer, Bruce D [Livermore, CA; Riot, Vincent J [Oakland, CA
2008-08-12
A method of detecting real events by obtaining a set of recent signal results, calculating measures of the noise or variation based on the set of recent signal results, calculating an expected baseline value based on the set of recent signal results, determining sample deviation, calculating an allowable deviation by multiplying the sample deviation by a threshold factor, setting an alarm threshold from the baseline value plus or minus the allowable deviation, and determining whether the signal results exceed the alarm threshold.
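The patented steps map naturally onto a short routine. This is an illustrative reading of the claim language, with a hypothetical threshold factor k, not the inventors' implementation:

```python
def alarm_check(recent, new_value, k=3.0):
    """Flag new_value as a candidate real event if it exceeds a baseline
    estimated from recent signal results by more than k sample deviations."""
    n = len(recent)
    baseline = sum(recent) / n                           # expected baseline value
    var = sum((x - baseline) ** 2 for x in recent) / (n - 1)
    sample_dev = var ** 0.5                              # measure of noise/variation
    allowable = k * sample_dev                           # allowable deviation
    upper, lower = baseline + allowable, baseline - allowable
    return new_value > upper or new_value < lower

history = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0]
normal = alarm_check(history, 10.1)   # within the threshold band
spike = alarm_check(history, 12.0)    # well outside the threshold band
```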
Butt, Muhammad Arif; Akram, Muhammad
2016-01-01
We present a new intuitionistic fuzzy rule-based decision-making system, based on intuitionistic fuzzy sets, for the process scheduler of a batch operating system. Our proposed intuitionistic fuzzy scheduling algorithm takes as input the nice value and burst time of all available processes in the ready queue, intuitionistically fuzzifies the input values, triggers the appropriate rules of our intuitionistic fuzzy inference engine, and finally calculates the dynamic priority (dp) of all the processes in the ready queue. Once the dp of every process has been calculated, the ready queue is sorted in decreasing order of dp. The process with the maximum dp value is sent to the central processing unit for execution. Finally, we show the complete working of our algorithm on two different data sets and give comparisons with some standard non-preemptive process schedulers.
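The scheduling loop around the inference engine can be sketched as below. The `dynamic_priority` function here is a crisp toy stand-in (favoring low nice values and short bursts, with hypothetical normalization constants); in the paper, dp comes from intuitionistic fuzzification and a rule base, not this formula:

```python
def dynamic_priority(nice, burst, max_nice=19, max_burst=100):
    """Toy stand-in for the intuitionistic fuzzy inference engine:
    lower nice and shorter burst yield a higher dp in [0, 1]."""
    return 0.5 * (1 - nice / max_nice) + 0.5 * (1 - burst / max_burst)

def schedule(ready_queue):
    """Sort the ready queue in decreasing order of dp; the head
    process is dispatched to the CPU."""
    return sorted(ready_queue,
                  key=lambda p: dynamic_priority(p["nice"], p["burst"]),
                  reverse=True)

queue = [{"pid": 1, "nice": 10, "burst": 80},
         {"pid": 2, "nice": 0, "burst": 10},
         {"pid": 3, "nice": 19, "burst": 50}]
order = [p["pid"] for p in schedule(queue)]  # -> [2, 1, 3]
```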
Implementation of the common phrase index method on the phrase query for information retrieval
NASA Astrophysics Data System (ADS)
Fatmawati, Triyah; Zaman, Badrus; Werdiningsih, Indah
2017-08-01
With the development of technology, finding information in news text has become easy, because news text is distributed not only in print media, such as newspapers, but also in electronic media that can be accessed using a search engine. When searching for relevant documents with a search engine, a phrase is often used as the query. The number of words that make up the phrase query and their positions clearly affect the relevance of the documents produced, and thus the accuracy of the information obtained. Based on this problem, the purpose of this research was to analyze the implementation of the common phrase index method for information retrieval. The research was conducted on English news text and implemented in a prototype to determine the relevance level of the documents produced. The system is built with stages of pre-processing, indexing, term weighting calculation, and cosine similarity calculation, and then displays the document search results ordered by cosine similarity. System testing was conducted using 100 documents and 20 queries, and the results were used for the evaluation stage: first, relevant documents were determined using the kappa statistic; second, the system success rate was determined using precision, recall, and F-measure. In this research, the kappa statistic was 0.71, so the relevance judgments were suitable for system evaluation. The calculation of precision, recall, and F-measure produced a precision of 0.37, a recall of 0.50, and an F-measure of 0.43. From these results, it can be said that the success rate of the system in producing relevant documents is low.
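The evaluation arithmetic reported above is standard and easy to reproduce; in particular, the F-measure of 0.43 follows from the stated precision and recall via F1 = 2PR/(P+R):

```python
def precision_recall_f(retrieved, relevant):
    """Standard IR evaluation over sets of document ids."""
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved)
    recall = hits / len(relevant)
    f = 2 * precision * recall / (precision + recall) if hits else 0.0
    return precision, recall, f

# The paper's reported averages imply its F-measure:
p, r = 0.37, 0.50
f = 2 * p * r / (p + r)  # ~0.425, i.e. 0.43 to two decimals
```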
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, S; Guerrero, M; Zhang, B
Purpose: To implement a comprehensive non-measurement-based verification program for patient-specific IMRT QA. Methods: Based on published guidelines, a robust IMRT QA program should assess the following components: 1) accuracy of dose calculation, 2) accuracy of data transfer from the treatment planning system (TPS) to the record-and-verify (RV) system, 3) treatment plan deliverability, and 4) accuracy of plan delivery. Results: We have implemented an IMRT QA program that consists of four components: 1) an independent re-calculation of the dose distribution in the patient anatomy with a commercial secondary dose calculation program, Mobius3D (Mobius Medical Systems, Houston, TX), with dose accuracy evaluation using gamma analysis, PTV mean dose, PTV coverage to 95%, and organ-at-risk mean dose; 2) an automated in-house-developed plan comparison system that compares all relevant plan parameters, such as MU, MLC position, beam isocenter position, collimator, gantry, couch, field size settings, and bolus placement, between the plan and the RV system; 3) use of the RV system to check plan deliverability, further confirmed using the "mode-up" function on the treatment console for plans receiving warnings; and 4) implementation of a comprehensive weekly MLC QA, in addition to routine accelerator monthly and daily QA. Among 1200 verifications, there were 9 cases of suspicious calculations, 5 cases of delivery failure, no data transfer errors, and no failures of weekly MLC QA. The 9 suspicious cases were due to the PTV extending to the skin or to heterogeneity correction effects, which would not have been caught using phantom measurement-based QA. The delivery failures were due to rounding variation of MLC positions between the planning system and the RV system. Conclusion: A very efficient, yet comprehensive, non-measurement-based patient-specific QA program has been implemented and used clinically for about 18 months with excellent results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giuseppe Palmiotti
In this work, the implementation of a collision history-based approach to sensitivity/perturbation calculations in the Monte Carlo code SERPENT is discussed. The proposed methods allow the calculation of the effects of nuclear data perturbation on several response functions: the effective multiplication factor, reaction rate ratios, and bilinear ratios (e.g., effective kinetics parameters). SERPENT results are compared to ERANOS and TSUNAMI Generalized Perturbation Theory calculations for two fast metallic systems and for a PWR pin-cell benchmark. New methods for the calculation of sensitivities to angular scattering distributions are also presented, which adopt fully continuous (in energy and angle) Monte Carlo estimators.
A New Method for Setting Calculation Sequence of Directional Relay Protection in Multi-Loop Networks
NASA Astrophysics Data System (ADS)
Haijun, Xiong; Qi, Zhang
2016-08-01
The workload of relay protection setting calculation in multi-loop networks may be reduced effectively by optimizing the setting calculation sequence. A new method for ordering the setting calculations of directional distance relay protection in multi-loop networks, based on the minimum broken-nodes cost vector (MBNCV), is proposed to solve the problems experienced with current methods. Existing methods based on the minimum breakpoint set (MBPS) break more edges when untying the loops in the dependency relationships of relays, possibly leading to more iterative workload in setting calculations. A model-driven approach based on behavior trees (BT) is presented to improve adaptability to similar problems. After extending the BT model with real-time system characteristics, a timed BT is derived and the dependency relationships in the multi-loop network are modeled. The model is translated into communicating sequential processes (CSP) models, and an optimized setting calculation sequence for the multi-loop network is finally computed by tools. A five-node multi-loop network is used as an example to demonstrate the effectiveness of the modeling and calculation method. Several further examples were calculated, with results indicating that the method effectively reduces the number of forcibly broken edges for protection setting calculation in multi-loop networks.
Calculation of α/γ equilibria in SA508 grade 3 steels for intercritical heat treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, B.J.; Kim, H.D.; Hong, J.H.
1998-05-01
An attempt has been made to suggest an optimum temperature for intercritical heat treatment of an SA508 grade 3 steel for nuclear pressure vessels, based on thermodynamic calculation of the α/γ phase equilibria. A thermodynamic database constructed for the Fe-Mn-Ni-Mo-Cr-Si-V-Al-C-N ten-component system and an empirical criterion that the amount of reformed austenite should be around 40 pct were used for the thermodynamic calculation and the derivation of the optimum heat-treatment temperature, respectively. The calculated optimum temperature, 720 C, was in good agreement with an experimentally determined temperature of 725 C obtained through an independent experimental investigation of the same steel. The agreement between the calculated and measured fractions of reformed austenite during the intercritical heat treatment was also confirmed. Based on the agreement between calculation and experiment, it could be concluded that thermodynamic calculations can be successfully applied to materials and/or process design as an additional tool alongside already-established technology, and that the currently constructed thermodynamic database for steel systems shows an accuracy that makes such applications possible.
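For intuition, the 40 pct austenite criterion is a phase-fraction condition that, in a simplified binary picture, reduces to the lever rule; the study itself of course used a full ten-component thermodynamic (CALPHAD) calculation. A toy sketch with hypothetical tie-line compositions:

```python
def phase_fraction(c0, c_alpha, c_gamma):
    """Lever rule: fraction of austenite (gamma) in a two-phase
    alpha + gamma field for overall composition c0, given the
    tie-line endpoint compositions (all in wt.% of the
    partitioning element, e.g. carbon)."""
    return (c0 - c_alpha) / (c_gamma - c_alpha)

# Hypothetical tie-line compositions at some intercritical temperature:
f_gamma = phase_fraction(c0=0.20, c_alpha=0.02, c_gamma=0.47)  # ~0.40
```

In the actual workflow, the tie-line endpoints themselves come from the thermodynamic database at each trial temperature, and the temperature is varied until the austenite fraction hits the target.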
NASA Astrophysics Data System (ADS)
Li, Haohan; Wu, Yong; Zeng, Xiaojun; Wang, Xiaohan; Zhao, Daiqing
2017-06-01
Thermophysical properties, such as density, specific heat, viscosity and thermal conductivity, vary sharply near the critical point. Evaluating these properties of hydrocarbons accurately is crucial for further research on fuel systems. A calculation program based on four widely used equations of state (EoS) was used for comparison, and the results indicated that calculations based on the Peng-Robinson (PR) equation of state achieve the best prediction accuracy of the four. Owing to its low computational cost and high accuracy, the evaluation method proposed in this paper can be put into practical application for fuel system design.
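The density evaluation described above can be illustrated with a minimal sketch of the Peng-Robinson route from temperature and pressure to molar volume. The helper name `pr_molar_volume` is ours, not the paper's; the propane critical constants are standard handbook values.

```python
import numpy as np

R = 8.314462618  # universal gas constant, J/(mol*K)

def pr_molar_volume(T, P, Tc, Pc, omega):
    """Real molar-volume roots (m^3/mol) of the Peng-Robinson EoS at
    T (K) and P (Pa). Smallest root is liquid-like, largest vapor-like."""
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    A = a * alpha * P / (R * T)**2
    B = b * P / (R * T)
    # PR cubic in the compressibility factor Z
    coeffs = [1.0, -(1.0 - B), A - 3.0 * B**2 - 2.0 * B, -(A * B - B**2 - B**3)]
    Z = np.roots(coeffs)
    Z = np.sort(Z[np.isreal(Z)].real)
    Z = Z[Z > B]                      # keep only physical roots (V > b)
    return Z * R * T / P

# Example: propane vapor at 300 K, 1 bar (Tc = 369.83 K, Pc = 4.248 MPa, omega = 0.152)
V = pr_molar_volume(300.0, 1.0e5, 369.83, 4.248e6, 0.152)
rho = 0.0441 / V[-1]                  # vapor density, kg/m^3 (M = 44.1 g/mol)
```

Near the critical point the cubic returns multiple real roots, which is where root selection (and the accuracy differences among EoS discussed in the abstract) matters most.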
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paratte, J.M.; Pelloni, S.; Grimm, P.
1991-04-01
This paper analyzes the capability of various code systems and JEF-1-based nuclear data libraries to compute light water reactor lattices by comparing calculations with results from thermal reactor benchmark experiments TRX and BAPL and with previously published values. With the JEF-1 evaluation, eigenvalues are generally well predicted within 8 mk (1 mk = 0.001) or less by all code systems, and all methods give reasonable results for the measured reaction rate ratios within, or not too far from, the experimental uncertainty.
Smart electric vehicle (EV) charging and grid integration apparatus and methods
Gadh, Rajit; Mal, Siddhartha; Prabhu, Shivanand; Chu, Chi-Cheng; Sheikh, Omar; Chung, Ching-Yen; He, Lei; Xiao, Bingjun; Shi, Yiyu
2015-05-05
An expert system manages a power grid wherein charging stations are connected to the power grid, with electric vehicles connected to the charging stations, whereby the expert system selectively backfills power from connected electric vehicles to the power grid through a grid tie inverter (if present) within the charging stations. In more traditional usage, the expert system allows for electric vehicle charging, coupled with user preferences as to charge time, charge cost, and charging station capabilities, without exceeding the power grid capacity at any point. A robust yet accurate state of charge (SOC) calculation method is also presented, whereby initially an open circuit voltage (OCV) based on sampled battery voltages and currents is calculated, and then the SOC is obtained based on a mapping between a previously measured reference OCV (ROCV) and SOC. The OCV-SOC calculation method accommodates virtually any battery type with any current profile.
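The OCV-to-SOC mapping step can be sketched as table interpolation. The reference curve below is invented for illustration (real tables are chemistry-specific), and the single internal-resistance correction is one simple way to recover OCV from a loaded terminal-voltage sample.

```python
import numpy as np

# Hypothetical reference OCV (V) vs. SOC (%) table for one cell,
# measured offline at rest -- NOT data from the patent.
ROCV = np.array([3.00, 3.45, 3.60, 3.70, 3.80, 3.95, 4.10, 4.20])
SOC  = np.array([0.0, 10.0, 25.0, 40.0, 60.0, 80.0, 95.0, 100.0])

def estimate_soc(v_terminal, i_load, r_internal):
    """Estimate SOC (%) from one terminal-voltage sample: first recover
    the open-circuit voltage by removing the resistive drop (discharge
    current positive), then interpolate OCV -> SOC in the table."""
    ocv = v_terminal + i_load * r_internal
    return float(np.interp(ocv, ROCV, SOC))

soc = estimate_soc(v_terminal=3.72, i_load=2.0, r_internal=0.04)  # OCV = 3.80 V
```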
A weight modification sequential method for VSC-MTDC power system state estimation
NASA Astrophysics Data System (ADS)
Yang, Xiaonan; Zhang, Hao; Li, Qiang; Guo, Ziming; Zhao, Kun; Li, Xinpeng; Han, Feng
2017-06-01
This paper presents an effective sequential approach based on weight modification for VSC-MTDC power system state estimation, called the weight modification sequential method. The proposed approach simplifies the AC/DC system state estimation algorithm by modifying the weights of state quantities to keep the matrix dimension constant. The weight modification sequential method also makes the VSC-MTDC system state estimation results more accurate and increases the speed of calculation. The effectiveness of the proposed weight modification sequential method is demonstrated and validated on a modified IEEE 14-bus system.
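The idea of adjusting weights while keeping matrix dimensions fixed can be illustrated with a generic weighted-least-squares estimator; the toy linear measurement model below is ours and far simpler than a real VSC-MTDC network.

```python
import numpy as np

def wls_estimate(H, z, w):
    """Weighted-least-squares state estimate x minimizing
    (z - Hx)^T W (z - Hx), with W = diag(w)."""
    W = np.diag(w)
    G = H.T @ W @ H                      # gain matrix, dimension fixed by H
    return np.linalg.solve(G, H.T @ W @ z)

# Toy model: 3 measurements of a 2-state system.
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([1.0, 2.0])
z = H @ x_true
# Down-weighting a measurement changes its influence without changing
# any matrix dimension -- the spirit of the weight-modification idea.
x_hat = wls_estimate(H, z, w=np.array([1.0, 1.0, 0.01]))
```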
Fast calculation of the line-spread-function by transversal directions decoupling
NASA Astrophysics Data System (ADS)
Parravicini, Jacopo; Tartara, Luca; Hasani, Elton; Tomaselli, Alessandra
2016-07-01
We propose a simplified method to calculate the optical spread function of a paradigmatic system constituted by a pupil-lens with a line-shaped illumination (‘line-spread-function’). Our approach is based on decoupling the two transversal directions of the beam and treating the propagation by means of the Fourier optics formalism. This requires simpler calculations with respect to the more usual Bessel-function-based method. The model is discussed and compared with standard calculation methods by carrying out computer simulations. The proposed approach is found to be much faster than the Bessel-function-based one (CPU time ≲ 5% of the standard method), while the results of the two methods present a very good mutual agreement.
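A minimal numerical sketch of the decoupled one-dimensional treatment: for a uniformly illuminated slit pupil, the line-spread function is the squared modulus of a 1-D Fourier transform, which an FFT makes cheap compared with evaluating Bessel-function integrals. The grid size and unit aperture below are arbitrary choices, not the paper's parameters.

```python
import numpy as np

# One-dimensional pupil: unit transmission inside the aperture, zero outside.
N = 4096
u = np.linspace(-4.0, 4.0, N)          # pupil coordinate; aperture half-width = 1
pupil = (np.abs(u) <= 1.0).astype(float)

# Decoupled 1-D Fourier-optics step: the line-spread function is the
# squared modulus of the 1-D Fourier transform of the pupil.
field = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(pupil)))
lsf = np.abs(field)**2
lsf /= lsf.max()                       # normalized LSF, peak at the DC bin
```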
Khosravi, H R; Nodehi, Mr Golrokh; Asnaashari, Kh; Mahdavi, S R; Shirazi, A R; Gholami, S
2012-07-01
The aim of this study was to evaluate and analytically compare different calculation algorithms applied in our country radiotherapy centers base on the methodology developed by IAEA for treatment planning systems (TPS) commissioning (IAEA TEC-DOC 1583). Thorax anthropomorphic phantom (002LFC CIRS inc.), was used to measure 7 tests that simulate the whole chain of external beam TPS. The dose were measured with ion chambers and the deviation between measured and TPS calculated dose was reported. This methodology, which employs the same phantom and the same setup test cases, was tested in 4 different hospitals which were using 5 different algorithms/ inhomogeneity correction methods implemented in different TPS. The algorithms in this study were divided into two groups including correction based and model based algorithms. A total of 84 clinical test case datasets for different energies and calculation algorithms were produced, which amounts of differences in inhomogeneity points with low density (lung) and high density (bone) was decreased meaningfully with advanced algorithms. The number of deviations outside agreement criteria was increased with the beam energy and decreased with advancement of the TPS calculation algorithm. Large deviations were seen in some correction based algorithms, so sophisticated algorithms, would be preferred in clinical practices, especially for calculation in inhomogeneous media. Use of model based algorithms with lateral transport calculation, is recommended. Some systematic errors which were revealed during this study, is showing necessity of performing periodic audits on TPS in radiotherapy centers. © 2012 American Association of Physicists in Medicine.
Evaluation of audit-based performance measures for dental care plans.
Bader, J D; Shugars, D A; White, B A; Rindal, D B
1999-01-01
Although a set of clinical performance measures, i.e., a report card for dental plans, has been designed for use with administrative data, most plans do not have administrative data systems containing the data needed to calculate the measures. Therefore, we evaluated the use of a set of proxy clinical performance measures calculated from data obtained through chart audits. Chart audits were conducted in seven dental programs--three public health clinics, two dental health maintenance organizations (DHMO), and two preferred provider organizations (PPO). In all instances audits were completed by clinical staff who had been trained using telephone consultation and a self-instructional audit manual. The performance measures were calculated for the seven programs, audit reliability was assessed in four programs, and for one program the audit-based proxy measures were compared with the measures calculated from administrative data. The audit-based measures were sensitive to known differences in program performance. The chart audit procedures yielded reasonably reliable data. However, missing data in patient charts rendered the calculation of some measures problematic--namely, caries and periodontal disease assessment and experience. Agreement between administrative and audit-based measures was good for most, but not all, measures in one program. The audit-based proxy measures represent a complex but feasible approach to the calculation of performance measures for those programs lacking robust administrative data systems. However, until charts contain more complete diagnostic information (i.e., periodontal charting and diagnostic codes or reason-for-treatment codes), accurate determination of these aspects of clinical performance will be difficult.
Sefton, Gerri; Lane, Steven; Killen, Roger; Black, Stuart; Lyon, Max; Ampah, Pearl; Sproule, Cathryn; Loren-Gosling, Dominic; Richards, Caitlin; Spinty, Jean; Holloway, Colette; Davies, Coral; Wilson, April; Chean, Chung Shen; Carter, Bernie; Carrol, E D
2017-05-01
Pediatric Early Warning Scores are advocated to assist health professionals to identify early signs of serious illness or deterioration in hospitalized children. Scores are derived from the weighting applied to recorded vital signs and clinical observations reflecting deviation from a predetermined "norm." Higher aggregate scores trigger an escalation in care aimed at preventing critical deterioration. Process errors made while recording these data, including plotting or calculation errors, have the potential to impede the reliability of the score. To test this hypothesis, we conducted a controlled study of documentation using five clinical vignettes. We measured the accuracy of vital sign recording, score calculation, and time taken to complete documentation using a handheld electronic physiological surveillance system, VitalPAC Pediatric, compared with traditional paper-based charts. We explored the user acceptability of both methods using a Web-based survey. Twenty-three staff participated in the controlled study. The electronic physiological surveillance system improved the accuracy of vital sign recording, 98.5% versus 85.6%, P < .02, Pediatric Early Warning Score calculation, 94.6% versus 55.7%, P < .02, and saved time, 68 versus 98 seconds, compared with paper-based documentation, P < .002. Twenty-nine staff completed the Web-based survey. They perceived that the electronic physiological surveillance system offered safety benefits by reducing human error while providing instant visibility of recorded data to the entire clinical team.
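The score-calculation step that the electronic system automates is, in essence, a lookup of each vital sign in a weighting band followed by a sum, which is exactly where manual plotting and arithmetic errors arise. The bands below are invented for illustration and are not any validated PEWS chart.

```python
# Made-up scoring bands: (lower bound, upper bound, points). Real PEWS
# charts are age-banded and locally validated.
BANDS = {
    "resp_rate":  [(0, 20, 0), (20, 30, 1), (30, 40, 2), (40, 999, 3)],
    "heart_rate": [(0, 100, 0), (100, 120, 1), (120, 140, 2), (140, 999, 3)],
}

def subscore(vital, value):
    """Points for one vital sign: find the band containing the value."""
    for lo, hi, pts in BANDS[vital]:
        if lo <= value < hi:
            return pts
    raise ValueError(f"{vital}={value} outside chart range")

def pews(obs):
    """Aggregate early-warning score: sum of per-vital subscores."""
    return sum(subscore(name, value) for name, value in obs.items())

score = pews({"resp_rate": 34, "heart_rate": 125})  # 2 + 2
```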
NASA Astrophysics Data System (ADS)
Cui, Jia; Hong, Bei; Jiang, Xuepeng; Chen, Qinghua
2017-05-01
With the purpose of reinforcing correlation analysis of risk assessment threat factors, a dynamic assessment method of safety risks based on particle filtering is proposed, which takes threat analysis as the core. Based on risk assessment standards, the method selects threat indicators, applies a particle filtering algorithm to calculate the influence weights of the threat indicators, and determines information system risk levels in combination with state estimation theory. In order to improve the computational efficiency of the particle filtering algorithm, the k-means clustering algorithm is introduced: by clustering all particles and operating on the cluster centroids as representatives, the computational load is reduced. Empirical results indicate that the method reasonably captures the relations of mutual dependence and influence among risk elements. Under conditions of limited information, it provides a scientific basis for formulating a risk management control strategy.
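The k-means reduction of the particle set can be sketched in one dimension: cluster the particles, then carry forward only the weighted centroids. The quantile initialization and all numbers below are our illustrative choices, not the paper's.

```python
import numpy as np

def kmeans_reduce(particles, weights, k, iters=20):
    """Collapse a weighted 1-D particle set to k centroids (Lloyd's
    algorithm) so later filter updates operate on k representatives."""
    # deterministic init: spread initial centroids across the sample quantiles
    centroids = np.quantile(particles, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        # assign each particle to its nearest centroid
        d = np.abs(particles[:, None] - centroids[None, :])
        labels = d.argmin(axis=1)
        for j in range(k):
            m = labels == j
            if m.any():
                centroids[j] = np.average(particles[m], weights=weights[m])
    # each centroid inherits the total weight of its cluster
    cw = np.array([weights[labels == j].sum() for j in range(k)])
    return centroids, cw / cw.sum()

rng = np.random.default_rng(1)
particles = np.concatenate([rng.normal(0.0, 0.1, 500), rng.normal(5.0, 0.1, 500)])
weights = np.ones(1000) / 1000.0
centroids, cw = kmeans_reduce(particles, weights, k=2)
```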
NASA Astrophysics Data System (ADS)
Markelov, V.; Shukalov, A.; Zharinov, I.; Kostishin, M.; Kniga, I.
2016-04-01
The paper considers a course correction option applied before aircraft take-off, after inaccurate azimuth alignment of an inertial navigation system (INS) based on a platform attitude-and-heading reference system. The course correction is performed using the track angle defined by information received from the satellite navigation system (SNS). The correction includes calculating a track error during ground taxiing along straight sections before take-off and entering it into the onboard digital computer as an amendment for use in the current flight. The track error is calculated by statistically evaluating the comparison between the track angle derived from SNS information and the current course measured by the INS, over a given number of measurements on the available time interval. Test results for the course correction and recommendations for its application are given in the paper. Course correction based on SNS information can be used to improve the accuracy of aircraft path determination after accelerated INS preparation with inaccurate initial azimuth alignment.
Code of Federal Regulations, 2010 CFR
2010-10-01
... patient utilization calendar year as identified from Medicare claims is calendar year 2007. (4) Wage index... calculating the per-treatment base rate for 2011 are as follows: (1) Per patient utilization in CY 2007, 2008..., 2008 or 2009 to determine the year with the lowest per patient utilization. (2) Update of per treatment...
Building Interactive Simulations in Web Pages without Programming.
Mailen Kootsey, J; McAuley, Grant; Bernal, Julie
2005-01-01
A software system is described for building interactive simulations and other numerical calculations in Web pages. The system is based on a new Java-based software architecture named NumberLinX (NLX) that isolates each function required to build the simulation so that a library of reusable objects could be assembled. The NLX objects are integrated into a commercial Web design program for coding-free page construction. The model description is entered through a wizard-like utility program that also functions as a model editor. The complete system permits very rapid construction of interactive simulations without coding. A wide range of applications are possible with the system beyond interactive calculations, including remote data collection and processing and collaboration over a network.
Cost accounting and public reimbursement schemes in Spanish hospitals.
Sánchez-Martínez, Fernando; Abellán-Perpiñán, José-María; Martínez-Pérez, Jorge-Eduardo; Puig-Junoy, Jaume
2006-08-01
The objective of this paper is to provide a description and analysis of the main costing and pricing (reimbursement) systems employed by hospitals in the Spanish National Health System (NHS). Hospital cost calculations are mostly based on a full costing approach, as opposed to other systems like direct costing or activity-based costing. Regional and hospital differences arise in the method used to allocate indirect costs to cost centres and in the approach used to measure resource consumption. Costs are typically calculated by disaggregating expenditure and allocating it to cost centres, and then to patients and DRGs. Regarding public reimbursement systems, the impression is that unit costs are ignored, except for certain types of high-technology processes and treatments.
NASA Astrophysics Data System (ADS)
Dadashev, R. Kh.; Dzhambulatov, R. S.; Mezhidov, V. Kh.; Elimkhanov, D. Z.
2018-05-01
Concentration dependences of the surface tension and density of solutions of three-component acetone-ethanol-water systems and the bounding binary systems at 273 K are studied. The molar volume, adsorption, and composition of surface layers are calculated. Experimental data and calculations show that three-component solutions are close to ideal ones. The surface tensions of these solutions are calculated using semi-empirical and theoretical equations. Theoretical equations qualitatively convey the concentration dependence of surface tension. A semi-empirical method based on the Köhler equation allows us to predict the concentration dependence of surface tension within the experimental error.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Wenxiao; Daily, Michael D.; Baker, Nathan A.
2015-12-01
We demonstrate the accuracy and effectiveness of a Lagrangian particle-based method, smoothed particle hydrodynamics (SPH), to study diffusion in biomolecular systems by numerically solving the time-dependent Smoluchowski equation for continuum diffusion. The numerical method is first verified in simple systems and then applied to the calculation of ligand binding to an acetylcholinesterase monomer. Unlike previous studies, a reactive Robin boundary condition (BC), rather than the absolute absorbing (Dirichlet) boundary condition, is considered on the reactive boundaries. This new boundary condition treatment allows for the analysis of enzymes with "imperfect" reaction rates. Rates for inhibitor binding to mAChE are calculated at various ionic strengths and compared with experiment and other numerical methods. We find that imposition of the Robin BC improves agreement between calculated and experimental reaction rates. Although this initial application focuses on a single monomer system, our new method provides a framework to explore broader applications of SPH in larger-scale biomolecular complexes by taking advantage of its Lagrangian particle-based nature.
PVWatts ® Calculator: India (Fact Sheet)
DOE Office of Scientific and Technical Information (OSTI.GOV)
The PVWatts ® Calculator for India was released by the National Renewable Energy Laboratory in 2013. The online tool estimates electricity production and the monetary value of that production of grid-connected roof- or ground-mounted crystalline silicon photovoltaics systems based on a few simple inputs. This factsheet provides a broad overview of the PVWatts ® Calculator for India.
NASA Technical Reports Server (NTRS)
Homan, D. J.
1977-01-01
A computer program written to calculate the proximity aerodynamic force and moment coefficients of the Orbiter/Shuttle Carrier Aircraft (SCA) vehicles based on flight instrumentation is described. The ground reduced aerodynamic coefficients and instrumentation errors (GRACIE) program was developed as a tool to aid in flight test verification of the Orbiter/SCA separation aerodynamic data base. The program calculates the force and moment coefficients of each vehicle in proximity to the other, using the load measurement system data, flight instrumentation data and the vehicle mass properties. The uncertainty in each coefficient is determined, based on the quoted instrumentation accuracies. A subroutine manipulates the Orbiter/747 Carrier Separation Aerodynamic Data Book to calculate a comparable set of predicted coefficients for comparison to the calculated flight test data.
Trust-based information system architecture for personal wellness.
Ruotsalainen, Pekka; Nykänen, Pirkko; Seppälä, Antto; Blobel, Bernd
2014-01-01
Modern eHealth, ubiquitous health and personal wellness systems take place in an unsecure and ubiquitous information space where no predefined trust occurs. This paper presents a novel information model and an architecture for trust-based privacy management of personal health and wellness information in ubiquitous environments. The architecture enables a person to calculate a dynamic and context-aware trust value for each service provider and to use it to design personal privacy policies for trustworthy use of health and wellness services. For trust calculation, a novel set of measurable context-aware and health information-sensitive attributes is developed. The architecture enables a person to manage his or her privacy in a ubiquitous environment by formulating context-aware and service provider-specific policies. Focus groups and information modelling were used to develop a wellness information model. A system analysis method based on sequential steps, combining the results of the analysis of privacy and trust concerns with the selection of trust and privacy services, was used to develop the information system architecture. Its services (e.g. trust calculation, decision support, policy management and policy binding services) and the developed attributes enable a person to define situation-aware policies that regulate the way his or her wellness and health information is processed.
SU-E-T-226: Correction of a Standard Model-Based Dose Calculator Using Measurement Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, M; Jiang, S; Lu, W
Purpose: To propose a hybrid method that combines the advantages of the model-based and measurement-based methods for independent dose calculation. Model-based dose calculation, such as collapsed-cone-convolution/superposition (CCCS) or the Monte Carlo method, models dose deposition in the patient body accurately; however, due to a lack of detailed knowledge about the linear accelerator (LINAC) head, commissioning for an arbitrary machine is tedious and challenging in case of hardware changes. By contrast, the measurement-based method characterizes the beam accurately but lacks the capability of dose deposition modeling in heterogeneous media. Methods: We used a standard CCCS calculator, commissioned with published data, as the standard model calculator. For a given machine, water phantom measurements were acquired. A set of dose distributions was also calculated using the CCCS for the same setup. The differences between the measurements and the CCCS results were tabulated and used as the commissioning data for a measurement-based calculator, here a direct-ray-tracing calculator (ΔDRT). The proposed independent dose calculation consists of the following steps: 1. calculate D_model using CCCS; 2. calculate D_ΔDRT using ΔDRT; 3. combine: D = D_model + D_ΔDRT. Results: The hybrid dose calculation was tested on digital phantoms and patient CT data for standard fields and IMRT plans. The results were compared to doses calculated by the treatment planning system (TPS). The agreement between the hybrid method and the TPS was within 3%, 3 mm for over 98% of the volume in the phantom studies and lung patients. Conclusion: The proposed hybrid method uses the same commissioning data as those for the measurement-based method and can be easily extended to any non-standard LINAC. The results met the accuracy, independence, and simple-commissioning criteria for an independent dose calculator.
PVWatts Version 1 Technical Reference
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobos, A. P.
2013-10-01
The NREL PVWatts(TM) calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and makes several hidden assumptions about performance parameters. This technical reference details the individual sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimation.
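The sequence of calculations reduces, in skeleton form, to irradiance scaling, a cell-temperature correction, and a DC-to-AC derate. The coefficients below are common illustrative defaults, not the exact sub-model parameters documented in the technical reference.

```python
# A stripped-down sketch of a PVWatts-style chain: plane-of-array
# irradiance -> temperature-corrected DC power -> AC power via a fixed
# derate. gamma is a power temperature coefficient (1/degC); values are
# illustrative assumptions.
def ac_power(poa_wm2, t_cell_c, p_dc0_w, gamma=-0.005, t_ref=25.0, derate=0.77):
    p_dc = p_dc0_w * (poa_wm2 / 1000.0) * (1.0 + gamma * (t_cell_c - t_ref))
    return p_dc * derate

# 4 kW DC system, 800 W/m^2 on the array, 45 degC cell temperature
p = ac_power(poa_wm2=800.0, t_cell_c=45.0, p_dc0_w=4000.0)
```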
Parametric Studies of the Ejector Process within a Turbine-Based Combined-Cycle Propulsion System
NASA Technical Reports Server (NTRS)
Georgiadis, Nicholas J.; Walker, James F.; Trefny, Charles J.
1999-01-01
Performance characteristics of the ejector process within a turbine-based combined-cycle (TBCC) propulsion system are investigated using the NPARC Navier-Stokes code. The TBCC concept integrates a turbine engine with a ramjet into a single propulsion system that may efficiently operate from takeoff to high Mach number cruise. At the operating point considered, corresponding to a flight Mach number of 2.0, an ejector serves to mix flow from the ramjet duct with flow from the turbine engine. The combined flow then passes through a diffuser where it is mixed with hydrogen fuel and burned. Three sets of fully turbulent Navier-Stokes calculations are compared with predictions from a cycle code developed specifically for the TBCC propulsion system. A baseline ejector system is investigated first. The Navier-Stokes calculations indicate that the flow leaving the ejector is not completely mixed, which may adversely affect the overall system performance. Two additional sets of calculations are presented; one set that investigated a longer ejector region (to enhance mixing) and a second set which also utilized the longer ejector but replaced the no-slip surfaces of the ejector with slip (inviscid) walls in order to resolve discrepancies with the cycle code. The three sets of Navier-Stokes calculations and the TBCC cycle code predictions are compared to determine the validity of each of the modeling approaches.
System and method for progressive band selection for hyperspectral images
NASA Technical Reports Server (NTRS)
Fisher, Kevin (Inventor)
2013-01-01
Disclosed herein are systems, methods, and non-transitory computer-readable storage media for progressive band selection for hyperspectral images. A system having a module configured to control a processor to practice the method calculates a virtual dimensionality of a hyperspectral image having multiple bands to determine a quantity Q of how many bands are needed for a threshold level of information, ranks each band based on a statistical measure, selects Q bands from the multiple bands to generate a subset of bands based on the virtual dimensionality, and generates a reduced image based on the subset of bands. This approach can create reduced datasets of full hyperspectral images tailored for individual applications. The system uses a metric specific to a target application to rank the image bands, and then selects the most useful bands. The number of bands selected can be specified manually or calculated from the hyperspectral image's virtual dimensionality.
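The rank-and-select step can be sketched with per-band variance standing in for an application-specific metric; the cube dimensions and Q below are arbitrary illustration, not values from the patent.

```python
import numpy as np

def select_bands(cube, q, metric=np.var):
    """Rank each spectral band of a (rows, cols, bands) cube by a
    statistical measure and keep the top q bands, in original order."""
    scores = np.array([metric(cube[:, :, b]) for b in range(cube.shape[2])])
    keep = np.sort(np.argsort(scores)[::-1][:q])   # top-q indices, sorted
    return cube[:, :, keep], keep

rng = np.random.default_rng(0)
cube = rng.normal(size=(8, 8, 20))
cube[:, :, 5] *= 10.0            # plant one high-variance band
reduced, keep = select_bands(cube, q=4)
```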
NASA Astrophysics Data System (ADS)
Kalinkina, M. E.; Kozlov, A. S.; Labkovskaia, R. I.; Pirozhnikova, O. I.; Tkalich, V. L.; Shmakov, N. A.
2018-05-01
The object of research is the component base of control and automation system devices, including annular elastic sensing elements, methods for their modeling, calculation algorithms, and software packages for automating their design. The article is devoted to the development of a computer-aided design system for elastic sensing elements used in weight- and force-measuring automation devices. Based on mathematical modeling of deformation processes in a solid, together with the results of static and dynamic analysis, the elastic elements are calculated using the capabilities of modern numerical-simulation software. In the simulation, the model was discretized with a hexahedral finite-element mesh with a maximum element size not exceeding 2.5 mm. The results of the modal and dynamic analysis are presented in this article.
Anthony, Bedford; Schembri, Adrian J.
2006-01-01
Australian Rules Football, governed by the Australian Football League (AFL), is the most popular winter sport played in Australia. Like North American team-based leagues such as the NFL, NBA and NHL, the AFL uses a draft system for rookie players to join a team’s list. The existing method of allocating draft selections in the AFL is simply based on the reverse order of each team’s finishing position for that season, with teams winning less than or equal to 5 regular season matches obtaining an additional early round priority draft pick. Much criticism has been levelled at the existing system since it rewards losing teams and does not encourage poorly performing teams to win matches once their season is effectively over. We propose a probability-based system that allocates a score based on teams that win ‘unimportant’ matches (akin to Carl Morris’ definition of importance). We base the calculation of ‘unimportance’ on the likelihood of a team making the final eight following each round of the season. We then investigate a variety of approaches based on the ‘unimportance’ measure to derive a score for ‘unimportant’ and unlikely wins. We explore derivatives of this system, compare past draft picks with those obtained under our system, and discuss the attractiveness of teams knowing the draft reward for winning each match in a season. Key Points Draft choices are allocated using a probabilistic approach that rewards teams for winning unimportant matches. The method is based upon Carl Morris’ Importance and probabilistic calculations of making the finals. The importance of a match is calculated probabilistically to arrive at a DScore. Higher DScores are weighted towards teams winning unimportant matches which in turn lead to higher draft selections. Provides an alternative to current draft systems that are based on ‘losing to win’. PMID:24357945
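A sketch of the unimportance-based credit: Morris-style importance is the swing in a team's finals probability between winning and losing a match, and a win earns draft credit inversely to that swing. The function names and the linear credit rule are our illustration, not the paper's exact DScore formula, and the finals probabilities are stand-in numbers.

```python
def importance(p_finals_if_win, p_finals_if_loss):
    """Morris-style importance: swing in finals probability."""
    return p_finals_if_win - p_finals_if_loss

def dscore_increment(won, p_finals_if_win, p_finals_if_loss):
    """Credit a win in inverse proportion to the match's importance;
    losses earn nothing, so there is no incentive to 'lose to win'."""
    if not won:
        return 0.0
    return 1.0 - importance(p_finals_if_win, p_finals_if_loss)

# A team out of finals contention wins a "dead rubber": importance is
# tiny (0.02 vs 0.01), so the win earns near-maximal draft credit.
credit = dscore_increment(True, 0.02, 0.01)
```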
Mutual Information Rate and Bounds for It
Baptista, Murilo S.; Rubinger, Rero M.; Viana, Emilson R.; Sartorelli, José C.; Parlitz, Ulrich; Grebogi, Celso
2012-01-01
The amount of information exchanged per unit of time between two nodes in a dynamical network or between two data sets is a powerful concept for analysing complex systems. This quantity, known as the mutual information rate (MIR), is calculated from the mutual information, which is rigorously defined only for random systems. Moreover, the definition of mutual information is based on probabilities of significant events. This work offers a simple alternative way to calculate the MIR in dynamical (deterministic) networks or between two time series (not fully deterministic), and to calculate its upper and lower bounds without having to calculate probabilities, but rather in terms of well known and well defined quantities in dynamical systems. As possible applications of our bounds, we study the relationship between synchronisation and the exchange of information in a system of two coupled maps and in experimental networks of coupled oscillators. PMID:23112809
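The probability-based definition the authors work around can itself be sketched with a naive plug-in histogram estimator, which also illustrates why probability estimation is the painful part the bounds avoid. Bin counts and sample sizes below are arbitrary.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in estimate of I(X;Y) in bits from a joint histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)      # marginal of X
    py = pxy.sum(axis=0, keepdims=True)      # marginal of Y
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())

rng = np.random.default_rng(1)
x = rng.normal(size=20000)
mi_self = mutual_information(x, x)                       # large: y determines x
mi_indep = mutual_information(x, rng.normal(size=20000))  # near zero
```

The estimate is biased by binning and finite samples, which is exactly the difficulty that motivates computing bounds on the MIR from dynamical quantities instead.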
Sixth-order wave aberration theory of ultrawide-angle optical systems.
Lu, Lijun; Cao, Yiqing
2017-10-20
In this paper, we develop a sixth-order wave aberration theory of ultrawide-angle optical systems such as fisheye lenses. Based on the concept and approach used to develop the wave aberration theory of plane-symmetric optical systems, we first derive the sixth-order intrinsic wave aberrations and the fifth-order ray aberrations; second, we present a method to calculate the pupil aberration of this kind of optical system to develop the extrinsic aberrations; third, the relation of aperture-ray coordinates between adjacent optical surfaces is fitted with a second-order polynomial to improve the calculation accuracy of the wave aberrations of a fisheye lens with a large acceptance aperture. Finally, the resultant aberration expressions are applied to calculate the aberrations of two design examples of fisheye lenses; the calculation results are compared with ray-tracing results from Zemax software to validate the aberration expressions.
SPREADSHEET BASED SCALING CALCULATIONS AND MEMBRANE PERFORMANCE
Many membrane element manufacturers provide a computer program to aid buyers in the use of their elements. However, to date there are few examples of fully integrated public domain software available for calculating reverse osmosis and nanofiltration system performance. The Total...
Integrated Safety Risk Reduction Approach to Enhancing Human-Rated Spaceflight Safety
NASA Astrophysics Data System (ADS)
Mikula, J. F. Kip
2005-12-01
This paper explores and defines the currently accepted concept and philosophy of safety improvement based on reliability enhancement (called here Reliability Enhancement Based Safety Theory [REBST]). In this theory, a reliability calculation is used as a measure of the safety achieved on the program. This calculation may be based on a math model or a fault tree analysis (FTA) of the system, or on an event tree analysis (ETA) of the system's operational mission sequence. In each case, the numbers used in this calculation are hardware failure rates gleaned from past similar programs. As part of this paper, a fictional but representative case study is provided that helps to illustrate the problems and inaccuracies of this approach to safety determination. Then a safety determination and enhancement approach based on hazard analysis, worst-case analysis, and safety risk determination (called here Worst Case Based Safety Theory [WCBST]) is included. This approach is defined and detailed using the same example case study as the REBST one. In the end, it is concluded that an approach combining the two theories works best to reduce safety risk.
Treecode-based generalized Born method
NASA Astrophysics Data System (ADS)
Xu, Zhenli; Cheng, Xiaolin; Yang, Haizhao
2011-02-01
We have developed a treecode-based O(Nlog N) algorithm for the generalized Born (GB) implicit solvation model. Our treecode-based GB (tGB) is based on the GBr6 [J. Phys. Chem. B 111, 3055 (2007)], an analytical GB method with a pairwise descreening approximation for the R6 volume integral expression. The algorithm is composed of a cutoff scheme for the effective Born radii calculation, and a treecode implementation of the GB charge-charge pair interactions. Test results demonstrate that the tGB algorithm can reproduce the vdW surface based Poisson solvation energy with an average relative error less than 0.6% while providing an almost linear-scaling calculation for a representative set of 25 proteins with different sizes (from 2815 atoms to 65456 atoms). For a typical system of 10k atoms, the tGB calculation is three times faster than the direct summation as implemented in the original GBr6 model. Thus, our tGB method provides an efficient way for performing implicit solvent GB simulations of larger biomolecular systems at longer time scales.
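The pairwise sum that the treecode accelerates can be written down directly. The sketch below evaluates a Still-type generalized Born energy as a plain O(N²) double loop, the reference against which an O(N log N) treecode would be compared; the effective Born radii are assumed precomputed (e.g., by R6 descreening), and the coordinates, charges, and radii are illustrative:

```python
import math

def born_energy(coords, charges, radii, eps_in=1.0, eps_out=78.5):
    """Still-type pairwise generalized Born solvation energy.

    A direct O(N^2) double sum over all pairs (including self terms);
    the treecode in the paper accelerates exactly this kind of sum.
    """
    pref = -0.5 * (1.0 / eps_in - 1.0 / eps_out)
    energy = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(n):
            r2 = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j]))
            rirj = radii[i] * radii[j]
            # Still's effective distance function f_GB
            f_gb = math.sqrt(r2 + rirj * math.exp(-r2 / (4.0 * rirj)))
            energy += pref * charges[i] * charges[j] / f_gb
    return energy

# Two opposite unit charges 3 A apart, Born radii 1.5 A (illustrative)
e = born_energy([(0.0, 0.0, 0.0), (3.0, 0.0, 0.0)], [1.0, -1.0], [1.5, 1.5])
```

Production GB codes compute the radii from the molecular geometry and often scale the pair term further; here those details are collapsed into the assumed inputs.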
Peng, Hai-Qin; Liu, Yan; Gao, Xue-Long; Wang, Hong-Wu; Chen, Yi; Cai, Hui-Yi
2017-11-01
While point-source pollution has gradually been brought under control in recent years, non-point-source pollution has become increasingly prominent. The receiving waters are frequently polluted by the initial stormwater from the separate stormwater system and by wastewater entering stormwater pipes from sewage pipes. Consequently, calculating the intercepted runoff depth has become a problem that must be resolved urgently for initial stormwater pollution management. Accurate calculation of the intercepted runoff depth provides a solid foundation for selecting the appropriate size of intercepting facilities in drainage and interception projects. This study establishes a model of the separate stormwater system for the Yishan Building watershed of Fuzhou City using InfoWorks Integrated Catchment Management (InfoWorks ICM), which can predict the stormwater flow velocity and the discharge at each outlet after each rainfall. The intercepted runoff depth is calculated from the stormwater quality and from the environmental capacity of the receiving waters. The average intercepted runoff depth over six rainfall events is 4.1 mm when calculated from stormwater quality and 4.4 mm when calculated from the environmental capacity of the receiving waters. The intercepted runoff depth thus differs depending on the basis of calculation. The selection of the intercepted runoff depth depends on the goal of water quality control, the self-purification capacity of the water bodies, and other regional factors.
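The bookkeeping behind an intercepted runoff depth is simple: the intercepted volume expressed as an equivalent water depth over the catchment (1 mm of depth over 1 ha corresponds to 10 m³). The numbers below are hypothetical, not data from the Yishan Building watershed:

```python
def intercepted_depth_mm(intercepted_volume_m3, catchment_area_ha):
    """Equivalent depth (mm) of an intercepted stormwater volume:
    1 mm of depth over 1 hectare corresponds to 10 m3 of water."""
    return intercepted_volume_m3 / (10.0 * catchment_area_ha)

# e.g. 820 m3 routed to treatment from a 20 ha catchment
d = intercepted_depth_mm(820.0, 20.0)
```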
Performance calculation and simulation system of high energy laser weapon
NASA Astrophysics Data System (ADS)
Wang, Pei; Liu, Min; Su, Yu; Zhang, Ke
2014-12-01
High energy laser weapons are ready for some of today's most challenging military applications. Based on an analysis of the main tactical/technical indices and the combat process of a high energy laser weapon, a performance calculation and simulation system for high energy laser weapons was established. First, the index decomposition and workflow of the high energy laser weapon were defined. The entire system is composed of six parts: classical target, laser weapon platform, detection sensor, tracking and pointing control, laser atmospheric propagation, and damage assessment modules. Then, the index calculation modules were designed. Finally, an anti-missile interception simulation was performed. The system can provide a reference and basis for the analysis and evaluation of high energy laser weapon effectiveness.
AQUIS: A PC-based air inventory and permit manager
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, A.E.; Huber, C.C.; Tschanz, J.
1992-01-01
The Air Quality Utility Information System (AQUIS) was developed to calculate and track sources, emissions, stacks, permits, and related information. The system runs on IBM-compatible personal computers with dBASE IV and tracks more than 1,200 data items distributed among various source categories. AQUIS is currently operating at nine US Air Force facilities that have up to 1,000 sources. The system provides a flexible reporting capability that permits users who are unfamiliar with database structure to design and prepare reports containing user-specified information. In addition to six criteria pollutants, AQUIS calculates compound-specific emissions and allows users to enter their own emission estimates.
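A minimal sketch of the kind of emission bookkeeping such a system performs (the activity rate, emission factor, and control efficiency below are illustrative values, not AP-42 or AQUIS data):

```python
def annual_emissions_tons(activity, emission_factor_lb_per_unit,
                          control_efficiency=0.0):
    """Annual emissions in short tons:
    activity rate x emission factor x (1 - control efficiency)."""
    pounds = activity * emission_factor_lb_per_unit * (1.0 - control_efficiency)
    return pounds / 2000.0

# 5000 units of activity, 1.6 lb/unit uncontrolled, 75% control device
e_tons = annual_emissions_tons(5000.0, 1.6, control_efficiency=0.75)
```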
Wang, Lilie; Ding, George X
2018-06-12
Therapeutic radiation to cancer patients is accompanied by unintended radiation to organs outside the treatment field. It is known that model-based dose algorithms have limitations in calculating out-of-field doses. This study evaluated the out-of-field dose calculated by the Varian Eclipse treatment planning system (v.11 with the AAA algorithm) in realistic treatment plans, with the goal of estimating the uncertainties of calculated organ doses. Photon beam phase-space files for the TrueBeam linear accelerator were provided by Varian. These were used as incident sources in EGSnrc Monte Carlo simulations of radiation transport through the downstream jaws and MLC. Dynamic movements of the MLC leaves were fully modeled based on treatment plans using IMRT or VMAT techniques. The Monte Carlo calculated out-of-field doses were then compared with those calculated by Eclipse. The dose comparisons were performed for different beam energies and treatment sites, including head-and-neck, lung, and pelvis. For 6 MV (FF/FFF), 10 MV (FF/FFF), and 15 MV (FF) beams, Eclipse underestimated out-of-field local doses by 30%-50% compared with Monte Carlo calculations when the local dose was <1% of the prescribed dose. The accuracy of out-of-field dose calculations using Eclipse improves when the collimator jaws are set at the smallest possible aperture for the MLC openings. The Eclipse system consistently underestimates out-of-field dose by a factor of 2 for all beam energies studied at local dose levels of less than 1% of the prescribed dose. These findings provide useful information on the uncertainties of out-of-field organ doses calculated by the Eclipse treatment planning system. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Favazza, Christopher P.; Fetterly, Kenneth A.; Hangiandreou, Nicholas J.; Leng, Shuai; Schueler, Beth A.
2015-01-01
Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain metrics, such as contrast-to-noise ratio and spatial resolution, or by restricted access to the data needed to calculate Fourier-domain measurements, such as the modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and can comprehensively evaluate medical-imaging systems. We present a spatial-domain channelized Hotelling observer model to calculate the detectability index (DI) of disks of different sizes and compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks. PMID:26158086
Calculation of Sensitivity Derivatives in an MDAO Framework
NASA Technical Reports Server (NTRS)
Moore, Kenneth T.
2012-01-01
During gradient-based optimization of a system, it is necessary to generate the derivatives of each objective and constraint with respect to each design parameter. If the system is multidisciplinary, it may consist of a set of smaller "components" with some arbitrary data interconnection and process workflow. Analytical derivatives in these components can be used to improve the speed and accuracy of the derivative calculation over a purely numerical calculation; however, a multidisciplinary system may include both components for which derivatives are available and components for which they are not. Three methods to calculate the sensitivity of a mixed multidisciplinary system are presented: the finite difference method, where the derivatives are calculated numerically; the chain rule method, where the derivatives are successively cascaded along the system's network graph; and the analytic method, where the derivatives come from the solution of a linear system of equations. Some improvements to these methods, to accommodate mixed multidisciplinary systems, are also presented; in particular, a new method is introduced to allow existing derivatives to be used inside of finite difference. All three methods are implemented and demonstrated in the open-source MDAO framework OpenMDAO. It was found that there are advantages to each of them depending on the system being solved.
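The contrast between the finite difference and chain rule methods can be sketched on a toy system of two chained components (plain Python functions standing in for components; this is not OpenMDAO API code):

```python
def fd_derivative(func, x, h=1e-6):
    """Finite difference method: purely numerical total derivative."""
    return (func(x + h) - func(x - h)) / (2.0 * h)

# Two chained "components": A computes y = x**2, B computes z = 3*y + 1
comp_a = lambda x: x ** 2
comp_b = lambda y: 3.0 * y + 1.0
system = lambda x: comp_b(comp_a(x))

# Chain rule method: cascade analytic partials along the data connections
da_dx = lambda x: 2.0 * x   # analytic partial of component A
db_dy = lambda y: 3.0       # analytic partial of component B
chain_derivative = lambda x: db_dy(comp_a(x)) * da_dx(x)

x0 = 2.0
numeric = fd_derivative(system, x0)   # approximately 12.0
analytic = chain_derivative(x0)       # exactly 12.0
```

In a mixed system, a component without analytic partials would contribute a finite-differenced block inside the same cascade, which is the situation the paper's improvements address.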
NASA Astrophysics Data System (ADS)
Ma, J.; Liu, Q.
2018-02-01
This paper presents an improved short circuit calculation method, based on pre-computed surfaces, to determine the short circuit current of a distribution system with multiple doubly fed induction generators (DFIGs). The short circuit current injected into the power grid by a DFIG is determined by its low voltage ride through (LVRT) control and protection under grid fault. However, existing methods are too complex for practical engineering calculation of the DFIG short circuit current. A short circuit calculation method based on a pre-computed surface was therefore proposed by constructing the surface of short circuit current as a function of the calculating impedance and the open circuit voltage. The short circuit currents were derived by taking into account the rotor excitation and the crowbar activation time. Finally, the pre-computed surfaces of short circuit current at different times were established, and the procedure of DFIG short circuit calculation considering LVRT was designed. The correctness of the proposed method was verified by simulation.
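At run time, a pre-computed surface lookup of this kind reduces to interpolation over a tabulated grid. The sketch below uses bilinear interpolation on a toy 2×2 surface; the two axes stand in for the calculating impedance and the open circuit voltage, and all numbers are illustrative:

```python
def bilinear(x, y, grid_x, grid_y, values):
    """Bilinear interpolation on a pre-computed surface.

    values[i][j] holds the tabulated short circuit current at
    (grid_x[i], grid_y[j]).  Assumes ascending axes and (x, y)
    inside the grid.
    """
    i = max(k for k in range(len(grid_x) - 1) if grid_x[k] <= x)
    j = max(k for k in range(len(grid_y) - 1) if grid_y[k] <= y)
    tx = (x - grid_x[i]) / (grid_x[i + 1] - grid_x[i])
    ty = (y - grid_y[j]) / (grid_y[j + 1] - grid_y[j])
    return ((1 - tx) * (1 - ty) * values[i][j]
            + tx * (1 - ty) * values[i + 1][j]
            + (1 - tx) * ty * values[i][j + 1]
            + tx * ty * values[i + 1][j + 1])

# Toy 2x2 surface (per-unit current); all numbers illustrative
gx, gy = [0.0, 1.0], [0.0, 1.0]
surface = [[4.0, 8.0], [2.0, 4.0]]
i_sc = bilinear(0.5, 0.5, gx, gy, surface)
```

One such surface would be tabulated per time point of interest (e.g., before and after crowbar activation), and the lookup repeated against each.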
NASA Astrophysics Data System (ADS)
Morton, Daniel R.
Modern image guided radiation therapy involves the use of an isocentrically mounted imaging system to take radiographs of a patient's position before the start of each treatment. Image guidance helps to minimize errors associated with a patient's setup, but the radiation dose received by patients from imaging must be managed so that it adds no additional risk. The Varian On-Board Imager (OBI) (Varian Medical Systems, Inc., Palo Alto, CA) does not have an automatic exposure control system and therefore requires exposure factors to be selected manually. Without patient-specific exposure factors, images may become saturated and require multiple unnecessary exposures. A software-based automatic exposure control system has been developed to predict optimal, patient-specific exposure factors. The OBI system was modelled in terms of the x-ray tube output and detector response in order to calculate the level of detector saturation for any exposure situation. Digitally reconstructed radiographs are produced via ray-tracing through the patients' volumetric datasets that are acquired for treatment planning. The ray-trace determines the attenuation of the patient and the subsequent x-ray spectra incident on the imaging detector. The resulting spectra are used in the detector response model to determine the exposure levels required to minimize detector saturation. Images calculated for various phantoms showed good agreement with the images that were acquired on the OBI. Overall, regions of detector saturation were accurately predicted, and the detector response for non-saturated regions in images of an anthropomorphic phantom was generally calculated to be within 5 to 10% of the measured values. Calculations performed on patient data yielded results similar to those for the phantom images, with the calculated images able to determine detector saturation in close agreement with images acquired during treatment.
Overall, it was shown that the system model and calculation method could potentially be used to predict patients' exposure factors before their treatment begins, thus preventing the need for multiple exposures.
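The core of such a ray-trace is a Beer-Lambert line integral evaluated per energy bin of the incident spectrum. A minimal sketch (the spectrum weights and attenuation coefficients below are illustrative, not measured OBI data):

```python
import math

def transmitted_fraction(spectrum, path):
    """Beer-Lambert transmission of a polyenergetic beam along one ray.

    spectrum: list of (weight, {material: mu_per_cm}) per energy bin
    path:     list of (material, thickness_cm) segments along the ray
    Returns the transmitted fraction of the incident fluence.
    """
    total = 0.0
    for weight, mu in spectrum:
        line_integral = sum(mu[m] * d for m, d in path)
        total += weight * math.exp(-line_integral)
    return total

# Illustrative two-bin spectrum and a 10 cm water / 2 cm bone ray path
spectrum = [(0.7, {"water": 0.20, "bone": 0.50}),   # low-energy bin
            (0.3, {"water": 0.15, "bone": 0.35})]   # high-energy bin
path = [("water", 10.0), ("bone", 2.0)]
frac = transmitted_fraction(spectrum, path)
```

Feeding the transmitted spectrum per detector pixel into a detector response model then yields the predicted signal, from which saturation can be judged before any exposure is taken.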
Promoting Graphical Thinking: Using Temperature and a Graphing Calculator to Teach Kinetics Concepts
ERIC Educational Resources Information Center
Cortes-Figueroa, Jose E.; Moore-Russo, Deborah A.
2004-01-01
A combination of graphical thinking with chemical and physical theories in the classroom is encouraged by using the Calculator-Based Laboratory System (CBL) with a temperature sensor and graphing calculator. The theory of first-order kinetics is logically explained with the aid of the cooling or heating of the metal bead of the CBL's temperature…
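The underlying analogy is that Newton's law of cooling has the same exponential form as a first-order rate law, so a log plot of the temperature excess against time is a straight line. A minimal sketch with illustrative parameters:

```python
import math

# Newton's law of cooling: T(t) = T_env + (T0 - T_env) * exp(-k * t),
# so ln(T - T_env) plotted against t is a line of slope -k, exactly as
# ln(concentration) vs t is for first-order kinetics.
# All parameter values below are illustrative.
T_env, T0, k = 22.0, 80.0, 0.05           # degC, degC, 1/s
times = [0.0, 10.0, 20.0, 30.0, 40.0]
temps = [T_env + (T0 - T_env) * math.exp(-k * t) for t in times]

# Recover the "rate constant" from two points, as a student would read
# it off the linearized graph
slope = (math.log(temps[1] - T_env) - math.log(temps[0] - T_env)) \
        / (times[1] - times[0])
rate_constant = -slope
```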
Pan, Wenxiao; Daily, Michael; Baker, Nathan A.
2015-05-07
Background: The calculation of diffusion-controlled ligand binding rates is important for understanding enzyme mechanisms as well as designing enzyme inhibitors. Methods: We demonstrate the accuracy and effectiveness of a Lagrangian particle-based method, smoothed particle hydrodynamics (SPH), to study diffusion in biomolecular systems by numerically solving the time-dependent Smoluchowski equation for continuum diffusion. Unlike previous studies, a reactive Robin boundary condition (BC), rather than the absolute absorbing (Dirichlet) BC, is considered on the reactive boundaries. This new BC treatment allows for the analysis of enzymes with “imperfect” reaction rates. Results: The numerical method is first verified in simple systems and thenmore » applied to the calculation of ligand binding to a mouse acetylcholinesterase (mAChE) monomer. Rates for inhibitor binding to mAChE are calculated at various ionic strengths and compared with experiment and other numerical methods. We find that imposition of the Robin BC improves agreement between calculated and experimental reaction rates. Conclusions: Although this initial application focuses on a single monomer system, our new method provides a framework to explore broader applications of SPH in larger-scale biomolecular complexes by taking advantage of its Lagrangian particle-based nature.« less
Leak detection by mass balance effective for Norman Wells line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liou, J.C.P.
Mass-balance calculations for leak detection have been shown to be as effective as a leading software system in a comparison based on a major Canadian crude-oil pipeline. The calculations and NovaCorp's Leakstop software each detected leaks of approximately 4% or greater on Interprovincial Pipe Line (IPL) Inc.'s Norman Wells pipeline. Insufficient data exist to assess the performance of the two methods for leaks smaller than 4%. Pipeline leak detection using such software-based systems is common. Their effectiveness is measured by how small a leak can be detected and how quickly. The algorithms used and measurement uncertainties determine leak detectability.
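The mass-balance principle itself is compact: over a time window, inflow minus outflow minus the change in line inventory (linepack) should be zero within measurement uncertainty. A sketch using an illustrative 4% threshold, echoing the detectability floor reported above:

```python
def mass_imbalance(flow_in, flow_out, linepack_change, dt):
    """Mass-balance residual over a time window dt: volume in, minus
    volume out, minus the change stored in the line (linepack)."""
    return flow_in * dt - flow_out * dt - linepack_change

def leak_detected(residual, nominal_flow, dt, threshold=0.04):
    """Flag a leak when the residual exceeds a fraction of throughput;
    0.04 mirrors the ~4% detectability floor reported for the line."""
    return residual > threshold * nominal_flow * dt

# A 5% leak over one hour on a 100 m3/h line, steady linepack
res = mass_imbalance(100.0, 95.0, 0.0, 1.0)
```

In practice the threshold is set from the instrumentation uncertainties, which is exactly why smaller leaks than about 4% could not be resolved here.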
Dynamic Buffer Capacity in Acid-Base Systems.
Michałowska-Kaczmarczyk, Anna M; Michałowski, Tadeusz
The generalized concept of 'dynamic' buffer capacity βV is related to electrolytic systems of different complexity in which acid-base equilibria are involved. The resulting formulas are presented in a uniform and consistent form. The detailed calculations relate to two Britton-Robinson buffers, taken as examples.
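For the simplest static case, a single monoprotic acid, the classical Van Slyke formula gives the buffer capacity in closed form; the generalized dynamic βV of the paper reduces to expressions of this family. A sketch:

```python
import math

def buffer_capacity(pH, C, pKa, pKw=14.0):
    """Van Slyke buffer capacity (mol/L per pH unit) of a single
    monoprotic acid of total concentration C mol/L:
    beta = ln(10) * ([H+] + [OH-] + C*Ka*[H+]/(Ka + [H+])**2)."""
    h = 10.0 ** (-pH)
    oh = 10.0 ** (pH - pKw)
    ka = 10.0 ** (-pKa)
    return math.log(10.0) * (h + oh + C * ka * h / (ka + h) ** 2)

# Acetate-like example: capacity peaks at pH = pKa (half-neutralization)
b_peak = buffer_capacity(4.76, 0.1, 4.76)
b_off = buffer_capacity(6.76, 0.1, 4.76)
```

A Britton-Robinson buffer simply sums one such term per acid group of its three component acids, plus the water terms.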
Exact Exchange calculations for periodic systems: a real space approach
NASA Astrophysics Data System (ADS)
Natan, Amir; Marom, Noa; Makmal, Adi; Kronik, Leeor; Kuemmel, Stephan
2011-03-01
We present a real-space method for exact-exchange Kohn-Sham calculations of periodic systems. The method is based on self-consistent solutions of the optimized effective potential (OEP) equation on a three-dimensional non-orthogonal grid, using norm conserving pseudopotentials. These solutions can be either exact, using the S-iteration approach, or approximate, using the Krieger, Li, and Iafrate (KLI) approach. We demonstrate, using a variety of systems, the importance of singularity corrections and use of appropriate pseudopotentials.
Design of new face-centered cubic high entropy alloys by thermodynamic calculation
NASA Astrophysics Data System (ADS)
Choi, Won-Mi; Jung, Seungmun; Jo, Yong Hee; Lee, Sunghak; Lee, Byeong-Joo
2017-09-01
A new face-centered cubic (fcc) high entropy alloy system with non-equiatomic compositions has been designed by utilizing a CALculation of PHAse Diagram (CALPHAD)-type thermodynamic calculation technique. The new alloy system is based on the representative fcc high entropy alloy, the Cantor alloy, an equiatomic Co-Cr-Fe-Mn-Ni five-component alloy, but fully or partly replaces the cobalt with vanadium and adopts non-equiatomic compositions. Alloy compositions expected to have an fcc single-phase structure between 700 °C and the melting temperature are proposed. All the proposed alloys are experimentally confirmed by X-ray diffraction analysis to retain the fcc single phase during materials processing (> 800 °C). It is shown that there are more chances to find fcc single-phase high entropy alloys if attention is paid to non-equiatomic composition regions, and that CALPHAD thermodynamic calculation can be an efficient tool for this purpose. An alloy design technique based on thermodynamic calculation is demonstrated, and the applicability and limitations of the approach as a design tool for high entropy alloys are discussed.
The Sorghum Headworm Calculator: A speedy tool for headworm management
USDA-ARS?s Scientific Manuscript database
The Sorghum Headworm Calculator is an interactive decision support system for sorghum headworm management. It was designed to be easily accessible and usable. It provides users with organized information on identification, sampling, and management using images, descriptions and research-based mana...
Detection of cat-eye effect echo based on unit APD
NASA Astrophysics Data System (ADS)
Wu, Dong-Sheng; Zhang, Peng; Hu, Wen-Gang; Ying, Jia-Ju; Liu, Jie
2016-10-01
The cat-eye effect echo of an optical system can be detected with a CCD, but the detection range is limited to several kilometers. To achieve long-range or even ultra-long-range detection, an APD should be selected as the detector because of its high sensitivity. A detection system for the cat-eye effect echo based on a unit APD is designed in this paper. The implementation scheme and key technologies of the detection system are presented. The detection performance of the system, including detection range, detection probability, and false alarm probability, is modeled. Based on the model, the performance of the detection system is analyzed using typical parameters. The results of numerical calculation show that, within a 20 km detection range, the echo signal-to-noise ratio is greater than six, the detection probability is greater than 99.9%, and the false alarm probability is less than 0.1%. To verify the detection performance, we built an experimental platform for the detection system according to the design scheme and carried out field experiments. The experimental results agree well with the numerical calculations, which proves that the detection system based on the unit APD is feasible for remote detection of the cat-eye effect echo.
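The trade-off among SNR, detection probability, and false alarm probability can be sketched with a simple threshold detector in additive Gaussian noise. This is a simplified stand-in for the APD receiver model (it does not reproduce the paper's exact figures), and the 3.1σ threshold is an illustrative choice:

```python
import math

def q_tail(x):
    """Standard normal tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def detection_stats(snr, threshold_sigma):
    """False alarm and detection probabilities of a threshold detector
    in additive Gaussian noise; snr is signal amplitude / noise sigma."""
    p_fa = q_tail(threshold_sigma)          # noise alone crosses threshold
    p_d = q_tail(threshold_sigma - snr)     # signal + noise crosses it
    return p_fa, p_d

# SNR of 6 with an illustrative 3.1-sigma threshold
p_fa, p_d = detection_stats(6.0, 3.1)
```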
NASA Technical Reports Server (NTRS)
Camarda, C. J.; Adelman, H. M.
1984-01-01
The implementation of static and dynamic structural-sensitivity derivative calculations in a general purpose, finite-element computer program denoted the Engineering Analysis Language (EAL) System is described. Derivatives are calculated with respect to structural parameters, specifically, member sectional properties including thicknesses, cross-sectional areas, and moments of inertia. Derivatives are obtained for displacements, stresses, vibration frequencies and mode shapes, and buckling loads and mode shapes. Three methods for calculating derivatives are implemented (analytical, semianalytical, and finite differences), and comparisons of computer time and accuracy are made. Results are presented for four examples: a swept wing, a box beam, a stiffened cylinder with a cutout, and a space radiometer-antenna truss.
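The semianalytical method can be sketched on a toy two-degree-of-freedom system: differentiating K(p)u = F gives du/dp = K⁻¹(dF/dp − (dK/dp)u), with dK/dp evaluated by finite differences of the assembled matrix. The spring model below is illustrative, not an EAL example:

```python
def solve2(K, f):
    """Direct solve of a 2x2 linear system by Cramer's rule."""
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    return [(K[1][1] * f[0] - K[0][1] * f[1]) / det,
            (K[0][0] * f[1] - K[1][0] * f[0]) / det]

def stiffness(p):
    """Toy 2-DOF spring-chain stiffness that scales with parameter p."""
    return [[2.0 * p, -p], [-p, p]]

p0, h, F = 1.0, 1e-6, [0.0, 1.0]
u = solve2(stiffness(p0), F)                      # displacements
# Semianalytical step: finite-difference the matrix, not the solution
dK = [[(stiffness(p0 + h)[i][j] - stiffness(p0 - h)[i][j]) / (2.0 * h)
       for j in range(2)] for i in range(2)]
rhs = [-sum(dK[i][j] * u[j] for j in range(2)) for i in range(2)]  # dF/dp = 0
du_dp = solve2(stiffness(p0), rhs)   # analytic answer here is [-1, -2]
```

The purely analytical method would supply dK/dp in closed form, and pure finite differencing would re-solve the whole system at perturbed p; this hybrid is typically the best accuracy/cost compromise, which is what the paper's comparisons examine.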
NASA Astrophysics Data System (ADS)
Lu, Zenghai; Kasaragod, Deepa K.; Matcher, Stephen J.
2012-03-01
We demonstrate theoretically and experimentally that the phase retardance and relative optic-axis orientation of a sample can be calculated without prior knowledge of the actual value of the phase modulation amplitude when using a polarization-sensitive optical coherence tomography system based on continuous polarization modulation (CPM-PS-OCT). We also demonstrate that the sample Jones matrix can be calculated at any values of the phase modulation amplitude in a reasonable range depending on the system effective signal-to-noise ratio. This has fundamental importance for the development of clinical systems by simplifying the polarization modulator drive instrumentation and eliminating its calibration procedure. This was validated on measurements of a three-quarter waveplate and an equine tendon sample by a fiber-based swept-source CPM-PS-OCT system.
Mitigation of Engine Inlet Distortion Through Adjoint-Based Design
NASA Technical Reports Server (NTRS)
Ordaz, Irian; Rallabhandi, Sriram; Nielsen, Eric J.; Diskin, Boris
2017-01-01
The adjoint-based design capability in FUN3D is extended to allow efficient gradient- based optimization and design of concepts with highly integrated aero-propulsive systems. A circumferential distortion calculation, along with the derivatives needed to perform adjoint-based design, have been implemented in FUN3D. This newly implemented distortion calculation can be used not only for design but also to drive the existing mesh adaptation process and reduce the error associated with the fan distortion calculation. The design capability is demonstrated by the shape optimization of an in-house aircraft concept equipped with an aft fuselage propulsor. The optimization objective is the minimization of flow distortion at the aerodynamic interface plane of this aft fuselage propulsor.
Sefton, Gerri; Lane, Steven; Killen, Roger; Black, Stuart; Lyon, Max; Ampah, Pearl; Sproule, Cathryn; Loren-Gosling, Dominic; Richards, Caitlin; Spinty, Jean; Holloway, Colette; Davies, Coral; Wilson, April; Chean, Chung Shen; Carter, Bernie; Carrol, E.D.
2017-01-01
Pediatric Early Warning Scores are advocated to assist health professionals to identify early signs of serious illness or deterioration in hospitalized children. Scores are derived from the weighting applied to recorded vital signs and clinical observations reflecting deviation from a predetermined “norm.” Higher aggregate scores trigger an escalation in care aimed at preventing critical deterioration. Process errors made while recording these data, including plotting or calculation errors, have the potential to compromise the reliability of the score. To test this hypothesis, we conducted a controlled study of documentation using five clinical vignettes. We measured the accuracy of vital sign recording, score calculation, and time taken to complete documentation using a handheld electronic physiological surveillance system, VitalPAC Pediatric, compared with traditional paper-based charts. We explored the user acceptability of both methods using a Web-based survey. Twenty-three staff participated in the controlled study. The electronic physiological surveillance system improved the accuracy of vital sign recording, 98.5% versus 85.6%, P < .02, Pediatric Early Warning Score calculation, 94.6% versus 55.7%, P < .02, and saved time, 68 versus 98 seconds, compared with paper-based documentation, P < .002. Twenty-nine staff completed the Web-based survey. They perceived that the electronic physiological surveillance system offered safety benefits by reducing human error while providing instant visibility of recorded data to the entire clinical team. PMID:27832032
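The score aggregation that the electronic system automates can be sketched as a banded lookup per vital sign. The bands and weights below are hypothetical, for illustration only; they are not those of any published PEWS chart or of VitalPAC Pediatric:

```python
# Hypothetical bands and weights -- illustration only.
def band_score(value, bands):
    """Return the weight of the highest band whose lower bound is <= value.
    bands: list of (lower_bound_inclusive, weight), ascending by bound."""
    score = 0
    for lower, weight in bands:
        if value >= lower:
            score = weight
    return score

HEART_RATE_BANDS = [(0, 3), (60, 1), (90, 0), (130, 1), (160, 3)]   # bpm
RESP_RATE_BANDS = [(0, 3), (15, 1), (20, 0), (40, 1), (60, 3)]      # /min

def pews(heart_rate, resp_rate):
    """Aggregate score; higher aggregates would trigger escalation."""
    return band_score(heart_rate, HEART_RATE_BANDS) + \
           band_score(resp_rate, RESP_RATE_BANDS)

score = pews(heart_rate=150, resp_rate=25)
```

Automating exactly this lookup is what removes the plotting and calculation errors measured in the study.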
Calculation of the surface tension of liquid Ga-based alloys
NASA Astrophysics Data System (ADS)
Dogan, Ali; Arslan, Hüseyin
2018-05-01
As is well known, Eyring and his collaborators applied the structure theory to the properties of binary liquid mixtures. In this work, the Eyring model has been extended to calculate the surface tension of liquid Ga-Bi, Ga-Sn and Ga-In binary alloys. It was found that the addition of Sn, In and Bi to Ga leads to a significant decrease in the surface tension of the three Ga-based alloy systems, especially for the Ga-Bi alloys. The calculated surface tension values of these alloys exhibit negative deviation from the corresponding ideal mixing isotherms. Moreover, a comparison between the calculated results and the corresponding literature data indicates good agreement.
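The ideal mixing isotherm against which the deviation is measured is simply the mole-fraction-weighted average of the pure-component surface tensions; a sketch with illustrative round-number values (not the paper's data):

```python
def sigma_ideal(x1, sigma1, sigma2):
    """Ideal mixing surface tension isotherm of a binary liquid:
    the mole-fraction-weighted average of the pure components."""
    return x1 * sigma1 + (1.0 - x1) * sigma2

# Illustrative round numbers (N/m), not measured Ga-alloy data
s_mix = sigma_ideal(0.5, 0.72, 0.38)
```

A "negative deviation" means the measured or Eyring-calculated value at a given composition falls below this weighted average, reflecting enrichment of the lower-tension component at the surface.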
A FINITE-DIFFERENCE, DISCRETE-WAVENUMBER METHOD FOR CALCULATING RADAR TRACES
A hybrid of the finite-difference method and the discrete-wavenumber method is developed to calculate radar traces. The method is based on a three-dimensional model defined in the Cartesian coordinate system; the electromagnetic properties of the model are symmetric with respect ...
Dash, Bibek
2018-04-26
The present work deals with a density functional theory (DFT) study of porous organic framework materials containing - groups for CO2 capture. In this study, first-principles calculations were performed for CO2 adsorption using N-containing covalent organic framework (COF) models. Ab initio and DFT-based methods were used to characterize the N-containing porous model systems based on their interaction energies upon complexing with CO2 and nitrogen gas. Binding energies (BEs) of CO2 and N2 molecules with the polymer framework were calculated with DFT methods. Hybrid B3LYP and second-order MP2 methods combined with the Pople 6-31G(d,p) basis set and the correlation-consistent basis sets cc-pVDZ, cc-pVTZ and aug-cc-pVDZ were used to calculate BEs. The effect of linker groups in the designed covalent organic framework model system on the CO2 and N2 interactions was studied using quantum calculations.
Neff, Michael; Rauhut, Guntram
2014-02-05
Multidimensional potential energy surfaces obtained from explicitly correlated coupled-cluster calculations, with further corrections for high-order correlation contributions, scalar relativistic effects, and core-correlation energy contributions, were generated in a fully automated fashion for the double-minimum benchmark systems OH3(+) and NH3. The black-box generation of the potentials is based on normal coordinates, which were used in the underlying multimode expansions of the potentials and of the μ-tensor within the Watson operator. Normal coordinates are not the optimal choice for describing double-minimum potentials, and the question remains whether they can be used for accurate calculations at all. However, their unique definition is an appealing feature, which removes remaining errors in truncated potential expansions arising from different choices of curvilinear coordinate systems. Fully automated calculations are presented which demonstrate that the proposed scheme allows for the determination of energy levels and tunneling splittings as a routine application. Copyright © 2013 Elsevier B.V. All rights reserved.
Study of an External Neutron Source for an Accelerator-Driven System using the PHITS Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sugawara, Takanori; Iwasaki, Tomohiko; Chiba, Takashi
A code system for the Accelerator Driven System (ADS) has been under development for analyzing dynamic behaviors of a subcritical core coupled with an accelerator. This code system, named DSE (Dynamics calculation code system for a Subcritical system with an External neutron source), consists of an accelerator part and a reactor part. The accelerator part employs a database, calculated using PHITS, for investigating effects related to the accelerator such as changes of beam energy, beam diameter, void generation, and target level. This analysis method using the database may introduce some errors into dynamics calculations, since the neutron source data derived from the database carry some errors from the fitting or interpolation procedures. In this study, the effects of various events are investigated to confirm that the method based on the database is appropriate.
Real-time POD-CFD Wind-Load Calculator for PV Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huayamave, Victor; Divo, Eduardo; Ceballos, Andres
The primary objective of this project is to create an accurate web-based real-time wind-load calculator. This is of paramount importance for (1) the rapid and accurate assessments of the uplift and downforce loads on a PV mounting system, (2) identifying viable solutions from available mounting systems, and therefore helping reduce the cost of mounting hardware and installation. Wind loading calculations for structures are currently performed according to the American Society of Civil Engineers/ Structural Engineering Institute Standard ASCE/SEI 7; the values in this standard were calculated from simplified models that do not necessarily take into account relevant characteristics such as those from full 3D effects, end effects, turbulence generation and dissipation, as well as minor effects derived from shear forces on installation brackets and other accessories. This standard does not include provisions that address the special requirements of rooftop PV systems, and attempts to apply this standard may lead to significant design errors as wind loads are incorrectly estimated. Therefore, an accurate calculator would be of paramount importance for the preliminary assessments of the uplift and downforce loads on a PV mounting system, identifying viable solutions from available mounting systems, and therefore helping reduce the cost of the mounting system and installation. The challenge is that although a full-fledged three-dimensional computational fluid dynamics (CFD) analysis would properly and accurately capture the complete physical effects of air flow over PV systems, it would be impractical for this tool, which is intended to be a real-time web-based calculator. CFD routinely requires enormous computation times to arrive at solutions that can be deemed accurate and grid-independent even in powerful and massively parallel computer platforms.
This work is expected not only to accelerate solar deployment nationwide, but also to help reach the SunShot Initiative goals of reducing the total installed cost of solar energy systems by 75%. The largest percentage of the total installed cost of a solar energy system is associated with balance-of-system cost, with up to 40% going to “soft” costs, which include customer acquisition, financing, contracting, permitting, interconnection, inspection, installation, performance, operations, and maintenance. The calculator being developed will provide wind loads in real time for any solar system design and suggest the proper installation configuration and hardware; it is therefore anticipated to reduce system design, installation, and permitting costs.
[Calculation of optic system of superfine medical endoscopes based on gradient elements].
Díakonov, S Iu; Korolev, A V
1994-01-01
The application of gradient optical elements to rigid endoscopes decreases their diameter to 1.5-2.0 mm. The mathematical dependences given here determine the aperture and field characteristics, the focal lengths and focal segments, and the resolution of optical systems based on gradient optics. Parameters of gradient optical systems for superfine medical endoscopes are characterized and their practical application is shown.
NASA Astrophysics Data System (ADS)
Bakker, Ronald J.
2018-06-01
The program AqSo_NaCl has been developed to calculate pressure - molar volume - temperature - composition (p-V-T-x) properties, enthalpy, and heat capacity of the binary H2O-NaCl system. The algorithms are designed in BASIC within the Xojo programming environment and can be operated as a stand-alone application on Macintosh-, Windows-, and Unix-based operating systems. A series of ten self-instructive interfaces (modules) has been developed to calculate fluid inclusion properties and pore fluid properties. The modules may be used to calculate properties of pure NaCl, the halite liquidus, the halite vapourus, dew-point and bubble-point curves (liquid-vapour), the critical point, and solid-liquid-vapour (SLV) curves at temperatures above 0.1 °C (with halite) and below 0.1 °C (with ice or hydrohalite). Isochores of homogeneous fluids and unmixed fluids in a closed system can be calculated and exported to a .txt file. Isochores calculated for fluid inclusions can be corrected according to the volumetric properties of quartz. Microthermometric data, i.e. dissolution temperatures and homogenization temperatures, can be used to calculate bulk fluid properties of fluid inclusions. Alternatively, in the absence of a total homogenization temperature, the volume fraction of the liquid phase in fluid inclusions can be used to obtain bulk properties.
Structural analyses of a rigid pavement overlaying a sub-surface void
NASA Astrophysics Data System (ADS)
Adam, Fatih Alperen
Pavement failures are very hazardous to public safety and serviceability. These failures are mainly caused by subsurface voids, cracks, and undulation at the slab-base interface. On the other hand, current structural analysis procedures for rigid pavement assume that the slab-base interface is perfectly planar and that no imperfections exist in the sub-surface soil. This assumption would be violated if severe erosion were to occur due to inadequate drainage, thermal movements, and/or mechanical loading. Until now, the effect of erosion was only considered in the faulting performance model, but not with regard to transverse cracking at the mid-slab edge. In this research, the bottom-up fatigue cracking potential caused by the combined effects of wheel loading and a localized imperfection, in the form of a void below the mid-slab edge, is studied. A robust stress and surface-deflection analysis was also conducted to evaluate the influence of a sub-surface void on layer-moduli back-calculation. Rehabilitative measures were considered, including a study on overlay and fill remediation. A series of regression equations was proposed that relates void size, layer stiffness, and the overlay thickness required to reduce the stress to its original pre-void level. The effect of the void on 3D pavement crack propagation was also studied under a single axle load. The amplification of the stress intensity was shown to be high but could be mitigated substantially if a stiff material is used to fill the void and impede crack growth. The pavement system was modeled using the commercial finite element program Abaqus™. More than 10,000 runs were executed to perform the following analyses: stress analysis of subsurface voids, E-moduli back-calculation of the base layer, pavement damage calculations for Beaumont, TX, overlay thickness estimations, and mode I crack analysis.
The results indicate that the stress and stress intensity are, on average, amplified considerably (by 80% and 150%, respectively) by the presence of the void, and more severely in a bonded pavement system than in an un-bonded system. The sub-surface void also significantly affects the layer-moduli back-calculation: the equivalent moduli of the layers are reduced considerably when a sub-surface void is present. However, the results indicate that the back-calculated moduli derived using surface-deflection and longitudinal-stress basins did not yield equivalent layer moduli under mechanical loading; the back-calculated deflection-based moduli were larger than the stress-based moduli, leading to stress calculations that were lower than those found in the real system.
NASA Astrophysics Data System (ADS)
Shiau, Lie-Ding
2016-09-01
The pre-exponential factor and interfacial energy obtained from the metastable zone width (MSZW) data using the integral method proposed by Shiau and Lu [1] are compared in this study with those obtained from the induction time data using the conventional method (t_i ∝ J⁻¹) for three crystallization systems: potassium sulfate in water in a 200 mL vessel, borax decahydrate in water in a 100 mL vessel, and butyl paraben in ethanol in a 5 mL tube. The results indicate that the pre-exponential factor and interfacial energy calculated from the induction time data based on classical nucleation theory are consistent with those calculated from the MSZW data using the same detection technique for the studied systems.
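The conventional induction-time analysis mentioned above (t_i ∝ J⁻¹ with classical nucleation theory, J = A·exp(−B/ln²S)) amounts to a linear fit of ln t_i against 1/ln²S. The helper below is a hypothetical illustration, not the authors' code; the temperature and molecular volume passed in the test are placeholder inputs:

```python
import math

def cnt_from_induction_times(S_list, t_list, T, v_m):
    """Fit ln(t_i) = -ln(A) + B/ln^2(S), which follows from t_i ∝ 1/J and
    J = A*exp(-B/ln^2 S). Returns (B, A, gamma), where gamma is recovered
    from B = 16*pi*gamma^3*v_m^2 / (3*(kB*T)^3); v_m is the molecular
    volume in m^3. Names and layout are illustrative."""
    xs = [1.0 / math.log(S) ** 2 for S in S_list]
    ys = [math.log(t) for t in t_list]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    # least-squares slope B and intercept -ln(A)
    B = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    A = math.exp(-(ybar - B * xbar))
    kB = 1.380649e-23  # Boltzmann constant, J/K
    gamma = (3.0 * B * (kB * T) ** 3 / (16.0 * math.pi * v_m ** 2)) ** (1.0 / 3.0)
    return B, A, gamma
```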
Lokajová, Jana; Railila, Annika; King, Alistair W T; Wiedmer, Susanne K
2013-09-20
The distribution constants of some analytes closely connected to the petrochemical industry, between an aqueous phase and a phosphonium ionic liquid phase, were determined by ionic liquid micellar electrokinetic chromatography (MEKC). The phosphonium ionic liquids studied were the water-soluble tributyl(tetradecyl)phosphonium with chloride or acetate as the counter ion. The retention factors were calculated and used to determine the distribution constants. Calculating the retention factors required the electrophoretic mobilities of the ionic liquids; thus, an iterative process based on a homologous series of alkyl benzoates was adopted. Calculation of the distribution constants required information on the phase ratio of the systems, for which the critical micelle concentrations (CMCs) of the ionic liquids were needed. The CMCs were calculated using a method based on PeakMaster simulations, using the electrophoretic mobilities of system peaks. The resulting distribution constants for the neutral analytes between the ionic liquid and the aqueous (buffer) phase were compared with octanol-water partitioning coefficients. The results indicate that factors other than simple hydrophobic interactions affect the distribution of analytes between the phases. Copyright © 2013 Elsevier B.V. All rights reserved.
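A hedged sketch of the two calculation steps described above: the MEKC retention factor from migration times, and a distribution constant via a phase ratio estimated from the CMC. The phase-ratio expression used here is one common approximation and may differ from the one the authors used:

```python
def retention_factor(t0, tr, tmc):
    """MEKC retention factor k = (tr - t0) / (t0 * (1 - tr/tmc)), where
    t0 is the EOF marker time, tr the analyte migration time, and tmc
    the micelle (here, ionic liquid aggregate) marker time."""
    return (tr - t0) / (t0 * (1.0 - tr / tmc))

def distribution_constant(k, c_total, cmc, v_partial):
    """K_D = k / beta with phase ratio beta ~ v*(c - CMC) / (1 - v*(c - CMC));
    v_partial is the partial molar volume of the ionic liquid (L/mol).
    This phase-ratio form is a common approximation, not necessarily the
    authors' exact expression."""
    vol_frac = v_partial * (c_total - cmc)
    beta = vol_frac / (1.0 - vol_frac)
    return k / beta
```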
Implementation of Online Promethee Method for Poor Family Change Rate Calculation
NASA Astrophysics Data System (ADS)
Aji, Dhady Lukito; Suryono; Widodo, Catur Edi
2018-02-01
This research implemented an online calculation of the poor-family change rate using the Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE). The system is useful for monitoring poverty in a region as well as for administrative services related to the poverty rate. It consists of client computers and servers connected via the internet. Poor-family residence data were obtained from the government. In addition, survey data are entered through the client computer in each administrative village, along with 23 input criteria established by the government. The PROMETHEE method is used to evaluate the poverty value, and its weight is used to determine poverty status. The PROMETHEE output can also be used to rank the poverty of the registered population on the server based on the net flow value. The poverty change rate is calculated by comparing the current poverty rate with the previous one. The results can be viewed online and in real time on the server as numbers and graphs. The test results show that the system can classify poverty status, calculate the poverty change rate, and determine the poverty value and ranking of each resident.
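The PROMETHEE ranking step described above (net outranking flows from weighted pairwise preferences) can be sketched as follows. This is a generic PROMETHEE II implementation with the simple "usual" preference function, not the system's actual 23-criterion configuration:

```python
def promethee_net_flows(scores, weights, pref=lambda d: 1.0 if d > 0 else 0.0):
    """PROMETHEE II net outranking flows.
    scores[a][j]: performance of alternative a on criterion j (higher = better);
    weights[j]: criterion weights; pref: preference function of the score
    difference (the 'usual' step function by default)."""
    n = len(scores)
    phi = [0.0] * n
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            # aggregated preference index of a over b
            pi_ab = sum(w * pref(scores[a][j] - scores[b][j])
                        for j, w in enumerate(weights))
            phi[a] += pi_ab / (n - 1)   # contributes to a's leaving flow
            phi[b] -= pi_ab / (n - 1)   # contributes to b's entering flow
    return phi
```

Ranking the population by descending net flow φ reproduces the "ranking based on the netflow value" described in the abstract.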
AQUIS: A PC-based source information manager
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, A.E.; Huber, C.C.; Tschanz, J.
1993-05-01
The Air Quality Utility Information System (AQUIS) was developed to calculate emissions and track them along with related information about sources, stacks, controls, and permits. The system runs on IBM-compatible personal computers with dBASE IV and tracks more than 1,200 data items distributed among various source categories. AQUIS is currently operating at 11 US Air Force facilities, which have up to 1,000 sources, and two headquarters. The system provides a flexible reporting capability that permits users who are unfamiliar with the database structure to design and prepare reports containing user-specified information. In addition to the criteria pollutants, AQUIS calculates compound-specific emissions and allows users to enter their own emission estimates.
Polarization-resolved sensing with tilted fiber Bragg gratings: theory and limits of detection
NASA Astrophysics Data System (ADS)
Bialiayeu, Aliaksandr; Ianoul, Anatoli; Albert, Jacques
2015-08-01
Polarization-based sensing with tilted fiber Bragg grating (TFBG) sensors is analysed theoretically by two alternative approaches. The first method is based on tracking the grating transmission for two orthogonal states of linearly polarized light that are extracted from the measured Jones matrix or Stokes vectors of the TFBG transmission spectra. The second method is based on measurements along the system's principal axes and the polarization dependent loss (PDL) parameter, also calculated from measured data. It is shown that the frequent crossing of the Jones matrix eigenvalues as a function of wavelength leads to a non-physical interchange of the calculated principal axes; a method to remove this unwanted mathematical artefact and to restore the order of the system eigenvalues and the corresponding principal axes is provided. A comparison of the two approaches reveals that the PDL method provides a smaller standard deviation and therefore a lower limit of detection in refractometric sensing. Furthermore, the polarization analysis of the measured spectra allows for the identification of the principal states of polarization of the sensor system and consequently for the calculation of the transmission spectrum for any incident polarization state. The stability of the orientation of the system's principal axes is also investigated as a function of wavelength.
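The PDL parameter referred to above is conventionally obtained from the eigenvalues of J†J, where J is the measured Jones matrix. A minimal sketch, assuming this standard definition rather than the authors' exact processing chain:

```python
import numpy as np

def pdl_db(J):
    """Polarization dependent loss of a 2x2 Jones matrix J:
    PDL = 10*log10(Tmax/Tmin), where Tmax and Tmin are the extreme power
    transmissions, i.e. the eigenvalues of the Hermitian matrix J^H J."""
    ev = np.linalg.eigvalsh(J.conj().T @ J)  # real eigenvalues, ascending
    return 10.0 * np.log10(ev[-1] / ev[0])
```

Because J†J is unchanged by a unitary matrix applied after J, the PDL is insensitive to lossless polarization rotations between sensor and detector.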
Non-Markovianity quantifier of an arbitrary quantum process
NASA Astrophysics Data System (ADS)
Debarba, Tiago; Fanchini, Felipe F.
2017-12-01
Calculating the degree of non-Markovianity of a quantum process for a high-dimensional system is a difficult task, given the complex maximization problems involved. Focusing on the entanglement-based measure of non-Markovianity, we propose a numerically feasible quantifier for finite-dimensional systems. We define the non-Markovianity measure in terms of a class of entanglement quantifiers named witnessed entanglement, which allows us to write several entanglement-based measures of non-Markovianity in a unique formalism. In this formalism, we show that the non-Markovianity, in a given time interval, can be witnessed by calculating the expectation value of an observable, making it attractive for experimental investigations. Following this property, we introduce a quantifier based on the entanglement witness over an interval of time and show that it is a bona fide measure of non-Markovianity. In our example, we use the generalized robustness of entanglement, an entanglement measure that can be readily calculated by a semidefinite programming method, to study impurity atoms coupled to a Bose-Einstein condensate.
Calculation of nuclear spin-spin coupling constants using frozen density embedding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Götz, Andreas W., E-mail: agoetz@sdsc.edu; Autschbach, Jochen; Visscher, Lucas, E-mail: visscher@chem.vu.nl
2014-03-14
We present a method for a subsystem-based calculation of indirect nuclear spin-spin coupling tensors within the framework of current-spin-density-functional theory. Our approach is based on the frozen-density embedding scheme within density-functional theory and extends a previously reported subsystem-based approach for the calculation of nuclear magnetic resonance shielding tensors to magnetic fields which couple not only to orbital but also to spin degrees of freedom. This leads to a formulation in which the electron density, the induced paramagnetic current, and the induced spin-magnetization density are calculated separately for the individual subsystems. This is particularly useful for the inclusion of environmental effects in the calculation of nuclear spin-spin coupling constants. Neglecting the induced paramagnetic current and spin-magnetization density in the environment due to the magnetic moments of the coupled nuclei leads to a very efficient method in which the computationally expensive response calculation has to be performed only for the subsystem of interest. We show that this approach leads to very good results for the calculation of solvent-induced shifts of nuclear spin-spin coupling constants in hydrogen-bonded systems. Also for systems with stronger interactions, frozen-density embedding performs remarkably well, given the approximate nature of currently available functionals for the non-additive kinetic energy. As an example we show results for methylmercury halides, which exhibit an exceptionally large shift of the one-bond coupling constants between ¹⁹⁹Hg and ¹³C upon coordination of dimethylsulfoxide solvent molecules.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, S; Kaurin, D; Sweeney, L
2014-06-01
Purpose: Our institution uses a manual laser-based system for primary localization and verification during radiation treatment of left-sided breast cancer patients using deep inspiration breath hold (DIBH). This primary system was compared with sternum-placed Calypso® beacons (Varian Medical Systems, CA). Only intact breast patients are considered for this analysis. Methods: During computed tomography (CT) simulation, patients have a BB and Calypso® surface beacons positioned sternally and marked for free-breathing and DIBH CTs. During dosimetry planning, the BB longitudinal displacement between the free-breathing and DIBH CTs determines the laser mark (BH mark) location. Calypso® beacon locations from the DIBH CT are entered at the Tracking Station. During Linac simulation and treatment, patients inhale until the cross-hair and/or lasers coincide with the BH mark, which can be seen using our high-quality cameras (Pelco, CA). Daily Calypso® displacement values (difference from the DIBH-CT-based plan) are recorded. The displacement mean and standard deviation were calculated for each patient (77 patients, 1845 sessions). An aggregate mean and standard deviation were calculated, weighted by the number of patient fractions. Some patients were shifted based on MV ports. A second data set was calculated with Calypso® values corrected by these shifts. Results: Mean displacement values indicate agreement within 1±3 mm, with improvement for the shifted data (Table). Conclusion: Both the unshifted and shifted data sets show that the Calypso® system coincides with the laser system within 1±3 mm, demonstrating that either localization/verification system will result in similar clinical outcomes. Displacement value uncertainty unilaterally reduces when shifts are taken into account.
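The fraction-weighted aggregate statistics described in the Methods can be sketched as a pooled mean and variance. This is a plausible reconstruction of the weighting, not the authors' actual script:

```python
def weighted_aggregate(means, sds, n_fractions):
    """Aggregate mean and SD across patients, weighted by fraction counts.
    The pooled variance combines within-patient variance with the
    between-patient spread about the aggregate mean (an assumed form)."""
    N = sum(n_fractions)
    mu = sum(n * m for n, m in zip(n_fractions, means)) / N
    var = sum(n * (s * s + (m - mu) ** 2)
              for n, m, s in zip(n_fractions, means, sds)) / N
    return mu, var ** 0.5
```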
NASA Astrophysics Data System (ADS)
Ueunten, Kevin K.
With the scheduled 30 September 2015 integration of Unmanned Aerial Systems (UAS) into the national airspace, the Federal Aviation Administration (FAA) is concerned with UAS capabilities to sense and avoid conflicts. Since the operator is outside the cockpit, the proposed collision awareness plugin (CAPlugin), based on probability and error propagation, conservatively predicts potential conflicts with other aircraft and airspaces, thus increasing the operator's situational awareness. The conflict predictions are calculated using a forward state estimator (FSE) and a conflict calculator. Predicting an aircraft's position, modeled as a mixed Gaussian distribution, is the FSE's responsibility. Furthermore, the FSE supports aircraft engaged in the following three flight modes: free flight, flight-path following, and orbits. The conflict calculator uses the FSE result to calculate the conflict probability between an aircraft and an airspace or another aircraft. Finally, the CAPlugin determines the highest conflict probability and warns the operator. In addition to discussing FSE free flight, FSE orbits, and the airspace conflict calculator, this thesis describes how each algorithm is implemented and tested. Lastly, two simulations demonstrate the CAPlugin's capabilities.
New Approaches and Applications for Monte Carlo Perturbation Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aufiero, Manuele; Bidaud, Adrien; Kotlyar, Dan
2017-02-01
This paper presents some recent advancements in the extension of Monte Carlo Perturbation Theory methodologies and applications. In particular, the discussed problems involve burnup calculations, perturbation calculations based on continuous energy functions, and Monte Carlo Perturbation Theory in loosely coupled systems.
A FINITE-DIFFERENCE, DISCRETE-WAVENUMBER METHOD FOR CALCULATING RADAR TRACES
A hybrid of the finite-difference method and the discrete-wavenumber method is developed to calculate radar traces. The method is based on a three-dimensional model defined in the Cartesian coordinate system; the electromagnetic properties of the model are symmetric with respect...
NASA Technical Reports Server (NTRS)
Erickson, W. K.; Hofman, L. B.; Donovan, W. E.
1984-01-01
Difficulties regarding the digital image analysis of remotely sensed imagery can arise in connection with the extensive calculations required. In the past, an expensive large to medium mainframe computer system was needed for performing these calculations. For image-processing applications smaller minicomputer-based systems are now used by many organizations. The costs for such systems are still in the range from $100K to $300K. Recently, as a result of new developments, the use of low-cost microcomputers for image processing and display systems appeared to have become feasible. These developments are related to the advent of the 16-bit microprocessor and the concept of the microcomputer workstation. Earlier 8-bit microcomputer-based image processing systems are briefly examined, and a computer workstation architecture is discussed. Attention is given to a microcomputer workstation developed by Stanford University, and the design and implementation of a workstation network.
Impact of the circulation system on the energy balance of the building
NASA Astrophysics Data System (ADS)
Polarczyk, Iwona; Fijewski, Michał
2017-11-01
The efficiency of the hot water system is one of the factors necessary to determine the overall efficiency of a building, and from a calculation point of view it is easy to obtain. This article presents how the operation of the circulation system influences the efficiency of the domestic hot water system. Differences in the results obtained by various calculation methods are presented, and measurement results are also taken into account. Particular attention is paid to the possibility of using an ultrasonic flowmeter for measuring the flow and energy.
Design and Analysis of Hydrostatic Transmission System
NASA Astrophysics Data System (ADS)
Mistry, Kayzad A.; Patel, Bhaumikkumar A.; Patel, Dhruvin J.; Parsana, Parth M.; Patel, Jitendra P.
2018-02-01
This study develops a hydraulic circuit to drive a conveying system handling heavy and delicate loads. Various safety circuits have been added to ensure stable operation at high pressure and precise control. We show the calculation procedure based on an arbitrarily selected load, and the circuit design and the calculations for the various components used are presented along with a simulation of the system. The results show that the circuit operates stably and is efficient enough to transmit heavy loads. With this information, one can design hydrostatic circuits for various heavy loading conditions.
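Sizing calculations for a hydrostatic circuit of this kind typically start from pump delivery and hydraulic power. A minimal sketch using standard metric rules of thumb; the displacement, speed, and efficiency values used in any example are arbitrary placeholders, not the study's selected load:

```python
def pump_flow_lpm(disp_cc_per_rev, rpm, vol_eff=0.95):
    """Pump delivery Q = Vg * n * eta_vol, in L/min, for a displacement
    Vg in cc/rev and shaft speed n in rev/min."""
    return disp_cc_per_rev * rpm * vol_eff / 1000.0

def drive_power_kw(pressure_bar, flow_lpm, overall_eff=0.85):
    """Required input power P = p*Q/600 (kW for p in bar, Q in L/min),
    derated by an assumed overall efficiency."""
    return pressure_bar * flow_lpm / (600.0 * overall_eff)
```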
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, H; Guerrero, M; Chen, S
Purpose: The TG-71 report was published in 2014 to present standardized methodologies for MU calculations and the determination of dosimetric quantities. This work explores the clinical implementation of a TG-71-based electron MU calculation algorithm and compares it with a recently released commercial secondary calculation program, Mobius3D (Mobius Medical System, LP). Methods: TG-71 electron dosimetry data were acquired, and MU calculations were performed based on the recently published TG-71 report. The report's formalism for extended SSD using air-gap corrections was used. The dosimetric quantities, such as PDD, output factors, and f-air factors, were incorporated into an organized databook that facilitates data access and subsequent computation. The Mobius3D program utilizes a pencil beam redefinition algorithm. To verify the accuracy of the calculations, five customized rectangular cutouts of different sizes (6×12, 4×12, 6×8, 4×8, and 3×6 cm²) were made. Calculations were compared to each other and to point-dose measurements for electron beams of energy 6, 9, 12, 16, and 20 MeV. Each calculation/measurement point was at the depth of maximum dose for each cutout in a 10×10 cm² or 15×15 cm² applicator with SSDs of 100 cm and 110 cm. Validation measurements were made with a CC04 ion chamber in a solid water phantom for electron beams of energy 9 and 16 MeV. Results: Differences between TG-71 and the commercial system relative to measurements were within 3% for most combinations of electron energy, cutout size, and SSD. A 5.6% difference between the two calculation methods was found only for the 6 MeV electron beam with the 3×6 cm² cutout in the 10×10 cm² applicator at 110 cm SSD. Both the TG-71 and the commercial calculations show good consistency with chamber measurements: for the 5 cutouts, <1% difference at 100 cm SSD, and 0.5-2.7% at 110 cm SSD.
Conclusions: Based on comparisons with measurements, the TG-71-based computation method and the Mobius3D program both produce reasonably accurate MU calculations for electron-beam therapy.
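The extended-SSD formalism referred to above combines a reference output, an applicator/cutout output factor, an air-gap factor, and an effective-SSD inverse-square correction. A hedged sketch of that MU computation; the function name, argument layout, and numbers in any example are illustrative, not taken from the report or this work:

```python
def electron_mu(dose_cgy, ref_output_cgy_per_mu, output_factor, f_air,
                ssd_eff, dmax, gap_cm):
    """TG-71-style electron MU with an effective-SSD air-gap correction:
    MU = D / (D'_ref * OF * f_air * ISF),
    ISF = ((SSD_eff + dmax) / (SSD_eff + gap + dmax))^2.
    All argument names are illustrative placeholders."""
    isf = ((ssd_eff + dmax) / (ssd_eff + gap_cm + dmax)) ** 2
    return dose_cgy / (ref_output_cgy_per_mu * output_factor * f_air * isf)
```

At zero gap the inverse-square factor is 1 and the formula reduces to dose divided by the calibrated output times the cutout's output factor; extending the SSD increases the required MU.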
Development of High Precision Tsunami Runup Calculation Method Coupled with Structure Analysis
NASA Astrophysics Data System (ADS)
Arikawa, Taro; Seki, Katsumi; Chida, Yu; Takagawa, Tomohiro; Shimosako, Kenichiro
2017-04-01
The 2011 Great East Japan Earthquake (GEJE) has shown that tsunami disasters are not limited to inundation damage in a specified region, but may destroy a wide area, causing a major disaster. Evaluating standing land structures and damage to them requires highly precise evaluation of three-dimensional fluid motion - an expensive process. Our research goals were thus to develop a STOC-CADMAS coupling (Arikawa and Tomita, 2016) with structural analysis (Arikawa et al., 2009) to efficiently calculate all stages from the tsunami source to runup, including the deformation of structures, and to verify its applicability. We also investigated the stability of the breakwaters at Kamaishi Bay. Fig. 1 shows the whole calculation system. The STOC-ML simulator approximates pressure by hydrostatic pressure and calculates the wave profiles based on an equation of continuity, thereby lowering calculation cost; it primarily calculates from the epicenter to the shallow region. STOC-IC solves pressure based on a Poisson equation to account for a shallower, more complex topography, but reduces computation cost slightly by setting the water surface based on an equation of continuity, and calculates the area near a port. CS3D solves a Navier-Stokes equation and sets the water surface by VOF to deal with the runup area, with its complex surfaces of overflows and bores. STR performs the structural analysis, including the geotechnical analysis, based on Biot's formulation. By coupling these, the system efficiently calculates the tsunami profile from propagation to inundation. The numerical results were compared with the physical experiments of Arikawa et al. (2012) and showed good agreement. Finally, the system was applied to the local situation at Kamaishi Bay. Most of the breakwaters were washed away in the calculation, a situation similar to the actual damage at Kamaishi Bay. REFERENCES: T. Arikawa and T.
Tomita (2016): "Development of High Precision Tsunami Runup Calculation Method Based on a Hierarchical Simulation", Journal of Disaster Research, Vol. 11, No. 4. T. Arikawa, K. Hamaguchi, K. Kitagawa, T. Suzuki (2009): "Development of Numerical Wave Tank Coupled with Structure Analysis Based on FEM", Journal of JSCE, Ser. B2 (Coastal Engineering), Vol. 65, No. 1. T. Arikawa et al. (2012): "Failure Mechanism of Kamaishi Breakwaters due to the Great East Japan Earthquake Tsunami", 33rd International Conference on Coastal Engineering, No. 1191.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Sei-Kwon; Yoon, Jai-Woong; Hwang, Taejin
A metallic contact eye shield has sometimes been used for eyelid treatment, but the dose distribution has never been reported for a patient case. This study aimed to show the shield-incorporated, CT-based dose distribution using the Pinnacle system and Monte Carlo (MC) calculation for 3 patient cases. For an artifact-free CT scan, an acrylic shield machined to the same size as the tungsten shield was used. For the MC calculation, BEAMnrc and DOSXYZnrc were used for the 6-MeV electron beam of the Varian 21EX, in which material information for the tungsten, stainless steel, and aluminum of the eye shield was used. The same plan was generated on the Pinnacle system and the two were compared. The use of the acrylic shield produced clear CT images, enabling delineation of the regions of interest, and yielded a CT-based dose calculation for the metallic shield. Both the MC and the Pinnacle systems showed a similar dose distribution downstream of the eye shield, reflecting the blocking effect of the metallic eye shield. The major difference between the MC and the Pinnacle results was the target eyelid dose upstream of the shield: the Pinnacle system underestimated the dose by 19 to 28% and 11 to 18% for the maximum and the mean doses, respectively. The pattern of dose difference between the MC and the Pinnacle systems was similar to that in the previous phantom study. In conclusion, the metallic eye shield was successfully incorporated into the CT-based planning, and accurate dose calculation requires MC simulation.
Methods and systems for monitoring a solid-liquid interface
Stoddard, Nathan G.; Clark, Roger F.; Kary, Tim
2010-07-20
Methods and systems are provided for monitoring a solid-liquid interface, including: providing a vessel configured to contain an at least partially melted material; detecting radiation reflected from a surface of a liquid portion of the at least partially melted material that is parallel with the liquid surface; measuring a disturbance on the surface; calculating at least one frequency associated with the disturbance; and determining a thickness of the liquid portion based on the at least one frequency, wherein the thickness is calculated from a relation (rendered in the original patent as equation image ##EQU00001##) in which g is the gravitational constant, w is the horizontal width of the liquid, and f is the at least one frequency.
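The patent's equation itself is an image (##EQU00001##) and is not reproduced in the text, but the stated variables (g, w, f) are consistent with inverting a standing-gravity-wave dispersion relation for depth. The sketch below assumes the fundamental mode k = π/w and ω² = g·k·tanh(k·d); this is an illustrative reconstruction, not the patented formula:

```python
import math

def liquid_depth(f_hz, width_m, g=9.81):
    """Depth d of the liquid layer from the fundamental standing-wave
    frequency f, by inverting omega^2 = g*k*tanh(k*d) with k = pi/w.
    Assumed form only; the patent's exact equation is an image."""
    k = math.pi / width_m
    x = (2.0 * math.pi * f_hz) ** 2 / (g * k)  # equals tanh(k*d)
    if x >= 1.0:
        raise ValueError("frequency exceeds the deep-water limit for this width")
    return math.atanh(x) / k
```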
NASA Astrophysics Data System (ADS)
Lu, Zenghai; Kasaragod, Deepa K.; Matcher, Stephen J.
2012-01-01
We demonstrate theoretically and experimentally that the phase retardance and relative optic-axis orientation of a sample can be calculated without prior knowledge of the actual value of the phase modulation amplitude when using a polarization-sensitive optical coherence tomography system based on continuous polarization modulation (CPM-PS-OCT). We also demonstrate that the sample Jones matrix can be calculated at any values of the phase modulation amplitude in a reasonable range depending on the system effective signal-to-noise ratio. This has fundamental importance for the development of clinical systems by simplifying the polarization modulator drive instrumentation and eliminating its calibration procedure. This was validated on measurements of a three-quarter waveplate and an equine tendon sample by a fiber-based swept-source CPM-PS-OCT system.
NASA Astrophysics Data System (ADS)
Zhou, Yuzhi; Wang, Han; Liu, Yu; Gao, Xingyu; Song, Haifeng
2018-03-01
The Kerker preconditioner, based on the dielectric function of the homogeneous electron gas, is designed to accelerate the self-consistent field (SCF) iteration in density functional theory calculations. However, a question remains regarding its applicability to inhomogeneous systems. We develop a modified Kerker preconditioning scheme which captures the long-range screening behavior of inhomogeneous systems and thus improves SCF convergence. Its effectiveness and efficiency are shown by tests on long-z slabs of metals, insulators, and metal-insulator contacts. For situations without a priori knowledge of the system, we design an a posteriori indicator to monitor whether the preconditioner has suppressed charge sloshing during the iterations. Based on the a posteriori indicator, we demonstrate two schemes for self-adaptive configuration of the SCF iteration.
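A plain (unmodified) Kerker-preconditioned mixing step, of the kind the paper builds on, damps the long-wavelength part of the density residual in reciprocal space. A one-dimensional sketch with illustrative parameter values (α, q0, and the box length L are placeholders):

```python
import numpy as np

def kerker_mix(rho_in, rho_out, alpha=0.8, q0=1.5, L=10.0):
    """One Kerker-preconditioned linear-mixing SCF step on a periodic 1D grid.
    The residual is filtered by G^2/(G^2 + q0^2), which vanishes as G -> 0,
    suppressing long-wavelength 'charge sloshing'. q0 plays the role of a
    Thomas-Fermi-like screening wavevector; all values are illustrative."""
    n = rho_in.size
    G = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)  # reciprocal-space grid
    resid_G = np.fft.fft(rho_out - rho_in)
    filt = G ** 2 / (G ** 2 + q0 ** 2)            # -> 0 at G = 0
    return rho_in + alpha * np.real(np.fft.ifft(filt * resid_G))
```

Because the filter is zero at G = 0, a uniform residual produces no update, which also preserves the total charge during mixing.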
Oxygen vacancy effects in HfO2-based resistive switching memory: First principle study
NASA Astrophysics Data System (ADS)
Dai, Yuehua; Pan, Zhiyong; Wang, Feifei; Li, Xiaofeng
2016-08-01
This work investigated the shape and orientation of oxygen vacancy (Vo) clusters in HfO2-based resistive random access memory (ReRAM) using the first-principles method based on density functional theory. First, the formation energy of different local Vo clusters was calculated in four established orientation systems. Then, the optimal orientation and charge-conductor shape were identified by comparing the isosurface plots of partial charge density, the formation energy, and the highest isosurface value of the oxygen vacancy. The calculated results revealed that the [010] orientation was the optimal migration path of Vo, and that the shape of system D4 was the best charge conductor in HfO2, which effectively influences the SET voltage, the forming voltage, and the ON/OFF ratio of the device. Afterwards, the PDOS of Hf near the Vo and the total density of states of system D4_010 were obtained, revealing that the charge conductor was composed of oxygen vacancies rather than metallic Hf. Furthermore, the migration barriers of Vo hopping between neighboring unit cells were calculated along four different orientations; the motion was confirmed to be along the [010] orientation. The optimal circulation path for Vo migration in the HfO2 super-cell was obtained.
NASA Technical Reports Server (NTRS)
Fishbach, L. H.
1979-01-01
The computational techniques utilized to determine the optimum propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements are described. The characteristics and use of the following computer codes are discussed: (1) NNEP - a very general cycle analysis code that can assemble an arbitrary matrix of fans, turbines, ducts, shafts, etc., into a complete gas turbine engine and compute on- and off-design thermodynamic performance; (2) WATE - a preliminary design procedure for calculating engine weight using the component characteristics determined by NNEP; (3) POD DRG - a table look-up program to calculate wave and friction drag of nacelles; (4) LIFCYC - a computer code developed to calculate life cycle costs of engines based on the output from WATE; and (5) INSTAL - a computer code developed to calculate installation effects, inlet performance and inlet weight. Examples are given to illustrate how these computer techniques can be applied to analyze and optimize propulsion system fuel consumption, weight, and cost for representative types of aircraft and missions.
Using time-dependent density functional theory in real time for calculating electronic transport
NASA Astrophysics Data System (ADS)
Schaffhauser, Philipp; Kümmel, Stephan
2016-01-01
We present a scheme for calculating electronic transport within the propagation approach to time-dependent density functional theory. Our scheme is based on solving the time-dependent Kohn-Sham equations on grids in real space and real time for a finite system. We use absorbing and antiabsorbing boundaries for simulating the coupling to a source and a drain. The boundaries are designed to minimize the effects of quantum-mechanical reflections and electrical polarization build-up, which are the major obstacles when calculating transport by applying an external bias to a finite system. We show that the scheme can readily be applied to real molecules by calculating the current through a conjugated molecule as a function of time. By comparing to literature results for the conjugated molecule and to analytic results for a one-dimensional model system we demonstrate the reliability of the concept.
[Research on the emission spectrum of NO molecule's γ-band system by corona discharge].
Zhai, Xiao-dong; Ding, Yan-jun; Peng, Zhi-min; Luo, Rui
2012-05-01
The optical emission spectrum of the gamma-band system of the NO molecule, A2 sigma+ --> X2 pi(r), has been analyzed and calculated based on the energy structure of the NO molecule's doublet states. By employing the theory of diatomic molecular spectra, some key parameters of the equations for the radiative transition intensity were evaluated theoretically, including the potentials of the upper and lower doublet states of the NO molecule, the electronic transition moments calculated using the r-centroid approximation method, and the Einstein coefficients of different vibrational and rotational levels. The simulated spectrum of the gamma-band system was calculated as a function of vibrational and rotational temperature. The measured spectra were obtained from corona discharge experiments with NO and N2 and compared with the theoretical ones. The vibrational and rotational temperatures were determined approximately by fitting the measured spectral intensities to the calculated ones.
Alchemical Free Energy Calculations for Nucleotide Mutations in Protein-DNA Complexes.
Gapsys, Vytautas; de Groot, Bert L
2017-12-12
Nucleotide-sequence-dependent interactions between proteins and DNA are responsible for a wide range of gene regulatory functions. Accurate and generalizable methods to evaluate the strength of protein-DNA binding have long been sought. While numerous computational approaches have been developed, most of them require fitting parameters to experimental data to a certain degree, e.g., machine learning algorithms or knowledge-based statistical potentials. Molecular-dynamics-based free energy calculations offer a robust, system-independent, first-principles-based method to calculate free energy differences upon nucleotide mutation. We present an automated procedure to set up alchemical MD-based calculations to evaluate free energy changes occurring as the result of a nucleotide mutation in DNA. We used these methods to perform a large-scale mutation scan comprising 397 nucleotide mutation cases in 16 protein-DNA complexes. The obtained prediction accuracy reaches 5.6 kJ/mol average unsigned deviation from experiment with a correlation coefficient of 0.57 with respect to the experimentally measured free energies. Overall, the first-principles-based approach performed on par with the molecular modeling approaches Rosetta and FoldX. Subsequently, we utilized the MD-based free energy calculations to construct protein-DNA binding profiles for the zinc finger protein Zif268. The calculation results compare remarkably well with the experimentally determined binding profiles. The software automating the structure and topology setup for alchemical calculations is a part of the pmx package; the utilities have also been made available online at http://pmx.mpibpc.mpg.de/dna_webserver.html .
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobos, A. P.
2014-09-01
The NREL PVWatts calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and includes several built-in parameters that are hidden from the user. This technical reference describes the sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimate. This reference is applicable to the significantly revised version of PVWatts released by NREL in 2014.
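As a rough illustration of the kind of calculation chain PVWatts performs (not NREL's actual sub-models, which are considerably more detailed), a toy estimate might scale nameplate DC capacity by irradiance and then apply fixed losses; the function name and default figures below are assumptions:

```python
def estimate_ac_output_kw(poa_irradiance_w_m2, system_size_kwdc,
                          system_losses=0.14, inverter_eff=0.96):
    """Toy PVWatts-style estimate: DC power scales with plane-of-array
    irradiance relative to the 1000 W/m^2 rating condition, then fixed
    system losses and a flat inverter efficiency are applied. The default
    loss and efficiency figures are illustrative, not NREL's values."""
    dc_kw = system_size_kwdc * (poa_irradiance_w_m2 / 1000.0)
    return dc_kw * (1.0 - system_losses) * inverter_eff
```

At rating-condition irradiance, a 4 kWdc array under these assumed losses yields roughly 3.3 kW AC; halving the irradiance halves the output.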
A trunk ranging system based on binocular stereo vision
NASA Astrophysics Data System (ADS)
Zhao, Xixuan; Kan, Jiangming
2017-07-01
Trunk ranging is an essential function for autonomous forestry robots. Traditional trunk ranging systems based on personal computers are not convenient in practical application. This paper examines the implementation of a trunk ranging system based on binocular vision theory via TI's DaVinci DM37x system. The system is smaller and more reliable than one implemented on a personal computer. It calculates the three-dimensional information from the images acquired by binocular cameras, producing the targeting and ranging results. The experimental results show that the measurement error is small and the system design is feasible for autonomous forestry robots.
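The core geometric step of binocular ranging is recovering depth from disparity, Z = f·B/d. A minimal sketch with hypothetical camera parameters (the paper's actual calibration and matching pipeline is not reproduced):

```python
def trunk_distance_m(focal_px, baseline_m, disparity_px):
    """Depth from binocular disparity: Z = f * B / d.

    focal_px:     focal length expressed in pixels
    baseline_m:   separation between the two cameras in meters
    disparity_px: horizontal pixel shift of the trunk between the views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite range")
    return focal_px * baseline_m / disparity_px
```

For example, an 800 px focal length, 12 cm baseline, and 32 px disparity place the trunk 3 m away; range resolution degrades as disparity shrinks with distance.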
NASA Astrophysics Data System (ADS)
Tao, Jiangchuan; Zhao, Chunsheng; Kuang, Ye; Zhao, Gang; Shen, Chuanyang; Yu, Yingli; Bian, Yuxuan; Xu, Wanyun
2018-02-01
The number concentration of cloud condensation nuclei (CCN) plays a fundamental role in cloud physics. Instruments for the direct measurement of CCN number concentration (NCCN) based on chamber technology are complex and costly; thus a simpler way of measuring NCCN is needed. In this study, a new method for NCCN calculation based on measurements of a three-wavelength humidified nephelometer system is proposed. A three-wavelength humidified nephelometer system can measure the aerosol light-scattering coefficient (σsp) at three wavelengths and the light-scattering enhancement factor (fRH). The Ångström exponent (Å) inferred from σsp at the three wavelengths provides information on the mean predominant aerosol size, and the hygroscopicity parameter (κ) can be calculated from the combination of fRH and Å. Given this, a lookup table that includes σsp, κ and Å is established to predict NCCN. Owing to the preconditions of its derivation, this new method is not suitable for externally mixed particles, large particles (e.g., dust and sea salt) or fresh aerosol particles. The method is validated against direct measurements of NCCN using a CCN counter on the North China Plain. Results show that relative deviations between calculated and measured NCCN are within 30 %, confirming the robustness of the method. This method enables simpler NCCN measurements because the humidified nephelometer system is easily operated and stable. Compared with the method using a CCN counter, another advantage of this newly proposed method is that it can obtain NCCN at lower supersaturations in the ambient atmosphere.
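The first step of the retrieval, inferring the Ångström exponent from σsp at a pair of wavelengths, can be sketched as below; the κ retrieval and the NCCN lookup table themselves are not reproduced here, and the wavelength pair in the example is an assumption:

```python
import math

def angstrom_exponent(sigma_1, sigma_2, wavelength_1, wavelength_2):
    """Angstrom exponent from scattering coefficients at two wavelengths.

    Uses the power-law assumption sigma_sp ∝ wavelength^(-Å), so
        Å = -ln(sigma_1 / sigma_2) / ln(wavelength_1 / wavelength_2).
    Larger Å indicates smaller predominant particle sizes.
    """
    return -math.log(sigma_1 / sigma_2) / math.log(wavelength_1 / wavelength_2)
```

In a full retrieval, Å computed this way would index the (σsp, κ, Å) lookup table together with the hygroscopicity derived from fRH.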
ERIC Educational Resources Information Center
Cepriá, Gemma; Salvatella, Luis
2014-01-01
All pH calculations for simple acid-base systems used in introductory courses on general or analytical chemistry can be carried out by using a general procedure requiring the use of predominance diagrams. In particular, the pH is calculated as the sum of an independent term equaling the average pKa values of the acids involved in the…
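The predominance-diagram procedure itself is not reproduced in this excerpt, but the flavor of such closed-form pH work can be seen in the textbook approximation for a weak monoprotic acid, pH ≈ (pKa − log10 C)/2, valid when dissociation is slight:

```python
import math

def weak_acid_ph(pka, conc_molar):
    """Textbook approximation for a weak monoprotic acid:
        pH ≈ (pKa - log10 C) / 2
    Valid when the acid is only slightly dissociated; this is classic
    shorthand, not the predominance-diagram procedure of the article."""
    return 0.5 * (pka - math.log10(conc_molar))
```

For 0.1 M acetic acid (pKa 4.76) this gives pH ≈ 2.88, in line with the usual introductory-course result.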
New approach to isometric transformations in oblique local coordinate systems of reference
NASA Astrophysics Data System (ADS)
Stępień, Grzegorz; Zalas, Ewa; Ziębka, Tomasz
2017-12-01
The research article describes a method of isometric transformation and determining the exterior orientation of a measurement instrument. The method is based on designating a "virtual" translation of two mutually oblique orthogonal systems to a common point known in both systems. The relative angular orientation of the systems does not change, as each of the systems is moved along its own axes. The next step is the designation of the three rotation angles (e.g. Tait-Bryan or Euler angles), transformation of the system rotated by the calculated angles, and moving the system back to the initial position of the primary coordinate system. This eliminates the translations of the systems from the calculations and makes it possible to calculate the mutual rotation angles of the two orthogonal systems involved in the movement. The research article covers laboratory calculations for simulated data. The accuracy of the results is 10^-6 m (10^-3 relative to the accuracy of the input data), which confirmed the correctness of the assumed calculation method. In the following step the method was verified under field conditions, where the accuracy of the method was 0.003 m. The proposed method enabled measurements with an oblique and uncentered instrument, e.g. a total station set over an unknown point. This is the reason why the method was named by the authors Total Free Station - TFS. The method may also be used for isometric transformations for photogrammetric purposes.
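The rotation step of such an isometric transformation, applying Tait-Bryan angles and a translation as p' = Rp + t, can be sketched as follows. The Rz·Ry·Rx rotation order is an assumption, since the article leaves the convention open:

```python
import math

def tait_bryan_matrix(rx, ry, rz):
    """Rotation matrix R = Rz(rz) @ Ry(ry) @ Rx(rx) from Tait-Bryan angles
    in radians. This intrinsic z-y-x ordering is one common convention."""
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    return [
        [cz * cy, cz * sy * sx - sz * cx, cz * sy * cx + sz * sx],
        [sz * cy, sz * sy * sx + cz * cx, sz * sy * cx - cz * sx],
        [-sy,     cy * sx,                cy * cx],
    ]

def transform(point, rot, trans):
    """Isometric transform p' = R p + t (lengths and angles preserved)."""
    return [sum(rot[i][j] * point[j] for j in range(3)) + trans[i]
            for i in range(3)]
```

Solving the inverse problem (recovering the three angles and the translation from matched points) is the substance of the TFS method; the forward model above is only the building block.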
NASA Astrophysics Data System (ADS)
Hawes, D. H.; Langley, R. S.
2018-01-01
Random excitation of mechanical systems occurs in a wide variety of structures and, in some applications, calculation of the power dissipated by such a system will be of interest. In this paper, using the Wiener series, a general methodology is developed for calculating the power dissipated by a general nonlinear multi-degree-of-freedom oscillatory system excited by random Gaussian base motion of any spectrum. The Wiener series method is most commonly applied to systems with white noise inputs, but can be extended to encompass a general non-white input. From the extended series a simple expression for the power dissipated can be derived in terms of the first term, or kernel, of the series and the spectrum of the input. Calculation of the first kernel can be performed either via numerical simulations or from experimental data and a useful property of the kernel, namely that the integral over its frequency domain representation is proportional to the oscillating mass, is derived. The resulting equations offer a simple conceptual analysis of the power flow in nonlinear randomly excited systems and hence assist the design of any system where power dissipation is a consideration. The results are validated both numerically and experimentally using a base-excited cantilever beam with a nonlinear restoring force produced by magnets.
2012-01-01
The purpose of this paper is to analyze the German diagnosis related groups (G-DRG) cost accounting scheme by assessing its resource allocation at hospital level and its tariff calculation at national level. First, the paper reviews and assesses the three steps in the G-DRG resource allocation scheme at hospital level: (1) the groundwork; (2) cost-center accounting; and (3) patient-level costing. Second, the paper reviews and assesses the three steps in G-DRG national tariff calculation: (1) plausibility checks; (2) inlier calculation; and (3) the “one hospital” approach. The assessment is based on the two main goals of G-DRG introduction: improving transparency and efficiency. A further empirical assessment attests high costing quality. The G-DRG cost accounting scheme shows high system quality in resource allocation at hospital level, with limitations concerning a managerially relevant full cost approach and limitations in terms of advanced activity-based costing at patient-level. However, the scheme has serious flaws in national tariff calculation: inlier calculation is normative, and the “one hospital” model causes cost bias, adjustment and representativeness issues. The G-DRG system was designed for reimbursement calculation, but developed to a standard with strategic management implications, generalized by the idea of adapting a hospital’s cost structures to DRG revenues. This combination causes problems in actual hospital financing, although resource allocation is advanced at hospital level. PMID:22935314
Vogl, Matthias
2012-08-30
Power Consumption and Calculation Requirement Analysis of AES for WSN IoT.
Hung, Chung-Wen; Hsu, Wen-Ting
2018-05-23
Because of the ubiquity of Internet of Things (IoT) devices, the power consumption and security of IoT systems have become very important issues. The Advanced Encryption Standard (AES) is a block cipher algorithm commonly used in IoT devices. In this paper, the power consumption and cryptographic calculation requirements for different payload lengths and AES encryption types are analyzed. These types include software-based AES-CB, hardware-based AES-ECB (Electronic Codebook Mode), and hardware-based AES-CCM (Counter with CBC-MAC Mode). The calculation requirements and power consumption for these AES encryption types are measured on the Texas Instruments LAUNCHXL-CC1310 platform. The experimental results show that the hardware-based AES performs better than the software-based AES in terms of power consumption and calculation cycle requirements. In addition, in terms of AES mode selection, the AES-CCM-MIC64 mode may be a better choice if the IoT device must balance security, encryption calculation requirements, and low power consumption at the same time. However, if the IoT device is pursuing lower power and the payload length is generally less than 16 bytes, then AES-ECB could be considered.
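A back-of-the-envelope model of how payload length drives the encryption workload (padding to 16-byte AES blocks, cycle count, energy) might look like the following; the cycle counts and electrical figures are placeholders, not measured LAUNCHXL-CC1310 values:

```python
import math

def aes_blocks(payload_len, block_size=16):
    """Number of 16-byte AES blocks a payload occupies; a partial final
    block must be padded out, and an empty payload still costs one block."""
    return max(1, math.ceil(payload_len / block_size))

def est_energy_uj(payload_len, cycles_per_block, overhead_cycles,
                  current_ma=3.0, volts=3.3, freq_mhz=48.0):
    """Illustrative per-encryption energy estimate: total cycles scaled by
    average power over clock frequency. All electrical figures here are
    assumed placeholders chosen only to make the arithmetic concrete."""
    cycles = overhead_cycles + cycles_per_block * aes_blocks(payload_len)
    seconds = cycles / (freq_mhz * 1e6)
    return (current_ma * 1e-3 * volts) * seconds * 1e6  # microjoules
```

The model reproduces the qualitative finding that cost rises stepwise at each 16-byte block boundary, which is why sub-16-byte payloads favor the cheapest mode.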
Kohno, R; Hotta, K; Nishioka, S; Matsubara, K; Tansho, R; Suzuki, T
2011-11-21
We implemented the simplified Monte Carlo (SMC) method on graphics processing unit (GPU) architecture under the computer-unified device architecture platform developed by NVIDIA. The GPU-based SMC was clinically applied for four patients with head and neck, lung, or prostate cancer. The results were compared to those obtained by a traditional CPU-based SMC with respect to the computation time and discrepancy. In the CPU- and GPU-based SMC calculations, the estimated mean statistical errors of the calculated doses in the planning target volume region were within 0.5% rms. The dose distributions calculated by the GPU- and CPU-based SMCs were similar, within statistical errors. The GPU-based SMC showed 12.30-16.00 times faster performance than the CPU-based SMC. The computation time per beam arrangement using the GPU-based SMC for the clinical cases ranged from 9 to 67 s. The results demonstrate the successful application of the GPU-based SMC to clinical proton treatment planning.
NASA Technical Reports Server (NTRS)
Carreno, Victor
2006-01-01
This document describes a method to demonstrate that a UAS, operating in the NAS, can avoid collisions with an equivalent level of safety compared to a manned aircraft. The method is based on the calculation of a collision probability for a UAS, the calculation of a collision probability for a baseline manned aircraft, and the calculation of a risk ratio given by: Risk Ratio = P(collision_UAS)/P(collision_manned). A UAS will achieve an equivalent level of safety for collision risk if the Risk Ratio is less than or equal to one. Calculation of the probability of collision for UAS and manned aircraft is accomplished through event/fault trees.
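Once the two collision probabilities have been obtained from the event/fault trees, the risk-ratio criterion itself is a one-line computation; a minimal sketch with illustrative probabilities:

```python
def risk_ratio(p_collision_uas, p_collision_manned):
    """Risk Ratio = P(collision_UAS) / P(collision_manned)."""
    return p_collision_uas / p_collision_manned

def equivalent_level_of_safety(p_collision_uas, p_collision_manned):
    """An equivalent level of safety requires the Risk Ratio <= 1."""
    return risk_ratio(p_collision_uas, p_collision_manned) <= 1.0
```

A UAS with a per-flight-hour collision probability at or below the manned baseline passes the criterion; any excess fails it.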
Calculation for simulation of archery goal value using a web camera and ultrasonic sensor
NASA Astrophysics Data System (ADS)
Rusjdi, Darma; Abdurrasyid, Wulandari, Dewi Arianti
2017-08-01
Development of a digital indoor archery simulator based on embedded systems addresses the limited availability of adequate fields or open spaces, especially in big cities. Developing the device requires simulations that calculate the value scored on the target, based on a parabolic-motion model parameterized by the arrow's initial velocity and direction as it travels to the target. The simulator device should be complemented with a device measuring initial velocity using ultrasonic sensors and a device measuring the direction to the target using a digital camera. The methodology uses research and development of application software with a modeling and simulation approach. The research objective is to create a simulation application that calculates the value scored by the arrows, as a preliminary stage for the development of the archery simulator device. Implementing the scoring calculation in an application program produces an archery simulation game that can serve as a reference for developing a digital archery simulator for indoor use with embedded systems, ultrasonic sensors and web cameras. The application was developed with the simulation calculation comparing the outer radius of the target circle as captured by a camera from a distance of three meters.
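The parabolic-motion model mentioned above can be sketched as a drag-free point-mass calculation of the arrow's height when it reaches the target plane; the release-height default is an assumption:

```python
import math

def arrow_height_at_target(v0, angle_deg, distance, release_height=1.5, g=9.81):
    """Height of an arrow (point mass, no air drag) when it crosses the
    target plane, from the parabolic-motion model.

    v0:       initial speed in m/s (e.g. from the ultrasonic measurement)
    angle_deg: launch angle above horizontal (e.g. from the camera)
    distance:  horizontal range to the target in meters
    """
    theta = math.radians(angle_deg)
    t = distance / (v0 * math.cos(theta))   # time to cover the horizontal range
    return release_height + v0 * math.sin(theta) * t - 0.5 * g * t * t
```

Comparing this height (and the lateral direction) against the ring radii on the target face yields the simulated score; for a fast, level shot over a short indoor range, gravity drop is under two centimeters.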
The grout/glass performance assessment code system (GPACS) with verification and benchmarking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepho, M.G.; Sutherland, W.H.; Rittmann, P.D.
1994-12-01
GPACS is a computer code system for calculating water flow (unsaturated or saturated), solute transport, and human doses due to the slow release of contaminants from a waste form (in particular grout or glass) through an engineered system and through a vadose zone to an aquifer, well and river. This dual-purpose document is intended to serve as a user's guide and verification/benchmark document for the Grout/Glass Performance Assessment Code system (GPACS). GPACS can be used for low-level-waste (LLW) glass performance assessment and many other applications, including other low-level-waste performance assessments and risk assessments. Based on all the cases presented, GPACS is adequate (verified) for calculating water flow and contaminant transport in unsaturated-zone sediments and for calculating human doses via the groundwater pathway.
A Study on Multi-Swing Stability Analysis of Power System using Damping Rate Inversion
NASA Astrophysics Data System (ADS)
Tsuji, Takao; Morii, Yuki; Oyama, Tsutomu; Hashiguchi, Takuhei; Goda, Tadahiro; Nomiyama, Fumitoshi; Kosugi, Narifumi
In recent years, much attention has been paid to nonlinear analysis methods in the field of power system stability analysis. For multi-swing stability analysis in particular, the unstable limit cycle has an important meaning as a stability margin. A high-speed method for calculating the multi-swing stability boundary is required because real-time calculation of ATC is necessary to realize flexible wheeling trades. Therefore, the authors have developed a new method which can calculate the unstable limit cycle based on the damping rate inversion method. Using the unstable limit cycle, it is possible to predict the multi-swing stability at the time when the faulted transmission line is reclosed. The proposed method is tested on the Lorenz equation, a single-machine infinite-bus system model and the IEEJ WEST10 system model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Gaigong; Lin, Lin, E-mail: linlin@math.berkeley.edu; Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720
Recently, we have proposed the adaptive local basis set for electronic structure calculations based on Kohn–Sham density functional theory in a pseudopotential framework. The adaptive local basis set is efficient and systematically improvable for total energy calculations. In this paper, we present the calculation of atomic forces, which can be used for a range of applications such as geometry optimization and molecular dynamics simulation. We demonstrate that, under mild assumptions, the computation of atomic forces can scale nearly linearly with the number of atoms in the system using the adaptive local basis set. We quantify the accuracy of the Hellmann–Feynman forces for a range of physical systems, benchmarked against converged planewave calculations, and find that the adaptive local basis set is efficient for both force and energy calculations, requiring at most a few tens of basis functions per atom to attain accuracies required in practice. Since the adaptive local basis set has implicit dependence on atomic positions, Pulay forces are in general nonzero. However, we find that the Pulay force is numerically small and systematically decreasing with increasing basis completeness, so that the Hellmann–Feynman force is sufficient for basis sizes of a few tens of basis functions per atom. We verify the accuracy of the computed forces in static calculations of quasi-1D and 3D disordered Si systems, vibration calculation of a quasi-1D Si system, and molecular dynamics calculations of H2 and liquid Al–Si alloy systems, where we show systematic convergence to benchmark planewave results and results from the literature.
Zhang, Gaigong; Lin, Lin; Hu, Wei; ...
2017-01-27
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Gaigong; Lin, Lin; Hu, Wei
NASA Astrophysics Data System (ADS)
Zhang, Gaigong; Lin, Lin; Hu, Wei; Yang, Chao; Pask, John E.
2017-04-01
NASA Astrophysics Data System (ADS)
He, Jianbin; Yu, Simin; Cai, Jianping
2016-12-01
The Lyapunov exponent is an important index for describing the behavior of chaotic systems, and the largest Lyapunov exponent can be used to determine whether a system is chaotic or not. For discrete-time dynamical systems, the Lyapunov exponents are calculated by an eigenvalue method. In theory, according to the eigenvalue method, more accurate calculations of the Lyapunov exponents can be obtained as the number of iterations increases, and the limits exist. However, due to the finite precision of computers and other reasons, the results will overflow, be unrecognized, or be inaccurate, which can be stated as follows: (1) the number of iterations cannot be too large, otherwise the simulation result will appear as an error message of NaN or Inf; (2) if the error message of NaN or Inf does not appear, then with increasing iterations all Lyapunov exponents will approach the largest Lyapunov exponent, which leads to inaccurate calculation results; (3) from the viewpoint of numerical calculation, obviously, if the number of iterations is too small, then the results are also inaccurate. Based on this analysis of Lyapunov-exponent calculation in discrete-time systems, this paper investigates two improved algorithms via QR orthogonal decomposition and SVD orthogonal decomposition approaches so as to solve the above-mentioned problems. Finally, some examples are given to illustrate the feasibility and effectiveness of the improved algorithms.
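A standard form of the QR-based approach, applied here to the Hénon map as an example, looks like the following; re-orthogonalizing the propagated frame at every step keeps the accumulated factors bounded, avoiding the overflow and exponent-collapse problems described above. This is a generic sketch of the technique, not the paper's specific algorithm:

```python
import numpy as np

def lyapunov_spectrum_henon(a=1.4, b=0.3, n_iter=5000):
    """Lyapunov spectrum of the Henon map via QR re-orthogonalization.

    At each step the Jacobian is applied to an orthonormal frame, the
    result is re-factored as QR, and log|diag(R)| is accumulated; the
    averages converge to the Lyapunov exponents without overflow."""
    x, y = 0.1, 0.1
    q = np.eye(2)
    log_sums = np.zeros(2)
    for _ in range(n_iter):
        jac = np.array([[-2.0 * a * x, 1.0], [b, 0.0]])  # Jacobian at (x, y)
        x, y = 1.0 - a * x * x + y, b * x                # iterate the map
        q, r = np.linalg.qr(jac @ q)
        log_sums += np.log(np.abs(np.diag(r)))
    return log_sums / n_iter
```

For the classic parameters a = 1.4, b = 0.3 the largest exponent converges near 0.42, and the sum of the exponents equals ln b exactly, since the Jacobian determinant is constant.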
Implementation of a method for calculating temperature-dependent resistivities in the KKR formalism
NASA Astrophysics Data System (ADS)
Mahr, Carsten E.; Czerner, Michael; Heiliger, Christian
2017-10-01
We present a method to calculate the electron-phonon induced resistivity of metals in scattering-time approximation based on the nonequilibrium Green's function formalism. The general theory as well as its implementation in a density-functional theory based Korringa-Kohn-Rostoker code are described and subsequently verified by studying copper as a test system. We model the thermal expansion by fitting a Debye-Grüneisen curve to experimental data. Both the electronic and vibrational structures are discussed for different temperatures, and employing a Wannier interpolation of these quantities we evaluate the scattering time by integrating the electron linewidth on a triangulation of the Fermi surface. Based thereupon, the temperature-dependent resistivity is calculated and found to be in good agreement with experiment. We show that the effect of thermal expansion has to be considered in the whole calculation regime. Further, for low temperatures, an accurate sampling of the Fermi surface becomes important.
Solute effect on basal and prismatic slip systems of Mg.
Moitra, Amitava; Kim, Seong-Gon; Horstemeyer, M F
2014-11-05
In an effort to design novel magnesium (Mg) alloys with high ductility, we present first-principles data based on density functional theory (DFT). DFT was employed to calculate generalized stacking fault energy curves, which can be used in the generalized Peierls-Nabarro (PN) model to study the energetics of basal and prismatic slip in Mg with and without solutes, and to calculate continuum-scale dislocation core widths, stacking fault widths and Peierls stresses. The generalized stacking fault energy curves for pure Mg agreed well with other DFT calculations. Solute effects on these curves were calculated for nine alloying elements, namely Al, Ca, Ce, Gd, Li, Si, Sn, Zn and Zr, which allowed the strength and ductility to be qualitatively estimated based on the basal dislocation properties. Based on our multiscale methodology, a suggestion has been made to improve Mg formability.
Unsupervised Calculation of Free Energy Barriers in Large Crystalline Systems
NASA Astrophysics Data System (ADS)
Swinburne, Thomas D.; Marinica, Mihai-Cosmin
2018-03-01
The calculation of free energy differences for thermally activated mechanisms in the solid state is routinely hindered by the inability to define a set of collective variable functions that accurately describe the mechanism under study. Even when this is possible, the requirement of descriptors for each mechanism under study prevents the implementation of free energy calculations in the growing range of automated material simulation schemes. We provide a solution, deriving a path-based, exact expression for free energy differences in the solid state which does not require a converged reaction pathway, collective variable functions, Gram matrix evaluations, or probability flux-based estimators. The generality and efficiency of our method is demonstrated on a complex transformation of C15 interstitial defects in iron and on double-kink nucleation on a screw dislocation in tungsten, the latter system consisting of more than 120 000 atoms. Both cases exhibit significant anharmonicity under experimentally relevant temperatures.
Collision for Li++He System. I. Potential Curves and Non-Adiabatic Coupling Matrix Elements
NASA Astrophysics Data System (ADS)
Yoshida, Junichi; O-Ohata, Kiyosi
1984-02-01
The potential curves and the non-adiabatic coupling matrix elements for the Li++He collision system were computed. The SCF molecular orbitals were constructed with CGTO atomic bases centered on each nucleus and on the center of mass of the two nuclei. The SCF and CI calculations were done at various internuclear distances in the range of 0.1˜25.0 a.u. The potential energies and the wavefunctions were calculated to good approximation over the whole range of internuclear distances. The non-adiabatic coupling matrix elements were calculated with a tentative method in which the electron translation factors (ETF) are approximately taken into account.
Counting the number of Feynman graphs in QCD
NASA Astrophysics Data System (ADS)
Kaneko, T.
2018-05-01
Information about the number of Feynman graphs for a given physical process in a given field theory is especially useful for confirming the result of a Feynman graph generator used in an automatic system of perturbative calculations. A method of counting the number of Feynman graphs weighted by their symmetry factors was established based on zero-dimensional field theory, and was used in scalar theories and QED. In this article this method is generalized to more complicated models by direct calculation of generating functions on a computer algebra system. The method is applied to QCD with and without counter terms, where many higher-order contributions are calculated automatically.
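The zero-dimensional trick can be illustrated for vacuum diagrams of a single-field φ⁴ theory (a toy stand-in for the QED/QCD models treated in the article): the coefficient of gⁿ in the zero-dimensional "partition function" counts the n-vertex vacuum diagrams, each weighted by the inverse of its symmetry factor.

```python
from fractions import Fraction
from math import factorial

def double_factorial(n):
    """(2k-1)!! with the convention (-1)!! = 1."""
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

def vacuum_graph_weights(max_order):
    """Coefficient of g^n in Z(g) = <exp(g x^4 / 4!)> over a unit Gaussian:
    (1/n!) (1/4!)^n (4n-1)!!, using the Gaussian moment <x^{4n}> = (4n-1)!!.
    Each coefficient is the number of n-vertex phi^4 vacuum diagrams weighted
    by the inverse of its symmetry factor."""
    return [Fraction(double_factorial(4 * n - 1), factorial(n) * factorial(4) ** n)
            for n in range(max_order + 1)]

weights = vacuum_graph_weights(2)
```

At second order the coefficient 35/384 equals 1/16 + 1/48 + 1/128, the symmetry-factor weights of the three two-vertex φ⁴ vacuum diagrams.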
El Shahat, Khaled; El Saeid, Aziza; Attalla, Ehab; Yassin, Adel
2014-01-01
To achieve tumor control in radiotherapy, a dose distribution is planned which has a good chance of sterilizing all cancer cells without causing unacceptable normal tissue complications. The aim of the present study was to achieve an accurate calculation of dose for small field dimensions by evaluating the accuracy of the planning system calculation and comparing it with real measurements of dose for the same small field dimensions using different detectors. Practical work was performed in two steps: (i) determination of the physical factors required for dose estimation, measured by three ionization chambers and calculated by the treatment planning system (TPS) based on the latest technical report series (IAEA TRS-398), and (ii) comparison of the calculated and measured data. Our data analysis for small fields irradiated by photon beams matched the data obtained from the ionization chambers and the treatment planning system. Radiographic films were used as an additional detector and showed agreement with the TPS calculation. The deviations for the studied small field dimensions averaged 6% and 4% for 6 MV and 15 MV, respectively. Radiographic film measurements showed a variation in results within ±2% of the TPS calculation.
Accurately Calculating the Solar Orientation of the TIANGONG-2 Ultraviolet Forward Spectrometer
NASA Astrophysics Data System (ADS)
Liu, Z.; Li, S.
2018-04-01
The Ultraviolet Forward Spectrometer is a new type of spectrometer for monitoring the vertical distribution of atmospheric trace gases in the global middle atmosphere. It is carried on the TianGong-2 space laboratory, which was launched on 15 September 2016. The spectrometer uses a solar calibration mode to correct its irradiance. Accurately calculating the solar orientation is a prerequisite of spectral calibration for the Ultraviolet Forward Spectrometer. In this paper, a method of calculating the solar orientation is proposed according to the imaging geometric characteristics of the spectrometer. First, the solar orientation in the horizontal rectangular coordinate system is calculated based on the solar declination angle algorithm proposed by Bourges and the solar hour angle algorithm proposed by Lamm. Then, the solar orientation in the sensor coordinate system is obtained through several coordinate system transforms. Finally, we calculate the solar orientation in the sensor coordinate system and evaluate its calculation accuracy using actual orbital data of TianGong-2. The results show that the accuracy is close to that of the simulation method with STK (Satellite Tool Kit), and the error is not more than 2%. The algorithm we present does not require extensive astronomical knowledge; it only needs some observation parameters provided by TianGong-2.
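Since the Bourges and Lamm algorithms are not reproduced in the abstract, the first step of such a calculation can only be sketched with a simpler stand-in; the common Cooper approximation for declination below is an assumption and is less accurate than the Bourges algorithm the paper uses:

```python
import math

def solar_declination_deg(day_of_year):
    """Approximate solar declination in degrees via the simple Cooper formula;
    a stand-in for the more accurate Bourges algorithm used in the paper."""
    return 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))

def hour_angle_deg(local_solar_time_h):
    """Solar hour angle: 15 degrees per hour away from solar noon."""
    return 15.0 * (local_solar_time_h - 12.0)

def solar_elevation_deg(lat_deg, day_of_year, local_solar_time_h):
    """Solar elevation from latitude, declination and hour angle."""
    dec = math.radians(solar_declination_deg(day_of_year))
    lat = math.radians(lat_deg)
    ha = math.radians(hour_angle_deg(local_solar_time_h))
    sin_el = (math.sin(lat) * math.sin(dec)
              + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(math.asin(max(-1.0, min(1.0, sin_el))))
```

The remaining step in the paper, transforming this direction into the sensor coordinate system, requires the spacecraft attitude and orbit parameters and is not sketched here.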
SU-F-T-48: Clinical Implementation of Brachytherapy Planning System for COMS Eye Plaques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferreira, C; Islam, M; Ahmad, S
Purpose: To commission the Brachytherapy Planning (BP) system (Varian, Palo Alto, CA) for the Collaborative Ocular Melanoma Study (COMS) eye plaques by evaluating dose differences against original plans from the Nucletron Planning System (NPS). Methods: The NPS system is the primary planning software for COMS plaques at our facility; however, Brachytherapy Planning 11.0.47 (Varian Medical Systems) is used for secondary checks and for seed placement configurations not originally commissioned. Dose comparisons of BP and NPS plans were performed for a prescription of 8500 cGy at 5 mm depth and doses to normal structures: opposite retina, inner sclera, macula, optic disk and lens. Plans were calculated for Iodine-125 seeds (OncoSeeds, Model 6711) using COMS plaques of 10, 12, 14, 16, 18 and 20 mm diameters. An in-house program based on inverse-square was utilized to calculate point doses for comparison as well. Results: The highest dose difference between BP and NPS was 3.7% for the prescription point for all plaques. Doses for BP were higher than doses reported by NPS for all points. The largest percent differences for apex, opposite retina, inner sclera, macula, optic disk, and lens were 3.2%, 0.9%, 13.5%, 20.5%, 15.7% and 2.2%, respectively. The dose calculated by the in-house program was 1.3% higher at the prescription point, and as much as 42.1% higher for points away from the plaque (i.e. opposite retina) when compared to NPS. Conclusion: Doses to the tumor, lens, retina, and optic nerve are paramount for a successful treatment and vision preservation. Both systems are based on TG-43 calculations and assume water-medium tissue homogeneity (ρe=1). Variations seen may result from the different task group versions and/or mathematical algorithms of the software. BP was commissioned to serve as a backup system; it also enables dose calculation in cases where seeds don't follow a conventional placement configuration.
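The in-house inverse-square check described in the abstract can be sketched as follows. The seed geometry is hypothetical, 0.965 cGy/(h·U) is the nominal TG-43 dose rate constant for the model 6711 seed, and the radial dose function and anisotropy of a full TG-43 calculation are deliberately omitted, matching the abstract's description of the program as "based on inverse-square":

```python
def point_dose_rate(seed_positions_cm, point_cm, air_kerma_strength_u, dose_rate_const):
    """Inverse-square point-dose estimate (cGy/h) summed over seed positions.
    Deliberately omits the TG-43 radial dose function and anisotropy function;
    intended only as a rough secondary check, as in the abstract."""
    total = 0.0
    for seed in seed_positions_cm:
        r2 = sum((s - p) ** 2 for s, p in zip(seed, point_cm))  # squared distance, cm^2
        total += dose_rate_const * air_kerma_strength_u / r2
    return total

# a single hypothetical I-125 seed at the origin, unit air-kerma strength
rate_1cm = point_dose_rate([(0.0, 0.0, 0.0)], (0.0, 0.0, 1.0), 1.0, 0.965)
rate_2cm = point_dose_rate([(0.0, 0.0, 0.0)], (0.0, 0.0, 2.0), 1.0, 0.965)
```

Doubling the distance quarters the estimate, which is exactly the behavior such a check exploits far from the plaque.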
Chevalier, Thérèse M.; Stewart, Garth; Nelson, Monty; McInerney, Robert J.; Brodie, Norman
2016-01-01
It has been well documented that IQ scores calculated using Canadian norms are generally 2–5 points lower than those calculated using American norms on the Wechsler IQ scales. However, recent findings have demonstrated that the difference may be significantly larger for individuals with certain demographic characteristics, and this has prompted discussion about the appropriateness of using the Canadian normative system with a clinical population in Canada. This study compared the interpretive effects of applying the American and Canadian normative systems in a clinical sample. We used a multivariate analysis of variance (ANOVA) to calculate differences between IQ and Index scores in a clinical sample, and mixed model ANOVAs to assess the pattern of differences across age and ability level. As expected, Full Scale IQ scores calculated using Canadian norms were systematically lower than those calculated using American norms, but differences were significantly larger for individuals classified as having extremely low or borderline intellectual functioning when compared with those who scored in the average range. Implications of clinically different conclusions for up to 52.8% of patients based on these discrepancies highlight a unique dilemma facing Canadian clinicians, and underscore the need for caution when choosing a normative system with which to interpret WAIS-IV results in the context of a neuropsychological test battery in Canada. Based on these findings, we offer guidelines for best practice for Canadian clinicians when interpreting data from neuropsychological test batteries that include different normative systems, and suggestions to assist with future test development. PMID:27246955
Barufka, Steffi; Heller, Michael; Prayon, Valeria; Fegert, Jörg M
2015-11-01
Despite substantial opposition in the practical field, based on an amendment to the Hospital Financing Act (KHG), the so-called PEPP system was introduced in child and adolescent psychiatry as a new calculation model. The 2-year moratorium, combined with the rescheduling of the repeal of the psychiatry personnel regulation (Psych-PV) and a convergence phase, provided the German Federal Ministry of Health with additional time to enter a structured dialogue with professional associations. The perspective concerning the regulatory framework, especially, is presently unclear. In light of this debate, this article provides calculations to illustrate the transformation of the previous personnel regulation into the PEPP system by means of the §21 KHEntgG data stemming from the 22 university hospitals of child and adolescent psychiatry and psychotherapy in Germany. In 2013 there were a total of 7,712 cases and 263,694 calculation days. In order to identify a necessary basic reimbursement value that would guarantee a constant quality of patient care, the authors utilize outcomes, cost structures, calculation days, and minute values for individual professional groups according to both systems (Psych-PV and PEPP) based on data from 2013 and the InEK's analysis of the calculation datasets. The authors propose a normative agreement on a basic reimbursement value between 270 and 285 EUR. This takes into account the concentration phenomenon and the expansion of services that has occurred since the introduction of the Psych-PV system. Such a normative agreement on structural quality could provide a verifiable framework for the allocation of human resources corresponding to the previous regulations of Psych-PV.
Methane on Mars: Thermodynamic Equilibrium and Photochemical Calculations
NASA Technical Reports Server (NTRS)
Levine, J. S.; Summers, M. E.; Ewell, M.
2010-01-01
The detection of methane (CH4) in the atmosphere of Mars by Mars Express and Earth-based spectroscopy is very surprising, very puzzling, and very intriguing. On Earth, about 90% of atmospheric methane is produced by living systems. A major question concerning methane on Mars is its origin: biological or geological. Thermodynamic equilibrium calculations indicate that methane cannot be produced by atmospheric chemical/photochemical reactions. Thermodynamic equilibrium calculations for three gases, methane, ammonia (NH3) and nitrous oxide (N2O), in the Earth's atmosphere are summarized in Table 1. The calculations indicate that these three gases should not exist in the Earth's atmosphere. Yet they do, with methane, ammonia and nitrous oxide enhanced 139, 50 and 12 orders of magnitude above their calculated thermodynamic equilibrium concentrations due to the impact of life! Thermodynamic equilibrium calculations have been performed for the same three gases in the atmosphere of Mars based on the assumed composition of the Mars atmosphere shown in Table 2. The calculated thermodynamic equilibrium concentrations of the same three gases in the atmosphere of Mars are shown in Table 3. Clearly, based on thermodynamic equilibrium calculations, methane should not be present in the atmosphere of Mars, but it is, in concentrations approaching 30 ppbv from three distinct regions on Mars.
Sub-second pencil beam dose calculation on GPU for adaptive proton therapy.
da Silva, Joakim; Ansorge, Richard; Jena, Rajesh
2015-06-21
Although proton therapy delivered using scanned pencil beams has the potential to produce better dose conformity than conventional radiotherapy, the created dose distributions are more sensitive to anatomical changes and patient motion. Therefore, the introduction of adaptive treatment techniques where the dose can be monitored as it is being delivered is highly desirable. We present a GPU-based dose calculation engine relying on the widely used pencil beam algorithm, developed for on-line dose calculation. The calculation engine was implemented from scratch, with each step of the algorithm parallelized and adapted to run efficiently on the GPU architecture. To ensure fast calculation, it employs several application-specific modifications and simplifications, and a fast scatter-based implementation of the computationally expensive kernel superposition step. The calculation time for a skull base treatment plan using two beam directions was 0.22 s on an Nvidia Tesla K40 GPU, whereas a test case of a cubic target in water from the literature took 0.14 s to calculate. The accuracy of the patient dose distributions was assessed by calculating the γ-index with respect to a gold standard Monte Carlo simulation. The passing rates were 99.2% and 96.7%, respectively, for the 3%/3 mm and 2%/2 mm criteria, matching those produced by a clinical treatment planning system.
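The scatter-based kernel superposition step mentioned above can be sketched with a toy FFT convolution; the Gaussian lateral kernel, grid size and beamlet below are assumptions for illustration, not the engine's actual kernels or data:

```python
import numpy as np

def superpose_layer(fluence, sigma, grid_spacing):
    """Convolve one depth layer of pencil-beam fluence with a unit-integral
    Gaussian scatter kernel using FFTs - a toy stand-in for the scatter-based
    kernel superposition step described in the abstract. sigma (same units as
    grid_spacing) models the lateral spread at this depth."""
    ny, nx = fluence.shape
    fy = np.fft.fftfreq(ny, d=grid_spacing)
    fx = np.fft.fftfreq(nx, d=grid_spacing)
    ky, kx = np.meshgrid(fy, fx, indexing="ij")
    # Fourier transform of a unit-integral Gaussian: exp(-2 pi^2 sigma^2 k^2)
    kernel_ft = np.exp(-2.0 * (np.pi * sigma) ** 2 * (kx ** 2 + ky ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(fluence) * kernel_ft))

fluence = np.zeros((64, 64))
fluence[28:36, 28:36] = 1.0                       # an 8x8 open beamlet
dose = superpose_layer(fluence, sigma=0.5, grid_spacing=0.2)
```

Because the kernel integrates to one, the total deposited quantity is conserved while the peak is smoothed down, which is a convenient invariant to test.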
NASA Astrophysics Data System (ADS)
Okumura, Hiroshi; Takubo, Shoichiro; Kawasaki, Takeru; Abdullah, Indra Nugraha; Uchino, Osamu; Morino, Isamu; Yokota, Tatsuya; Nagai, Tomohiro; Sakai, Tetsu; Maki, Takashi; Arai, Kohei
2013-01-01
A web-based data acquisition and management system for GOSAT (Greenhouse gases Observing SATellite) validation lidar data analysis has been developed. The system consists of a data acquisition sub-system (DAS) and a data management sub-system (DMS). DAS, written in Perl, acquires AMeDAS (Automated Meteorological Data Acquisition System) ground-level local meteorological data, GPS radiosonde upper-air meteorological data, ground-level oxidant data, skyradiometer data, skyview camera images, meteorological satellite IR image data and GOSAT validation lidar data. DMS, written in PHP, displays satellite-pass dates and all acquired data. In this article, we briefly describe some improvements for higher performance and higher data usability. DAS now automatically calculates molecule number density profiles from the GPS radiosonde upper-air meteorological data and the U.S. standard atmosphere model. Predicted ozone density profile images above Saga city are also calculated using the Meteorological Research Institute (MRI) chemistry-climate model version 2 for comparison with actual ozone DIAL data.
Vehicle security encryption based on unlicensed encryption
NASA Astrophysics Data System (ADS)
Huang, Haomin; Song, Jing; Xu, Zhijia; Ding, Xiaoke; Deng, Wei
2018-03-01
The current vehicle key is easily destroyed or damaged, so the use of an elliptic curve encryption algorithm is proposed to improve the reliability of the vehicle security system. Based on the encryption rules of elliptic curves, the chip's framework and hardware structure are designed, and the chip's calculation process is then simulated and analyzed in software. The simulation achieved the expected target. Finally, some issues in the data calculation concerning the chip's storage control and other modules are pointed out.
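The elliptic curve arithmetic underlying such a chip reduces to point addition and scalar multiplication over a finite field. A sketch over a small textbook curve follows; the curve y² = x³ + 2x + 2 over F₁₇ with generator (5, 1) is illustrative only, since the abstract does not name the curve, and real systems use standardized curves of cryptographic size:

```python
def ec_add(P, Q, a, p):
    """Add two points on y^2 = x^3 + a*x + b over F_p (b enters only through
    the points themselves). None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                       # P + (-P) = infinity
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p    # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p           # chord slope
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

def ec_mul(k, P, a, p):
    """Scalar multiplication by double-and-add - the core ECC operation."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R

# toy textbook curve y^2 = x^3 + 2x + 2 over F_17 with generator G = (5, 1)
G = (5, 1)
```

On this curve G has order 19, so 19·G returns the point at infinity; a hardware implementation performs the same operations in fixed-point modular arithmetic.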
NASA Astrophysics Data System (ADS)
Dong, Shuai; Yu, Shanshan; Huang, Zheng; Song, Shoutan; Shao, Xinxing; Kang, Xin; He, Xiaoyuan
2017-12-01
Multiple digital image correlation (DIC) systems can enlarge the measurement field without losing effective resolution in the area of interest (AOI). However, the results calculated in substereo DIC systems are located in its local coordinate system in most cases. To stitch the data obtained by each individual system, a data merging algorithm is presented in this paper for global measurement of multiple stereo DIC systems. A set of encoded targets is employed to assist the extrinsic calibration, of which the three-dimensional (3-D) coordinates are reconstructed via digital close range photogrammetry. Combining the 3-D targets with precalibrated intrinsic parameters of all cameras, the extrinsic calibration is significantly simplified. After calculating in substereo DIC systems, all data can be merged into a universal coordinate system based on the extrinsic calibration. Four stereo DIC systems are applied to a four point bending experiment of a steel reinforced concrete beam structure. Results demonstrate high accuracy for the displacement data merging in the overlapping field of views (FOVs) and show feasibility for the distributed FOVs measurement.
NASA Astrophysics Data System (ADS)
Roy, Soumyajit; Chakraborty, G.; DasGupta, Anirvan
2018-02-01
The mutual interaction between a number of multi-degree-of-freedom mechanical systems moving with uniform speed along an infinite taut string supported by a viscoelastic layer has been studied using the substructure synthesis method, when base excitations of a common frequency are given to the mechanical systems. The mobility and impedance matrices of the string have been calculated analytically by the Fourier transform method as well as a wave propagation technique. These matrices are used to calculate the response of the discrete mechanical systems. Special attention is paid to the contact forces between the discrete and continuous systems, which are estimated by numerical simulation. The effects of phase difference, the distance between the systems and different base excitation amplitudes on the collective behaviour of the mechanical systems are also studied. The present study has relevance to the coupled dynamics of more than one railway pantograph and an overhead catenary system, where the pantographs are modelled as discrete systems and the catenary as a taut string supported by a continuous viscoelastic layer.
Berent, Jarosław
2007-01-01
This paper presents the new DNAStat version 1.2 for processing genetic profile databases and performing biostatistical calculations. This new version contains, besides all the options of its predecessor 1.0, a calculation-results file export option in .xls format for Microsoft Office Excel, as well as the option of importing/exporting the population base of systems as .txt files for processing in Microsoft Notepad or EditPad.
Real-time acquisition and preprocessing system of transient electromagnetic data based on LabVIEW
NASA Astrophysics Data System (ADS)
Zhao, Huinan; Zhang, Shuang; Gu, Lingjia; Sun, Jian
2014-09-01
The transient electromagnetic method (TEM) is a long-standing technique for geological exploration. It is widely used in many fields, such as mineral exploration, hydrogeology survey, engineering exploration and unexploded ordnance detection. Traditional measurement systems are often based on ARM, DSP or FPGA hardware and lack real-time display, data preprocessing and data playback functions. To overcome these shortcomings, a real-time data acquisition and preprocessing system based on the LabVIEW virtual instrument development platform is proposed in this paper; moreover, a calibration model is established for the TEM system based on a conductivity loop. The test results demonstrate that the system can perform real-time data acquisition and system calibration. For the Transmit-Loop-Receive (TLR) response, the correlation coefficient between the measured results and the calculated results is 0.987, so the measured results are essentially consistent with the calculated results. Through the subsequent inversion of the TLR response, the signal of an underground conductor was obtained. In a complex test environment, abnormal values usually exist in the measured data. To solve this problem, an algorithm for judging and revising abnormal values is proposed in the paper. The test results prove that the proposed algorithm can effectively eliminate serious disturbance signals from the measured transient electromagnetic data.
Chemical Transformation System: Cloud Based ...
Integrated Environmental Modeling (IEM) systems that account for the fate/transport of organics frequently require physicochemical properties as well as transformation products. A myriad of chemical property databases exist but these can be difficult to access and often do not contain the proprietary chemicals that environmental regulators must consider. We are building the Chemical Transformation System (CTS) to facilitate model parameterization and analysis. CTS integrates a number of physicochemical property calculators into the system including EPI Suite, SPARC, TEST and ChemAxon. The calculators are heterogeneous in their scientific methodologies, technology implementations and deployment stacks. CTS also includes a chemical transformation processing engine that has been loaded with reaction libraries for human biotransformation, abiotic reduction and abiotic hydrolysis. CTS implements a common interface for the disparate calculators accepting molecular identifiers (SMILES, IUPAC, CAS#, user-drawn molecule) before submission for processing. To make the system as accessible as possible and provide a consistent programmatic interface, we wrapped the calculators in a standardized RESTful Application Programming Interface (API) which makes it capable of servicing a much broader spectrum of clients without constraints to interoperability such as operating system or programming language. CTS is hosted in a shared cloud environment, the Quantitative Environmental
NASA Astrophysics Data System (ADS)
Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod
2015-10-01
In conventional tool positioning techniques, sensors embedded in the motion stages provide accurate tool position information. In this paper, a machine-vision-based system and an image processing technique for motion measurement of a lathe tool from two-dimensional sequential images, captured using a charge-coupled device (CCD) camera with a resolution of 250 microns, are described. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the value of the distance traversed by the tool calculated from these images. Optimization of errors due to the machine vision system, calibration, environmental factors, etc. in lathe tool movement was carried out using two soft computing techniques, namely artificial immune system (AIS) and particle swarm optimization (PSO). The results show a better capability of AIS over PSO.
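A generic PSO loop of the kind compared in the abstract can be sketched as below; the sphere objective, swarm size and coefficients are illustrative, since the paper's actual error model is not reproduced here:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, seed=0,
                 w=0.7, c1=1.5, c2=1.5, bound=5.0):
    """Minimal particle swarm optimization: each particle's velocity is pulled
    toward its own best position (cognitive term) and the swarm's best position
    (social term). Returns the best position and objective value found."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-bound, bound, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

best_x, best_val = pso_minimize(lambda p: float(np.sum(p ** 2)), dim=2)
```

In the study's setting, f would be the mismatch between vision-measured and reference tool displacements rather than this toy sphere function.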
Nagel, J A; Beck, C; Harms, H; Stiller, P; Guth, H; Stachs, O; Bretthauer, G
2010-12-01
Presbyopia and cataract are gaining more and more importance in the ageing society. Both age-related complaints are accompanied by a loss of the eye's ability to accommodate. A new approach to restore accommodation is the Artificial Accommodation System, an autonomous microsystem which will be implanted into the capsular bag instead of a rigid intraocular lens. The Artificial Accommodation System will, depending on the actual demand for accommodation, autonomously adapt the refractive power of its integrated optical element. One possibility to measure the demand for accommodation non-intrusively is to analyse eye movements. We present an efficient algorithm, based on the CORDIC technique, to calculate the demand for accommodation from magnetic field sensor data. It can be shown that specialised algorithms significantly shorten calculation time without violating precision requirements. Additionally, a communication strategy for the wireless exchange of sensor data between the implants of the left and right eye is introduced. The strategy allows for a one-sided calculation of the demand for accommodation, resulting in an overall reduction of calculation time by 50%. The presented methods enable autonomous microsystems, such as the Artificial Accommodation System, to save significant amounts of energy, leading to extended autonomous run-times. © Georg Thieme Verlag KG Stuttgart · New York.
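A vectoring-mode CORDIC iteration, the shift-and-add technique named above, can be sketched as follows. Computing the angle of a sensor vector is a plausible sub-step of such eye-movement analysis, though the implant's exact algorithm is not given in the abstract:

```python
import math

def cordic_atan2_deg(x, y, n_iter=24):
    """Vectoring-mode CORDIC: drive y to zero with shift-and-add pseudo-rotations,
    accumulating the applied rotation, which converges to atan2(y, x) for x > 0.
    In fixed-point hardware the atan table is precomputed and no multiplier is
    needed; floating point is used here only for readability."""
    angle = 0.0
    for i in range(n_iter):
        sigma = 1.0 if y >= 0.0 else -1.0
        x, y = x + sigma * y * 2.0 ** -i, y - sigma * x * 2.0 ** -i
        angle += sigma * math.degrees(math.atan(2.0 ** -i))
    return angle
```

The final x also carries the vector magnitude scaled by the constant CORDIC gain (≈1.6468), so a magnitude readout needs one extra constant multiplication, which is exactly the kind of saving such specialised algorithms exploit.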
NASA Astrophysics Data System (ADS)
Kurihara, Osamu; Kim, Eunjoo; Kunishima, Naoaki; Tani, Kotaro; Ishikawa, Tetsuo; Furuyama, Kazuo; Hashimoto, Shozo; Akashi, Makoto
2017-09-01
A tool was developed to facilitate the calculation of the early internal doses to residents involved in the Fukushima nuclear disaster based on atmospheric transport and dispersion model (ATDM) simulations performed using the worldwide version of the System for Prediction of Environmental Emergency Dose Information, 2nd version (WSPEEDI-II), together with personal behavior data containing the history of individuals' whereabouts after the accident. The tool generates hourly-averaged air concentration data for the simulation grids nearest to an individual's whereabouts using WSPEEDI-II datasets for the subsequent calculation of internal doses due to inhalation. This paper presents an overview of the developed tool and provides tentative comparisons between direct-measurement-based and ATDM-based results regarding the internal doses received by 421 persons for whom personal behavior data were available.
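The dose step of such a tool can be sketched schematically: the hourly modelled concentration at the cell a person occupied is integrated with a breathing rate and a nuclide dose coefficient. All names and numbers below are illustrative placeholders, not values from the paper:

```python
def inhalation_dose_sv(conc_bq_m3, whereabouts, breathing_m3_h, dose_coeff_sv_bq):
    """Hourly inhalation dose integration: for hour h the person is in grid cell
    whereabouts[h]; intake is concentration (Bq/m^3) times breathing rate (m^3/h),
    converted to effective dose with an inhalation dose coefficient (Sv/Bq)."""
    intake_bq = 0.0
    for hour, cell in enumerate(whereabouts):
        intake_bq += conc_bq_m3.get((hour, cell), 0.0) * breathing_m3_h
    return intake_bq * dose_coeff_sv_bq

# two hours of hypothetical modelled concentrations (Bq/m^3) in cells "A" and "B";
# 0.92 m^3/h is a nominal adult breathing rate, 7e-9 Sv/Bq an illustrative coefficient
conc = {(0, "A"): 1000.0, (1, "A"): 500.0, (1, "B"): 50.0}
dose = inhalation_dose_sv(conc, ["A", "B"], 0.92, 7.0e-9)
```

In the real tool the concentration lookup is the WSPEEDI-II grid nearest to the recorded whereabouts, and age-dependent breathing rates and nuclide-specific coefficients replace the placeholders.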
The Financial Benefit of Early Flood Warnings in Europe
NASA Astrophysics Data System (ADS)
Pappenberger, Florian; Cloke, Hannah L.; Wetterhall, Fredrik; Parker, Dennis J.; Richardson, David; Thielen, Jutta
2015-04-01
Effective disaster risk management relies on science-based solutions to close the gap between prevention and preparedness measures. The outcome of consultations on the UNISDR post-2015 framework for disaster risk reduction highlights the need for cross-border early warning systems to strengthen the preparedness phases of disaster risk management, in order to save people's lives and property and reduce the overall impact of severe events. In particular, continental- and global-scale flood forecasting systems provide vital information to various decision makers with which early warnings of floods can be made. Here the potential monetary benefits of early flood warnings are calculated using the example of the European Flood Awareness System (EFAS), based on pan-European flood damage data and calculations of potential flood damage reductions. The benefits are of the order of 400 Euro for every 1 Euro invested. Because of the uncertainties that accompany the calculation, a large sensitivity analysis is performed in order to develop an envelope of possible financial benefits. Current EFAS system skill is compared against perfect forecasts to demonstrate the importance of further improving the skill of the forecasts. Improving the response to warnings is also essential in reaping the benefits of flood early warnings.
12 CFR 217.162 - Mechanics of risk-weighted asset calculation.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 2 2014-01-01 2014-01-01 false Mechanics of risk-weighted asset calculation. 217.162 Section 217.162 Banks and Banking FEDERAL RESERVE SYSTEM BOARD OF GOVERNORS OF THE FEDERAL...-Based and Advanced Measurement Approaches Risk-Weighted Assets for Operational Risk § 217.162 Mechanics...
Adaptive real-time methodology for optimizing energy-efficient computing
Hsu, Chung-Hsing [Los Alamos, NM]; Feng, Wu-Chun [Blacksburg, VA]
2011-06-28
Dynamic voltage and frequency scaling (DVFS) is an effective way to reduce energy and power consumption in microprocessor units. Current implementations of DVFS suffer from inaccurate modeling of power requirements and usage, and from inaccurate characterization of the relationships between the applicable variables. A system and method is proposed that adjusts CPU frequency and voltage based on run-time calculations of the workload processing time, as well as a calculation of performance sensitivity with respect to CPU frequency. The system and method are processor independent, and can be applied to either an entire system as a unit, or individually to each process running on a system.
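One published formulation of this idea, the β-adaptation model of Hsu and Feng, predicts run time as T(f)/T(f_max) = β(f_max/f − 1) + 1, where β is a run-time-measured sensitivity of performance to frequency, and picks the lowest frequency within a slowdown budget δ. A sketch with illustrative operating points:

```python
def pick_frequency(frequencies_hz, beta, delta):
    """Pick the lowest CPU frequency whose predicted slowdown stays within delta,
    using the model T(f)/T(fmax) = beta*(fmax/f - 1) + 1. beta ~ 1 means
    CPU-bound work (time scales with 1/f); beta ~ 0 means memory/IO-bound work
    that barely slows down at lower frequency."""
    fmax = max(frequencies_hz)
    feasible = [f for f in frequencies_hz
                if beta * (fmax / f - 1.0) + 1.0 <= 1.0 + delta]
    return min(feasible)  # fmax itself always satisfies the bound (slowdown 1.0)

freqs = [0.8e9, 1.2e9, 1.6e9, 2.0e9]  # illustrative DVFS operating points
```

A CPU-bound workload (β = 1) is kept at the top frequency, while a memory-bound one (β near 0) can be run at the lowest frequency with negligible slowdown, which is where the energy savings come from.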
NASA Astrophysics Data System (ADS)
Zhao, Hui; Qu, Weilu; Qiu, Weiting
2018-03-01
In order to evaluate the sustainable development level of resource-based cities, an evaluation method combining Shapley entropy and the Choquet integral is proposed. First, a systematic index system is constructed and the importance of each attribute is calculated based on the maximum Shapley entropy principle; then, the Choquet integral is introduced to calculate the comprehensive evaluation value of each city from the bottom up; finally, the method is applied to 10 typical resource-based cities in China. The empirical results show that the evaluation method is scientific and reasonable, which provides theoretical support for the sustainable development path and reform direction of resource-based cities.
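The discrete Choquet integral can be sketched directly from its definition; the two-attribute fuzzy measure below is an illustrative example, not the paper's city index system:

```python
def choquet_integral(values, mu):
    """Discrete Choquet integral of attribute values with respect to a fuzzy
    measure mu, given as a dict from frozensets of attribute names to weights
    in [0, 1] (mu of the full set should be 1). Attributes are sorted by value
    and each increment is weighted by the measure of the coalition of
    attributes still at or above it."""
    items = sorted(values.items(), key=lambda kv: kv[1])  # ascending by value
    names = [name for name, _ in items]
    total, prev = 0.0, 0.0
    for i, (_, v) in enumerate(items):
        total += (v - prev) * mu[frozenset(names[i:])]
        prev = v
    return total

# illustrative two-attribute measure; additive here, so the Choquet integral
# reduces to an ordinary weighted mean - non-additive measures model interaction
mu = {frozenset({"a"}): 0.4, frozenset({"b"}): 0.6, frozenset({"a", "b"}): 1.0}
score = choquet_integral({"a": 0.2, "b": 0.5}, mu)
```

Setting mu({"a","b"}) above or below 0.4 + 0.6 would encode synergy or redundancy between the attributes, which is what the Choquet integral adds over a plain weighted sum.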
Computing Gröbner and Involutive Bases for Linear Systems of Difference Equations
NASA Astrophysics Data System (ADS)
Yanovich, Denis
2018-02-01
The problem of computing involutive bases and Gröbner bases for linear systems of difference equations is solved, and its importance for physical and mathematical problems is discussed. The algorithm and issues concerning its implementation in C are presented, and calculation times are compared with those of competing programs. The paper ends with considerations on the parallel version of this implementation and its scalability.
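For linear difference systems with constant coefficients the shift operators commute, so the operator algebra behaves like a commutative polynomial ring; a Gröbner basis computation can then be tried in a few lines with sympy (a stand-in for illustration — the paper describes a dedicated C implementation):

```python
from sympy import groebner, symbols

# treat x and y as commuting shift operators acting on a bivariate sequence;
# the two polynomials play the role of the system's difference operators
x, y = symbols("x y")
G = groebner([x**2 + y**2 - 1, x - y], x, y, order="lex")
basis = list(G.exprs)
```

The reduced basis makes the consequences of the system explicit, here that x − y and 2y² − 1 generate the same ideal, so normal forms modulo the system can be computed by division against the basis.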
NASA Astrophysics Data System (ADS)
Mikhailov, S. Ia.; Tumatov, K. I.
The paper compares the results obtained using two methods to calculate the amplitude of a short-wave signal field incident on or reflected from a perfectly conducting earth. A technique is presented for calculating the geometric characteristics of the field based on the waveguide approach. It is shown that applying an extended system of characteristic equations to calculate the field amplitude is inadmissible in models in which the second derivatives of the permittivity are discontinuous, unless a suitable treatment of the discontinuity points is applied.
Computer simulation of surface and film processes
NASA Technical Reports Server (NTRS)
Tiller, W. A.; Halicioglu, M. T.
1984-01-01
All the investigations performed employed, in one way or another, a computer simulation technique based on atomistic-level considerations. In general, three types of simulation methods were used for modeling systems with discrete particles that interact via well-defined potential functions: molecular dynamics (a general method for solving the classical equations of motion of a model system); Monte Carlo (Markov-chain ensemble-averaging techniques for modeling the equilibrium properties of a system); and molecular statics (which provides properties of a system at T = 0 K). The effects of three-body forces on the vibrational frequencies of triatomic clusters were investigated. The multilayer relaxation phenomena for low-index planes of an fcc crystal were also analyzed as a function of the three-body interactions. Various surface properties of the Si and SiC systems were calculated. Results obtained from static simulation calculations of slip formation were presented. The more elaborate molecular dynamics calculations on the propagation of cracks in two-dimensional systems were outlined.
An Ab Initio and Kinetic Monte Carlo Simulation Study of Lithium Ion Diffusion on Graphene
Zhong, Kehua; Yang, Yanmin; Xu, Guigui; Zhang, Jian-Min; Huang, Zhigao
2017-01-01
The Li+ diffusion coefficients in Li+-adsorbed graphene systems were determined by combining first-principles calculations based on density functional theory with kinetic Monte Carlo simulations. The calculated results indicate that the interactions between Li ions have a very important influence on lithium diffusion. Based on energy barriers obtained directly from first-principles calculations for single-Li+ and two-Li+ adsorbed systems, a new equation predicting energy barriers with more than two Li ions was deduced. Furthermore, it is found that the temperature dependence of the Li+ diffusion coefficients fits the Arrhenius equation well, rather than the equation from electrochemical impedance spectroscopy that is applied to estimate experimental diffusion coefficients. Moreover, the calculated results also reveal that the Li+ concentration dependence of the diffusion coefficients roughly fits the equation from electrochemical impedance spectroscopy in the low-concentration region, but deviates seriously from it in the high-concentration region. Thus, the equation from the electrochemical impedance spectroscopy technique cannot simply be used to estimate the Li+ diffusion coefficient for all Li+-adsorbed graphene systems at arbitrary Li+ concentrations. Our work suggests that interactions between Li ions, and between Li ions and host atoms, influence Li+ diffusion, so the dependence of the Li+ diffusion coefficient on Li+ intercalation is expected to be complex. PMID:28773122
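The Arrhenius check mentioned above amounts to a straight-line fit of ln D against 1/T. A minimal sketch with synthetic data (the D0 and Ea values are arbitrary illustrations, not results from the paper):

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def arrhenius_fit(temps, diffusivities):
    """Least-squares fit of ln D = ln D0 - Ea/(kB*T).

    temps in K; returns (D0, Ea) with Ea in eV.
    """
    x = [1.0 / t for t in temps]
    y = [math.log(d) for d in diffusivities]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    intercept = ybar - slope * xbar
    return math.exp(intercept), -slope * K_B

# Synthetic diffusivities generated from D0 = 1e-3, Ea = 0.3 eV.
temps = [250.0, 300.0, 350.0, 400.0]
D = [1e-3 * math.exp(-0.3 / (K_B * t)) for t in temps]
d0, ea = arrhenius_fit(temps, D)  # recovers D0 and Ea
```

A data set that obeys the electrochemical-impedance-spectroscopy relation instead would show a systematic curvature in this ln D vs 1/T plot.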
NASA Astrophysics Data System (ADS)
Rykov, S. P.; Rykova, O. A.; Koval, V. S.; Makhno, D. E.; Fedotov, K. V.
2018-03-01
The paper aims to analyze vibrations of a dynamic system equivalent to the suspension system, with regard to the tyre's ability to smooth road irregularities. The research is based on the statistical dynamics of linear automatic control systems and on methods of correlation, spectral and numerical analysis. Input of new data on the smoothing effect of the pneumatic tyre, reflecting changes of the contact area between the wheel and road under suspension vibrations, makes the system non-linear, which requires the use of numerical analysis methods. Taking the variable smoothing ability of the tyre into account when calculating suspension vibrations brings calculated results closer to experimental ones and improves on the assumption of a constant smoothing ability of the tyre.
Dosimetric evaluation of total marrow irradiation using 2 different planning systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nalichowski, Adrian, E-mail: nalichoa@karmanos.org; Eagle, Don G.; Burmeister, Jay
This study compared 2 different treatment planning systems (TPSs) for quality and efficiency of total marrow irradiation (TMI) plans. The TPSs used in this study were VOxel-Less Optimization (VoLO) (Accuray Inc, Sunnyvale, CA) using helical dose delivery on a Tomotherapy Hi-Art treatment unit and Eclipse (Varian Medical Systems Inc, Palo Alto, CA) using volumetric modulated arc therapy (VMAT) dose delivery on a Varian iX treatment unit. A total dose of 1200 cGy was prescribed to cover 95% of the planning target volume (PTV). The plans were optimized and calculated based on a single CT data and structure set using the Alderson Rando phantom (The Phantom Laboratory, Salem, NY) and physician-contoured target and organ at risk (OAR) volumes. The OARs were lungs, heart, liver, kidneys, brain, and small bowel. The plans were evaluated based on plan quality, time to optimize the plan and calculate the dose, and beam-on time. The resulting mean and maximum doses to the PTV were 1268 and 1465 cGy for VoLO and 1284 and 1541 cGy for Eclipse, respectively. For 5 of 6 OAR structures the VoLO system achieved lower mean and D10 doses, ranging from 22% to 52% and 3% to 44%, respectively. Total computational time including only optimization and dose calculation was 0.9 hours for VoLO and 3.8 hours for Eclipse. These times do not include user-dependent target delineation and field setup. Both planning systems are capable of creating high-quality plans for total marrow irradiation. The VoLO planning system was able to achieve a more uniform dose distribution throughout the target volume and steeper dose fall-off, resulting in superior OAR sparing. VoLO's graphics processing unit (GPU)–based optimization and dose calculation algorithm also allowed much faster creation of TMI plans.
NASA Astrophysics Data System (ADS)
Wu, Di; Kofke, David A.
2005-08-01
We consider ways to quantify the overlap of the parts of phase space important to two systems, labeled A and B. Of interest is how much of the A-important phase space lies in that important to B, and how much of B lies in A. Two measures are proposed. The first considers four total-energy distributions, formed from all combinations made by tabulating either the A-system or the B-system energy when sampling either the A or B system. Measures for A in B and B in A are given by two overlap integrals defined on pairs of these distributions. The second measure is based on information theory, and defines two relative entropies which are conveniently expressed in terms of the dissipated work for free-energy perturbation (FEP) calculations in the A → B and B → A directions, respectively. Phase-space overlap is an important consideration in the performance of free-energy calculations. To demonstrate this connection, we examine bias in FEP calculations applied to a system of independent particles in a harmonic potential. Systems are selected to represent a range of overlap situations, including extreme subset, subset, partial overlap, and nonoverlap. The magnitude and symmetry of the bias (A → B vs B → A) are shown to correlate well with the overlap, and consequently with the overlap measures. The relative entropies are used to scale the amount of sampling to obtain a universal bias curve. This result leads to the development of a simple heuristic that can be applied to determine whether a work-based free-energy measurement is free of bias. The heuristic is based in part on the measured free energy, but we argue that it is fail-safe inasmuch as any bias in the measurement will not promote a false indication of accuracy.
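The information-theoretic measure can be illustrated with a discrete relative entropy between two normalized energy histograms. The histograms below are invented and this is a sketch of the asymmetry idea only; the paper expresses the relative entropies via the dissipated work of FEP calculations rather than from binned densities.

```python
import math

def relative_entropy(p, q):
    """Discrete relative entropy (KL divergence) D(p || q) in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Two normalized "energy histograms": A is peaked, B is flat, so the
# B-important region is broader than the A-important one.
p_a = [0.1, 0.4, 0.4, 0.1]
p_b = [0.25, 0.25, 0.25, 0.25]
d_ab = relative_entropy(p_a, p_b)
d_ba = relative_entropy(p_b, p_a)
```

Both directions are non-negative but generally unequal, which mirrors the A → B vs B → A asymmetry of FEP bias discussed in the abstract.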
NASA Astrophysics Data System (ADS)
Kutai-Asis, Hofit; Barbiro-Michaely, Efrat; Deutsch, Assaf; Mayevsky, Avraham
2006-02-01
In our previous publication (Mayevsky et al., SPIE 5326: 98-105, 2004) we described a multiparametric fiber-optic system enabling the evaluation of 4 physiological parameters as indicators of tissue vitality. Since the correlation between the various parameters may differ under various pathophysiological conditions, there is a need for an objective quantitative index that integrates the relative changes measured in real time by the multiparametric monitoring system into a single number: a vitality index. Such an approach to calculating a tissue vitality index is critical for the use of such an instrument in clinical environments. In the current presentation we report preliminary results indicating that the calculation of an objective tissue vitality index is feasible. We used an intuitive empirical approach based on comparison between the index calculated by the computer and the subjective evaluation made by an expert in the field of physiological monitoring. We used the in vivo rat brain as an animal model in our current studies. The rats were exposed to anoxia, ischemia and cortical spreading depression, and the responses were recorded in real time. At the end of the monitoring session the results were analyzed and the tissue vitality index was calculated offline. Mitochondrial NADH, tissue blood flow and oxy-hemoglobin were used to calculate the vitality index of the brain in vivo, where each parameter received a different weight in each experiment type based on its significance. It was found that the mitochondrial NADH response was the main factor affecting the calculated vitality index.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katsuta, Y; Tohoku University Graduate School of Medicine, Sendal, Miyagi; Kadoya, N
Purpose: In this study, we developed a system to calculate a three-dimensional (3D) dose that reflects the dosimetric error caused by leaf miscalibration for head and neck and prostate volumetric modulated arc therapy (VMAT), in real time and without an additional treatment planning system (TPS) calculation. Methods: An original system based on Clarkson dose calculation was developed in MATLAB (MathWorks, Natick, MA) to calculate the dosimetric error caused by leaf miscalibration. Our program first calculates point doses at the isocenter, using Clarkson dose calculation, for the baseline VMAT plan and for a modified plan generated by inducing MLC errors that enlarged the aperture size by 1.0 mm. Second, an error-induced 3D dose is generated by transforming the TPS baseline 3D dose using the calculated point doses. Results: Mean computing time was less than 5 seconds. For seven head and neck and prostate plans, between our method and the TPS-calculated error-induced 3D dose, the 3D gamma passing rates (0.5%/2 mm, global) were 97.6±0.6% and 98.0±0.4%. The dose percentage changes in the dose-volume histogram parameter of mean dose on the target volume were 0.1±0.5% and 0.4±0.3%, and in the generalized equivalent uniform dose on the target volume were −0.2±0.5% and 0.2±0.3%. Conclusion: The erroneous 3D dose calculated by our method is useful for checking the dosimetric error caused by leaf miscalibration before pretreatment patient QA dosimetry checks.
A mathematical procedure to predict optical performance of CPCs
NASA Astrophysics Data System (ADS)
Yu, Y. M.; Yu, M. J.; Tang, R. S.
2016-08-01
To evaluate the optical performance of a CPC-based concentrating photovoltaic system, it is essential to find the angular dependence of the optical efficiency of a compound parabolic concentrator (CPC-θe), where the incident angle of solar rays on the solar cells is restricted within θe for radiation over its acceptance angle. In this work, a mathematical procedure was developed to calculate the optical efficiency of the CPC-θe for radiation incident at any angle, based on radiation transfer within the CPC-θe. Calculations show that, given the acceptance half-angle (θa), the annual radiation collected by a full CPC-θe increases with θe, and the CPC without restriction of exit angle (CPC-90) annually collects the most radiation due to its larger geometric concentration (Ct); whereas for truncated CPCs with identical θa and Ct, the annual radiation collected by the CPC-θe is almost identical to, or even slightly higher than, that collected by CPC-90. Calculations also indicate that the annual radiation arriving on the absorber of the CPC-θe at angles larger than θe decreases with increasing θe and is always less than that of CPC-90; this implies that a CPC-θe-based PV system is more efficient than a CPC-90-based PV system, because radiation incident on the solar cells at large angles is poorly converted into electricity.
NASA Astrophysics Data System (ADS)
Cheng, Fen; Hu, Wanxin
2017-05-01
Based on an analysis of domestic and foreign experience with parking policy, an impact analysis process for parking strategies is designed. First, group decision theory is used to create a parking strategy index system and calculate its weights; the index system covers the government, parking operators and travelers. Then, multi-level extension theory is used to analyze the CBD parking strategy, assessing it by calculating the correlation degree of each indicator. Finally, a parking charge strategy is assessed through a case study, providing a scientific and reasonable basis for parking strategy assessment. The results show that the model can effectively evaluate multi-target, multi-attribute parking policies.
Time Analysis of Building Dynamic Response Under Seismic Action. Part 1: Theoretical Propositions
NASA Astrophysics Data System (ADS)
Ufimtcev, E. M.
2017-11-01
The first part of the article presents the main provisions of an analytical approach, the time analysis method (TAM), developed for calculating the elastic dynamic response of rod structures as discrete dissipative systems (DDS) and based on the investigation of the characteristic matrix quadratic equation. The assumptions adopted in constructing the mathematical model of structural oscillations, as well as the features of calculating and recording seismic forces based on earthquake accelerogram data, are given. A resolving system of equations is given for determining the nodal (kinematic and force) response parameters as well as the stress-strain state (SSS) parameters of the system's rods.
A program code generator for multiphysics biological simulation using markup languages.
Amano, Akira; Kawabata, Masanari; Yamashita, Yoshiharu; Rusty Punzalan, Florencio; Shimayoshi, Takao; Kuwabara, Hiroaki; Kunieda, Yoshitoshi
2012-01-01
To cope with the complexity of biological function simulation models, model representation with a description language is becoming popular. However, the simulation software itself becomes complex in such environments, and it is therefore difficult to modify the simulation conditions, target computation resources or calculation methods. Complex biological function simulation software involves 1) model equations, 2) boundary conditions and 3) calculation schemes. A model description file addresses the first point and partly the second; the third is difficult to handle, however, because a variety of calculation schemes is required for simulation models constructed from two or more elementary models. We introduce a simulation software generation system which uses a description-language-based specification of the coupling calculation scheme together with the cell model description file. With this software, we can easily generate biological simulation code for a variety of coupling calculation schemes. To show the efficiency of our system, an example of a coupling calculation scheme with three elementary models is shown.
NASA Astrophysics Data System (ADS)
Zhou, Chi-Chun; Dai, Wu-Sheng
2018-02-01
In statistical mechanics, for a system with a fixed number of particles, e.g. a finite-size system, strictly speaking, the thermodynamic quantity needs to be calculated in the canonical ensemble. Nevertheless, the calculation of the canonical partition function is difficult. In this paper, based on the mathematical theory of the symmetric function, we suggest a method for the calculation of the canonical partition function of ideal quantum gases, including ideal Bose, Fermi, and Gentile gases. Moreover, we express the canonical partition functions of interacting classical and quantum gases given by the classical and quantum cluster expansion methods in terms of the Bell polynomial in mathematics. The virial coefficients of ideal Bose, Fermi, and Gentile gases are calculated from the exact canonical partition function. The virial coefficients of interacting classical and quantum gases are calculated from the canonical partition function by using the expansion of the Bell polynomial, rather than calculated from the grand canonical potential.
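For ideal Bose and Fermi gases, the exact canonical partition function can be computed with the standard symmetric-function recursion Z_N = (1/N) Σ_{k=1..N} s_k z_k Z_{N−k}, where z_k = Σ_i e^{−kβε_i}, with s_k = 1 for bosons and s_k = (−1)^{k+1} for fermions. The sketch below uses that textbook recursion, not the paper's Bell-polynomial machinery or its Gentile-gas generalization.

```python
import math

def canonical_partition(levels, beta, N, fermions=False):
    """Exact canonical partition function of N ideal bosons or fermions.

    Uses Z_N = (1/N) * sum_{k=1..N} sign_k * z_k * Z_{N-k},
    with z_k = sum_i exp(-k*beta*eps_i), Z_0 = 1.
    """
    z = [0.0] + [sum(math.exp(-k * beta * e) for e in levels)
                 for k in range(1, N + 1)]
    Z = [1.0] + [0.0] * N
    for n in range(1, N + 1):
        acc = 0.0
        for k in range(1, n + 1):
            sign = (-1.0) ** (k + 1) if fermions else 1.0
            acc += sign * z[k] * Z[n - k]
        Z[n] = acc / n
    return Z[N]

# Two levels at energies 0 and 1 (units of kT). Two fermions must occupy
# both levels (Pauli exclusion), so Z_2 = exp(-1); two bosons give the
# three symmetric states with Z_2 = 1 + exp(-1) + exp(-2).
beta, levels = 1.0, [0.0, 1.0]
z2_f = canonical_partition(levels, beta, 2, fermions=True)
z2_b = canonical_partition(levels, beta, 2, fermions=False)
```

Thermodynamic quantities of the finite-size system then follow from Z_N directly, e.g. F = −kT ln Z_N, without invoking the grand canonical potential.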
IOL calculation using paraxial matrix optics.
Haigis, Wolfgang
2009-07-01
Matrix methods have a long tradition in paraxial physiological optics. They are especially suited to describe and handle optical systems in a simple and intuitive manner. While these methods are more and more applied to calculate the refractive power(s) of toric intraocular lenses (IOL), they are hardly used in routine IOL power calculations for cataract and refractive surgery, where analytical formulae are commonly utilized. Since these algorithms are also based on paraxial optics, matrix optics can offer rewarding approaches to standard IOL calculation tasks, as will be shown here. Some basic concepts of matrix optics are introduced and the system matrix for the eye is defined, and its application in typical IOL calculation problems is illustrated. Explicit expressions are derived to determine: predicted refraction for a given IOL power; necessary IOL power for a given target refraction; refractive power for a phakic IOL (PIOL); predicted refraction for a thick lens system. Numerical examples with typical clinical values are given for each of these expressions. It is shown that matrix optics can be applied in a straightforward and intuitive way to most problems of modern routine IOL calculation, in thick or thin lens approximation, for aphakic or phakic eyes.
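One of the listed tasks, the IOL power for a target refraction of zero (emmetropia), can be sketched with 2×2 ray-transfer matrices in thin-lens approximation. This is an illustrative model with assumed values (corneal power 43 D, axial length 23.5 mm, effective lens position 5 mm, n = 1.336), not a clinical formula: parallel rays focus on the retina exactly when the A-element of the system matrix vanishes, and since that element is linear in the IOL power, two evaluations determine the root.

```python
def mat_mul(m, n):
    """2x2 matrix product m*n."""
    return [[m[0][0]*n[0][0] + m[0][1]*n[1][0], m[0][0]*n[0][1] + m[0][1]*n[1][1]],
            [m[1][0]*n[0][0] + m[1][1]*n[1][0], m[1][0]*n[0][1] + m[1][1]*n[1][1]]]

def refraction(power_diopters):
    """Thin refracting element acting on (height y, reduced angle n*u)."""
    return [[1.0, 0.0], [-power_diopters, 1.0]]

def translation(distance_m, index):
    """Homogeneous propagation over the reduced distance t/n."""
    return [[1.0, distance_m / index], [0.0, 1.0]]

def emmetropic_iol_power(corneal_power, axial_length, elp, n=1.336):
    """IOL power that images parallel rays onto the retina (thin-lens sketch)."""
    def a_element(iol_power):
        s = mat_mul(translation(axial_length - elp, n),
                    mat_mul(refraction(iol_power),
                            mat_mul(translation(elp, n),
                                    refraction(corneal_power))))
        return s[0][0]
    # A(P) is linear in P, so two evaluations pin down the zero crossing.
    a0, a1 = a_element(0.0), a_element(1.0)
    return -a0 / (a1 - a0)

# Hypothetical but typical values: K = 43 D, axial length 23.5 mm, ELP 5 mm.
iol = emmetropic_iol_power(43.0, 0.0235, 0.005)  # roughly 21 D
```

The same system matrix yields the other listed quantities (predicted refraction for a given IOL, phakic IOL power) by reading off different elements or solving for different unknowns.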
García-Garduño, Olivia A; Rodríguez-Ávila, Manuel A; Lárraga-Gutiérrez, José M
2018-01-01
Silicon-diode-based detectors are commonly used for the dosimetry of small radiotherapy beams due to their relatively small volumes and high sensitivity to ionizing radiation. Nevertheless, silicon-diode-based detectors tend to over-respond in small fields because of their high density relative to water. For that reason, detector-specific beam correction factors ([Formula: see text]) have been recommended not only to correct the total scatter factors but also to correct the tissue-maximum and off-axis ratios. However, the application of [Formula: see text] to in-depth and off-axis locations has not been studied. The goal of this work is to address the impact of the correction factors on the calculated dose distribution in static non-conventional photon beams (specifically, in stereotactic radiosurgery with circular collimators). To achieve this goal, the total scatter factors, tissue-maximum, and off-axis ratios were measured with a stereotactic field diode for 4.0-, 10.0-, and 20.0-mm circular collimators. The irradiation was performed with a Novalis® linear accelerator using a 6-MV photon beam. The detector-specific correction factors were calculated and applied to the experimental dosimetry data for in-depth and off-axis locations. The corrected and uncorrected dosimetry data were used to commission a treatment planning system for radiosurgery planning. Various plans were calculated with simulated lesions using the uncorrected and corrected dosimetry. The resulting dose calculations were compared using the gamma index test with several criteria. The results of this work present important conclusions for the use of detector-specific beam correction factors ([Formula: see text]) in a treatment planning system. The use of [Formula: see text] for total scatter factors has an important impact on monitor unit calculation.
On the contrary, the use of [Formula: see text] for tissue-maximum and off-axis ratios does not have an important impact on the dose distribution calculated by the treatment planning system. This conclusion is only valid for the combination of treatment planning system, detector, and correction factors used in this work; however, the technique can be applied to other treatment planning systems, detectors, and correction factors.
Systemic errors calibration in dynamic stitching interferometry
NASA Astrophysics Data System (ADS)
Wu, Xin; Qi, Te; Yu, Yingjie; Zhang, Linna
2016-05-01
Systematic error is the main error source in sub-aperture stitching calculations. In this paper, a systematic error calibration method based on pseudo-shearing is proposed. The method is suitable for dynamic stitching interferometry of large optical flats. Its feasibility is verified by simulations and experiments.
ERIC Educational Resources Information Center
Mills, Myron L.
1988-01-01
A system developed for more efficient evaluation of graduate medical students' progress uses numerical scoring and a microcomputer database management system as an alternative to manual methods to produce accurate, objective, and meaningful summaries of resident evaluations. (Author/MSE)
Estimation of Temporal Gait Parameters Using a Wearable Microphone-Sensor-Based System
Wang, Cheng; Wang, Xiangdong; Long, Zhou; Yuan, Jing; Qian, Yueliang; Li, Jintao
2016-01-01
Most existing wearable gait analysis methods focus on the analysis of data obtained from inertial sensors. This paper proposes a novel, low-cost, wireless and wearable gait analysis system which uses microphone sensors to collect footstep sound signals during walking. To the best of our knowledge, this is the first time a microphone sensor has been used as a wearable gait analysis device. Based on this system, a gait analysis algorithm for estimating the temporal parameters of gait is presented. The algorithm fully uses the fusion of the footstep sound signals of both feet and includes three stages: footstep detection, heel-strike and toe-on event detection, and calculation of temporal gait parameters. Experimental results show that, with a total of 240 data sequences and 1732 steps collected using three different gait data collection strategies from 15 healthy subjects, the proposed system achieves an average 0.955 F1-measure for footstep detection, an average 94.52% accuracy rate for heel-strike detection and a 94.25% accuracy rate for toe-on detection. Using these detection results, nine temporal gait parameters are calculated, and these parameters are consistent with their corresponding normal gait temporal parameters and with calculations from labeled data. The results verify the effectiveness of the proposed system and algorithm for temporal gait parameter estimation. PMID:27999321
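Given detected per-foot events, the final stage reduces to simple differences of timestamps. The sketch below assumes heel-strike and toe-off timestamps for one foot (the paper detects heel-strike and toe-on events) and uses invented, perfectly regular data; it is not the authors' algorithm.

```python
def temporal_gait_params(heel_strikes, toe_offs):
    """Mean stride, stance and swing times (s) for one foot.

    heel_strikes, toe_offs: sorted timestamps, one toe-off following
    each heel strike of the same foot.
    """
    strides = [b - a for a, b in zip(heel_strikes, heel_strikes[1:])]
    stances = [t - h for h, t in zip(heel_strikes, toe_offs)]
    swings = [stride - stance for stride, stance in zip(strides, stances)]
    return {
        "stride_time": sum(strides) / len(strides),
        "stance_time": sum(stances) / len(stances),
        "swing_time": sum(swings) / len(swings),
    }

# Invented, perfectly regular gait: strike every 1.1 s, toe-off 0.7 s later.
hs = [0.0, 1.1, 2.2, 3.3]
to = [0.7, 1.8, 2.9, 4.0]
params = temporal_gait_params(hs, to)  # stride 1.1 s, stance 0.7 s, swing 0.4 s
```

Real detector output would first need the event sequences of the two feet aligned and spurious detections filtered out.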
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerboth, Matthew D.; Setyawan, Wahyu; Henager, Charles H.
2014-01-07
A method is established and validated using molecular dynamics (MD) to determine displacement threshold energies (Ed) in nanolayered, multilayered systems of dissimilar metals. The method is applied to specifically oriented nanolayered films of Al-Ti where the crystal structure and interface orientations are varied in atomic models and Ed is calculated. Methods for defect detection are developed and discussed based on prior research in the literature and on specific crystallographic directions available in the nanolayered systems. These are compared and contrasted to similar calculations in corresponding bulk materials, including fcc Al, fcc Ti, hcp Al, and hcp Ti. In all cases, the calculated Ed in the multilayers are intermediate to the corresponding bulk values but exhibit some important directionality. In the nanolayers, defect detection demonstrated systematic differences in the behavior of Ed in each layer. Importantly, collision cascade damage exhibits significant defect partitioning within the Al and Ti layers that is hypothesized to be an intrinsic property of dissimilar nanolayered systems. This type of partitioning could be partly responsible for observed asymmetric radiation damage responses in many multilayered systems. In addition, a pseudo-random direction was introduced to approximate the average Ed without performing numerous simulations with random directions.
NASA Astrophysics Data System (ADS)
Lan, G.; Jiang, J.; Li, D. D.; Yi, W. S.; Zhao, Z.; Nie, L. N.
2013-12-01
The calculation of water-hammer pressure for single-phase liquid flow in a pipeline of uniform characteristics is already well developed, but less research has addressed the calculation of water-hammer pressure in complex pipelines carrying slurry flows with solid particles. In this paper, based on developments in slurry pipelines at home and abroad, the fundamental principle and method of numerical simulation of transient processes are presented, and several boundary conditions are given. A model for calculating the water hammer of the solid and fluid phases is established based on a practical long-distance slurry pipeline transportation project. After performing a numerical simulation of the transient process and analyzing and comparing the results, effective protection measures and operating advice are recommended, which has guiding significance for the design and operating management of practical long-distance slurry pipeline transportation systems.
NASA Astrophysics Data System (ADS)
Hao, Huadong; Shi, Haolei; Yi, Pengju; Liu, Ying; Li, Cunjun; Li, Shuguang
2018-01-01
A volume metrology method based on internal electro-optical distance ranging is established for large vertical energy-storage tanks. After analyzing the mathematical model for vertical tank volume calculation, the key point-cloud processing algorithms, such as gross-error elimination, filtering, streamlining, and radius calculation, are studied. The corresponding volume values at different liquid levels are calculated automatically by computing the cross-sectional area along the horizontal direction and integrating in the vertical direction. To design the comparison system, a vertical tank with a nominal capacity of 20,000 m3 is selected as the research object, and the results show that the method has good repeatability and reproducibility. Using the conventional capacity measurement method as a reference, the relative deviation of the calculated volume is less than 0.1%, meeting the measurement requirements and demonstrating the method's feasibility and effectiveness.
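The slice-wise integration step can be sketched as trapezoidal integration of circular cross-sections A(z) = πr(z)² over height. The radius profile below is an idealized cylinder rather than point-cloud output, and the function name is hypothetical.

```python
import math

def tank_volumes(profile, levels):
    """Volumes at the given liquid levels by trapezoidal integration of the
    circular cross-sectional area A(z) = pi*r(z)^2 over height.

    profile: list of (height z in m, radius r in m), sorted by z.
    """
    zs = [z for z, _ in profile]
    areas = [math.pi * r * r for _, r in profile]
    volumes = []
    for level in levels:
        v = 0.0
        for (z0, a0), (z1, a1) in zip(zip(zs, areas), zip(zs[1:], areas[1:])):
            if z1 <= level:                      # fully submerged slice
                v += 0.5 * (a0 + a1) * (z1 - z0)
            elif z0 < level:                     # slice cut by the surface
                frac = (level - z0) / (z1 - z0)
                a_top = a0 + frac * (a1 - a0)
                v += 0.5 * (a0 + a_top) * (level - z0)
        volumes.append(v)
    return volumes

# Idealized cylinder: radius 2 m, height 10 m, sampled every metre.
profile = [(float(z), 2.0) for z in range(11)]
vols = tank_volumes(profile, [2.5, 5.0, 10.0])
```

In practice the per-height radii would come from the cleaned point cloud, so errors in the radius-calculation stage propagate directly into the capacity table.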
Oka, M; Kamisaka, H; Fukumura, T; Hasegawa, T
2015-11-21
The oxygen ionic conduction in ZrO2 systems under tensile epitaxial strain was investigated by performing ab initio molecular dynamics (MD) calculations based on density functional theory (DFT) to elucidate the essential factors in the colossal ionic conductivity observed in the yttria stabilized ZrO2 (YSZ)/SrTiO3 heterostructure. Three factors were evaluated: lattice strain, oxygen vacancies, and dopants. Phonon calculations based on density functional perturbation theory (DFPT) were used to obtain the most stable structure for nondoped ZrO2 under 7% tensile strain along the a- and b-axes. This structure has the space group Pbcn, which is entirely different from that of cubic ZrO2, suggesting that previous ab initio MD calculations assuming cubic ZrO2 may have overestimated the ionic conductivity due to relaxation from the initial structure to the stable structure (Pbcn). Our MD calculations revealed that the ionic conductivity is enhanced only when tensile strain and oxygen vacancies are incorporated, although the presently obtained diffusion constant is far below the range for the colossal ionic conduction experimentally observed. The enhanced ionic conductivity is due to the combined effects of oxygen sublattice formation induced by strain and deformation of this sublattice by oxygen vacancies.
49 CFR 1242.79 - Communication systems operations (account XX-55-77).
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 9 2010-10-01 2010-10-01 false Communication systems operations (account XX-55-77...-Transportation § 1242.79 Communication systems operations (account XX-55-77). Separate common expenses on bases of the percentages calculated for the separation of Communication Systems (account XX-19-20), § 1242...
System parameter identification from projection of inverse analysis
NASA Astrophysics Data System (ADS)
Liu, K.; Law, S. S.; Zhu, X. Q.
2017-05-01
The output of a system due to a change of its parameters is often approximated with the sensitivity matrix from the first-order Taylor series. The system output can be measured in practice, but the perturbation in the system parameters is usually not available. Inverse sensitivity analysis can be adopted to estimate the unknown system parameter perturbation from the difference between the observed output data and the corresponding analytical output data calculated from the original system model. The inverse sensitivity analysis is revisited in this paper with improvements based on Principal Component Analysis of the analytical data calculated from the known system model. The identification equation is projected into a subspace of principal components of the system output, and the sensitivity of the inverse analysis is improved with an iterative model updating procedure. The proposed method is numerically validated with a planar truss structure and with dynamic experiments on a seven-storey planar steel frame. Results show that it is robust to measurement noise, and that the location and extent of stiffness perturbation can be identified with better accuracy than with the conventional response sensitivity-based method.
NASA Astrophysics Data System (ADS)
Ji, Yanju; Wang, Hongyuan; Lin, Jun; Guan, Shanshan; Feng, Xue; Li, Suyi
2014-12-01
Performance testing and calibration of airborne transient electromagnetic (ATEM) systems are conducted to obtain the electromagnetic response of ground loops. It is necessary to accurately calculate the mutual inductance between transmitting coils, receiving coils and ground loops to compute the electromagnetic responses. Therefore, based on Neumann's formula and the measured attitudes of the coils, this study deduces the formula for the mutual inductance calculation between circular and quadrilateral coils, circular and circular coils, and quadrilateral and quadrilateral coils using a rotation matrix, and then proposes a method to calculate the mutual inductance between two coils at arbitrary attitudes (roll, pitch, and yaw). Using coil attitude simulated data of an ATEM system, we calculate the mutual inductance of transmitting coils and ground loops at different attitudes, analyze the impact of coil attitudes on mutual inductance, and compare the computational accuracy and speed of the proposed method with those of other methods using the same data. The results show that the relative error of the calculation is smaller and that the speed-up is significant compared to other methods. Moreover, the proposed method is also applicable to the mutual inductance calculation of polygonal and circular coils at arbitrary attitudes and is highly expandable.
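Neumann's formula discretizes directly: replace each loop by segment midpoints and chord vectors, then sum (dl₁·dl₂)/|r₁−r₂| over all segment pairs. The sketch below covers only the coaxial, zero-attitude case (no rotation matrix), with small, well-separated loops so the magnetic-dipole limit provides an independent sanity check; it is not the paper's arbitrary-attitude method.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def circular_loop(radius, z, n):
    """Discretize a circular loop in the plane z = const (axis along z):
    returns segment midpoints and chord (dl) vectors."""
    pts, dls = [], []
    for i in range(n):
        t0, t1 = 2 * math.pi * i / n, 2 * math.pi * (i + 1) / n
        tm = 0.5 * (t0 + t1)
        pts.append((radius * math.cos(tm), radius * math.sin(tm), z))
        dls.append((radius * (math.cos(t1) - math.cos(t0)),
                    radius * (math.sin(t1) - math.sin(t0)), 0.0))
    return pts, dls

def neumann_mutual_inductance(loop1, loop2):
    """M = (mu0 / 4 pi) * double sum of (dl1 . dl2) / |r1 - r2|."""
    pts1, dls1 = loop1
    pts2, dls2 = loop2
    acc = 0.0
    for p1, d1 in zip(pts1, dls1):
        for p2, d2 in zip(pts2, dls2):
            r = math.dist(p1, p2)
            acc += (d1[0] * d2[0] + d1[1] * d2[1] + d1[2] * d2[2]) / r
    return MU0 / (4 * math.pi) * acc

# Two small coaxial loops far apart, so the magnetic-dipole limit
# M ~ mu0*pi*a^2*b^2/(2*d^3) gives an independent check.
a, b, d, n = 0.01, 0.01, 0.5, 64
M = neumann_mutual_inductance(circular_loop(a, 0.0, n), circular_loop(b, d, n))
```

Handling arbitrary roll, pitch and yaw, as in the abstract, amounts to applying a rotation matrix to the midpoints and dl vectors of one loop before the double sum.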
Xu, Zhongnan; Joshi, Yogesh V; Raman, Sumathy; Kitchin, John R
2015-04-14
We validate the usage of the calculated, linear response Hubbard U for evaluating accurate electronic and chemical properties of bulk 3d transition metal oxides. We find calculated values of U lead to improved band gaps. For the evaluation of accurate reaction energies, we first identify and eliminate contributions to the reaction energies of bulk systems due only to changes in U and construct a thermodynamic cycle that references the total energies of unique U systems to a common point using a DFT + U(V) method, which we recast from a recently introduced DFT + U(R) method for molecular systems. We then introduce a semi-empirical method based on weighted DFT/DFT + U cohesive energies to calculate bulk oxidation energies of transition metal oxides using density functional theory and linear response calculated U values. We validate this method by calculating 14 reactions energies involving V, Cr, Mn, Fe, and Co oxides. We find up to an 85% reduction of the mean average error (MAE) compared to energies calculated with the Perdew-Burke-Ernzerhof functional. When our method is compared with DFT + U with empirically derived U values and the HSE06 hybrid functional, we find up to 65% and 39% reductions in the MAE, respectively.
Phase stability of transition metals and alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hixson, R.S.; Schiferl, D.; Wills, J.M.
1997-06-01
This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This project was focused on resolving unexplained differences in calculated and measured phase transition pressures in transition metals. Part of the approach was to do new, higher accuracy calculations of transition pressures for group 4B and group 6B metals. Theory indicates that the transition pressures for these baseline metals should change if alloyed with a d-electron donor metal, and calculations done using the Local Density Approximation (LDA) and the Virtual Crystal Approximation (VCA) indicate that this is true. Alloy systems were calculated for Ti, Zr and Hf based alloys with various solute concentrations. The second part of the program was to do new Diamond Anvil Cell (DAC) measurements to experimentally verify calculational results. Alloys were prepared for these systems with grain size suitable for Diamond Anvil Cell experiments. Experiments were done on pure Ti as well as Ti-V and Ti-Ta alloys. Measuring unambiguous transition pressures for these systems proved difficult, but a newly developed technique yielded good results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rüger, Robert, E-mail: rueger@scm.com; Department of Theoretical Chemistry, Vrije Universiteit Amsterdam, De Boelelaan 1083, 1081 HV Amsterdam; Wilhelm-Ostwald-Institut für Physikalische und Theoretische Chemie, Linnéstr. 2, 04103 Leipzig
2016-05-14
We propose a new method of calculating electronically excited states that combines a density functional theory based ground state calculation with a linear response treatment that employs approximations used in the time-dependent density functional based tight binding (TD-DFTB) approach. The new method, termed TD-DFT+TB, does not rely on the DFTB parametrization and is therefore applicable to systems involving all combinations of elements. We show that the new method yields UV/Vis absorption spectra that are in excellent agreement with computationally much more expensive TD-DFT calculations. Errors in vertical excitation energies are reduced by a factor of two compared to TD-DFTB.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anisimova, N. P.; Tropina, N. E., E-mail: Mazina_ne@mail.ru; Tropin, A. N.
2010-12-15
The opportunity to increase the output emission efficiency of PbSe-based photoluminescence structures by depositing an antireflection layer is analyzed. A model of a three-layer thin film where the central layer is formed of a composite medium is proposed to calculate the reflectance spectra of the system. In Bruggeman's approximation of the effective medium theory, the effective permittivity of the composite layer is calculated. The model proposed in the study is used to calculate the thickness of the arsenic chalcogenide (AsS{sub 4}) antireflection layer. The optimal AsS{sub 4} layer thickness determined experimentally is close to the results of calculation, and the corresponding gain in the output photoluminescence efficiency is as high as 60%.
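For a stack like this, normal-incidence reflectance can be computed with the standard characteristic (transfer) matrix method; the index of the composite layer would come from an effective-medium model such as Bruggeman's. The sketch below implements only the generic matrix method with made-up refractive indices, not the paper's actual PbSe/AsS4 parameters:

```python
import numpy as np

def reflectance(n_layers, d_layers, n_inc, n_sub, wavelength):
    """Normal-incidence reflectance of a thin-film stack on a substrate,
    via the standard 2x2 characteristic-matrix method (non-absorbing media)."""
    m = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2.0 * np.pi * n * d / wavelength  # phase thickness of the layer
        m = m @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    b, c = m @ np.array([1.0, n_sub])
    r = (n_inc * b - c) / (n_inc * b + c)
    return abs(r) ** 2
```

A quarter-wave layer with index sqrt(n_sub) cancels the reflection entirely, the classic single-layer antireflection condition.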
Monte Carlo-based QA for IMRT of head and neck cancers
NASA Astrophysics Data System (ADS)
Tang, F.; Sham, J.; Ma, C.-M.; Li, J.-S.
2007-06-01
It is well-known that the presence of a large air cavity in a dense medium (or patient) introduces significant electronic disequilibrium when irradiated with a megavoltage X-ray field. This condition may be worsened by the possible use of tiny beamlets in intensity-modulated radiation therapy (IMRT). Commercial treatment planning systems (TPSs), in particular those based on the pencil-beam method, do not provide accurate dose computation for the lungs and other cavity-laden body sites such as the head and neck. In this paper we present the use of the Monte Carlo (MC) technique for dose re-calculation of IMRT of head and neck cancers. In our clinic, a turn-key software system is set up for MC calculation and comparison with TPS-calculated treatment plans as part of the quality assurance (QA) programme for IMRT delivery. A set of 10 off-the-shelf PCs is employed as the MC calculation engine, with treatment plan parameters imported from the TPS via a graphical user interface (GUI) which also provides a platform for launching remote MC simulation and subsequent dose comparison with the TPS. The TPS-segmented intensity maps are used as input for the simulation, thus skipping the time-consuming simulation of the multi-leaf collimator (MLC). The primary objective of this approach is to assess the accuracy of the TPS calculations in the presence of air cavities in the head and neck, whereas the accuracy of leaf segmentation is verified by fluence measurement using a fluoroscopic camera-based imaging device. This measurement can also validate the correct transfer of intensity maps to the record-and-verify system. Comparisons between TPS and MC calculations of 6 MV IMRT for typical head and neck treatments reveal regional consistency in dose distribution except at and around the sinuses, where our pencil-beam-based TPS sometimes over-predicts the dose by up to 10%, depending on the size of the cavities.
In addition, dose re-buildup of up to 4% is observed at the posterior nasopharyngeal mucosa for some treatments with heavily-weighted anterior fields.
Thermal management of batteries
NASA Astrophysics Data System (ADS)
Gibbard, H. F.; Chen, C.-C.
Control of the internal temperature during high rate discharge or charge can be a major design problem for large, high energy density battery systems. A systematic approach to the thermal management of such systems is described for different load profiles based on: thermodynamic calculations of internal heat generation; calorimetric measurements of heat flux; analytical and finite difference calculations of the internal temperature distribution; appropriate system designs for heat removal and temperature control. Examples are presented of thermal studies on large lead-acid batteries for electrical utility load levelling and nickel-zinc and lithium-iron sulphide batteries for electric vehicle propulsion.
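The internal heat generation that such thermodynamic calculations estimate is commonly split into an irreversible polarization term, I(Voc − V), and a reversible entropic term, I·T·dVoc/dT. A minimal sketch of that split (the numbers in the usage are illustrative, not from the article):

```python
def heat_generation(current, v_oc, v_terminal, temp_k, dvoc_dt):
    """Battery heat rate in watts: irreversible polarization heat I*(Voc - V)
    plus reversible entropic heat I*T*dVoc/dT. The entropic term's sign
    follows the entropy coefficient and can cool the cell on discharge."""
    irreversible = current * (v_oc - v_terminal)
    reversible = current * temp_k * dvoc_dt
    return irreversible + reversible
```

For example, a 50 A discharge with a 0.2 V polarization drop at 300 K and an entropy coefficient of -0.2 mV/K yields 10 W of irreversible heating offset by 3 W of entropic cooling.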
Kussmann, Jörg; Ochsenfeld, Christian
2007-11-28
A density matrix-based time-dependent self-consistent field (D-TDSCF) method for the calculation of dynamic polarizabilities and first hyperpolarizabilities using the Hartree-Fock and Kohn-Sham density functional theory approaches is presented. The D-TDSCF method allows us to reduce the asymptotic scaling behavior of the computational effort from cubic to linear for systems with a nonvanishing band gap. The linear scaling is achieved by combining a density matrix-based reformulation of the TDSCF equations with linear-scaling schemes for the formation of Fock- or Kohn-Sham-type matrices. In our reformulation only potentially linear-scaling matrices enter the formulation and efficient sparse algebra routines can be employed. Furthermore, the corresponding formulas for the first hyperpolarizabilities are given in terms of zeroth- and first-order one-particle reduced density matrices according to Wigner's (2n+1) rule. The scaling behavior of our method is illustrated for first exemplary calculations with systems of up to 1011 atoms and 8899 basis functions.
Fuin, Niccolo; Pedemonte, Stefano; Arridge, Simon; Ourselin, Sebastien; Hutton, Brian F
2014-03-01
System designs in single photon emission tomography (SPECT) can be evaluated based on the fundamental trade-off between bias and variance that can be achieved in the reconstruction of emission tomograms. This trade-off can be derived analytically using Cramer-Rao type bounds, which imply the calculation and the inversion of the Fisher information matrix (FIM). The inverse of the FIM expresses the uncertainty associated with the tomogram, enabling the comparison of system designs. However, computing, storing and inverting the FIM is not practical with 3-D imaging systems. In order to tackle the problem of the computational load in calculating the inverse of the FIM, a method based on the calculation of the local impulse response and the variance, in a single point, from a single row of the FIM, has been previously proposed for system design. However, this approximation (the circulant approximation) does not capture the global interdependence between the variables in shift-variant systems such as SPECT, and cannot account, for example, for data truncation or missing data. Our new formulation relies on subsampling the FIM. The FIM is calculated over a subset of voxels arranged in a grid that covers the whole volume. Every element of the FIM at the grid points is calculated exactly, accounting for the acquisition geometry and for the object. This new formulation reduces the computational complexity in estimating the uncertainty, but nevertheless accounts for the global interdependence between the variables, enabling the exploration of design spaces hindered by the circulant approximation. The graphics processing unit accelerated implementation of the algorithm further reduces the computation times, making the algorithm a good candidate for real-time optimization of adaptive imaging systems. This paper describes the subsampled FIM formulation and implementation details.
The advantages and limitations of the new approximation are explored, in comparison with the circulant approximation, in the context of design optimization of a parallel-hole collimator SPECT system and of an adaptive imaging system (similar to the commercially available D-SPECT).
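For Poisson emission data y ~ Poisson(Ax), the FIM takes the form F = Aᵀ diag(1/ȳ) A, and the subsampling idea amounts to evaluating F only at a grid of voxel indices. A toy sketch with tiny dense matrices (a real SPECT system matrix is huge and sparse, and one would compute only the needed rows rather than form F in full):

```python
import numpy as np

def fisher_information(sys_matrix, activity):
    """FIM of Poisson data y ~ Poisson(A x): F = A^T diag(1/(A x)) A."""
    ybar = sys_matrix @ activity          # expected projection data
    return sys_matrix.T @ (sys_matrix / ybar[:, None])

def subsampled_fim(sys_matrix, activity, grid):
    """Evaluate the FIM only at a subset of voxel indices (the grid);
    each retained element is still exact, as in the subsampling strategy."""
    f = fisher_information(sys_matrix, activity)
    return f[np.ix_(grid, grid)]
```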
Albedo impact on the suitability of biochar systems to mitigate global warming.
Meyer, Sebastian; Bright, Ryan M; Fischer, Daniel; Schulz, Hardy; Glaser, Bruno
2012-11-20
Biochar application to agricultural soils can change the surface albedo which could counteract the climate mitigation benefit of biochar systems. However, the size of this impact has not yet been quantified. Based on empirical albedo measurements and literature data of arable soils mixed with biochar, a model for annual vegetation cover development based on satellite data and an assessment of the annual development of surface humidity, an average mean annual albedo reduction of 0.05 has been calculated for applying 30-32 Mg ha(-1) biochar on a test field near Bayreuth, Germany. The impact of biochar production and application on the carbon cycle and on the soil albedo was integrated into the greenhouse gas (GHG) balance of a modeled pyrolysis based biochar system via the computation of global warming potential (GWP) characterization factors. The analysis resulted in a reduction of the overall climate mitigation benefit of biochar systems by 13-22% due to the albedo change as compared to an analysis which disregards the albedo effect. Comparing the use of the same quantity of biomass in a biochar system to a bioenergy district heating system which replaces natural gas combustion, bioenergy heating systems achieve 99-119% of the climate benefit of biochar systems according to the model calculation.
Gonzalez, E; Lino, J; Deriabina, A; Herrera, J N F; Poltev, V I
2013-01-01
To elucidate details of DNA-water interactions, we performed calculations and a systematic search for minima of the interaction energy of systems consisting of one of the DNA bases and one or two water molecules. The results of calculations using two force fields of molecular mechanics (MM) and the correlated ab initio method MP2/6-31G(d,p) of quantum mechanics (QM) have been compared with one another and with experimental data. The calculations demonstrated a qualitative agreement between the geometry characteristics of most of the local energy minima obtained via the different methods. The deepest minima revealed by the MM and QM methods correspond to a water molecule positioned between two neighboring hydrophilic centers of the base, forming hydrogen bonds with both. Nevertheless, the relative depth of some minima and the peculiarities of mutual water-base positions in these minima depend on the method used. The analysis revealed that some differences between the results of the different methods are insignificant, while others are important for the description of DNA hydration. The calculations via MM methods enable us to reproduce quantitatively all the experimental data on the enthalpies of complex formation of a single water molecule with the set of mono-, di-, and trimethylated bases, as well as on water molecule locations near base hydrophilic atoms in the crystals of DNA duplex fragments, while some of these data cannot be rationalized by QM calculations.
Zagórska, Agnieszka; Czopek, Anna; Pawłowski, Maciej; Dybała, Małgorzata; Siwek, Agata; Nowak, Gabriel
2012-11-01
Affinities of arylpiperazinylalkyl derivatives of imidazo[2,1-f]purine-2,4-dione and imidazolidine-2,4-dione for the serotonin transporter and their acid-base properties were evaluated. The dissociation constants (pK(a)) of compounds 1-22 were determined by potentiometric titration and calculated using the pKalc 3.1 module of the Pallas system. The data from the experimental methods and the computational calculations were compared and suitable conclusions were reached.
Development of a Knowledge Base of Ti-Alloys From First-Principles and Thermodynamic Modeling
NASA Astrophysics Data System (ADS)
Marker, Cassie
An aging population with an active lifestyle requires the development of better load-bearing implants, which have high levels of biocompatibility and a low elastic modulus. Titanium alloys, in the body centered cubic phase, are great implant candidates, due to their mechanical properties and biocompatibility. The present work aims at investigating the thermodynamic and elastic properties of bcc Ti-alloys, using integrated first-principles calculations based on Density Functional Theory (DFT) and the CALculation of PHAse Diagrams (CALPHAD) method. The use of integrated first-principles calculations based on DFT and CALPHAD modeling has greatly reduced the need for trial-and-error metallurgy, which is ineffective and costly. The phase stability of Ti-alloys has been shown to greatly affect their elastic properties. Traditionally, CALPHAD modeling has been used to predict the equilibrium phase formation, but in the case of Ti-alloys, predicting the formation of the two metastable phases ω and α″ is of great importance, as these phases also drastically affect the elastic properties. To build a knowledge base of Ti-alloys for biomedical load-bearing implants, the Ti-Mo-Nb-Sn-Ta-Zr system was studied because of the biocompatibility and the bcc stabilizing effects of some of the elements. With the focus on bcc Ti-rich alloys, a database of thermodynamic descriptions of each phase for the pure elements, binary and Ti-rich ternary alloys was developed in the present work. Previous thermodynamic descriptions for the pure elements were adopted from the widely used SGTE database for global compatibility. The previous binary and ternary models from the literature were evaluated for accuracy and new thermodynamic descriptions were developed when necessary. The models were evaluated using available experimental data, as well as the enthalpy of formation of the bcc phase obtained from first-principles calculations based on DFT.
The thermodynamic descriptions were combined into a database ensuring that the sublattice models are compatible with each other. For subsystems, such as the Sn-Ta system, where no thermodynamic description had been evaluated and minimal experimental data were available, first-principles calculations based on DFT were used. The Sn-Ta system has two intermetallic phases, TaSn2 and Ta3Sn, and three solution phases: bcc, body centered tetragonal (bct) and diamond. First-principles calculations were completed on the intermetallic and solution phases. Special quasirandom structures (SQS) were used to obtain information about the solution phases across the entire composition range. The Debye-Grüneisen approach, as well as the quasiharmonic phonon method, were used to obtain the finite-temperature data. Results from the first-principles calculations and experiments were used to complete the thermodynamic description. The resulting phase diagram reproduced the first-principles calculations and experimental data accurately. In order to determine the effect of alloying on the elastic properties, first-principles calculations based on DFT were systematically done on the pure elements, five Ti-X binary systems and Ti-X-Y ternary systems (X ≠ Y = Mo, Nb, Sn, Ta, Zr) in the bcc phase. The first-principles calculations predicted the single crystal elastic stiffness constants cij's. Correspondingly, the polycrystalline aggregate properties were also estimated from the cij's, including bulk modulus B, shear modulus G and Young's modulus E. The calculated results showed good agreement with experimental results. The CALPHAD method was then adapted to assist in the database development of the elastic properties as a function of composition. On average, the database predicted the elastic properties of higher order Ti-alloys within 5 GPa of the experimental results. Finally, the formation of the metastable phases ω and α″ was studied in the Ti-Ta and Ti-Nb systems.
The formation energy of these phases, calculated from first-principles at 0 K, showed that the phases have similar formation energies to the bcc and hcp phases. Inelastic neutron scattering was completed on four different Ti-Nb compositions to study the entropy of the phases as well as the transformations occurring when the phases form and the phase fractions. Ongoing work is being done to use the experimental information to introduce thermodynamic descriptions for these two phases in the Ti-Nb system in order to be able to predict the formation and phase fractions. DFT-based first-principles calculations were used to predict the effect these phases have on the elastic properties, and a rule of mixtures was used to determine the elastic properties of multi-phase alloys. The results were compared with experiments and showed that if the ongoing modeling can predict the phase fraction, the elastic database can accurately predict the elastic properties of the ω and α″ phases. This thesis provides a knowledge base of the thermodynamic and elastic properties of Ti-alloys from computational thermodynamics. The databases created will impact research activities on Ti-alloys and specifically efforts focused on Ti-alloys for biomedical applications.
Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny
2011-01-01
Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the Xio planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate comparison between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because of the ability of Monte Carlo algorithms to implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required. Copyright © 2011 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
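The gamma analysis used for such planned-vs-measured comparisons combines a dose-difference criterion (here 3% of the maximum dose) with a distance-to-agreement criterion (3 mm). A 1D global-gamma sketch, brute forcing the minimum over all evaluation points (clinical implementations are 2D/3D and heavily optimized):

```python
import numpy as np

def gamma_index_1d(x, dose_ref, dose_eval, dose_tol=0.03, dist_tol=3.0):
    """1D global gamma index: for each reference point, the minimum over all
    evaluation points of sqrt(dose_diff^2 + distance^2) in criterion units."""
    x = np.asarray(x, float)
    d_ref = np.asarray(dose_ref, float)
    d_eval = np.asarray(dose_eval, float)
    norm = dose_tol * d_ref.max()   # global normalization to max reference dose
    g = np.empty_like(d_ref)
    for i in range(len(x)):
        dd = (d_eval - d_ref[i]) / norm
        dx = (x - x[i]) / dist_tol
        g[i] = np.sqrt(dd * dd + dx * dx).min()
    return g

def pass_rate(gamma):
    """Fraction of points with gamma <= 1, the usual pass criterion."""
    return float(np.mean(np.asarray(gamma) <= 1.0))
```

A pass rate above 0.97 with these criteria corresponds to the ">97%" comparisons quoted in the abstract.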
SU-E-T-29: A Web Application for GPU-Based Monte Carlo IMRT/VMAT QA with Delivered Dose Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Folkerts, M; University of California, San Diego, La Jolla, CA; Graves, Y
Purpose: To enable an existing web application for GPU-based Monte Carlo (MC) 3D dosimetry quality assurance (QA) to compute “delivered dose” from linac logfile data. Methods: We added significant features to an IMRT/VMAT QA web application which is based on existing technologies (HTML5, Python, and Django). This tool interfaces with Python, C-code libraries, and command line-based GPU applications to perform MC-based IMRT/VMAT QA. The web app automates many complicated aspects of interfacing clinical DICOM and logfile data with cutting-edge GPU software to run a MC dose calculation. The resultant web app is powerful, easy to use, and is able to re-compute both plan dose (from DICOM data) and delivered dose (from logfile data). Both dynalog and trajectorylog file formats are supported. Users upload zipped DICOM RP, CT, and RD data and set the expected statistical uncertainty for the MC dose calculation. A 3D gamma index map, 3D dose distribution, gamma histogram, dosimetric statistics, and DVH curves are displayed to the user. Additionally, the user may upload the delivery logfile data from the linac to compute a “delivered dose” calculation and corresponding gamma tests. A comprehensive PDF QA report summarizing the results can also be downloaded. Results: We successfully improved a web app for a GPU-based QA tool that consists of logfile parsing, fluence map generation, CT image processing, GPU-based MC dose calculation, gamma index calculation, and DVH calculation. The result is an IMRT and VMAT QA tool that conducts an independent dose calculation for a given treatment plan and delivery log file. The system takes both DICOM data and logfile data to compute plan dose and delivered dose, respectively. Conclusion: We successfully improved a GPU-based MC QA tool to allow for logfile dose calculation. The high efficiency and accessibility will greatly facilitate IMRT and VMAT QA.
Daly, Keith R; Tracy, Saoirse R; Crout, Neil M J; Mairhofer, Stefan; Pridmore, Tony P; Mooney, Sacha J; Roose, Tiina
2018-01-01
Spatially averaged models of root-soil interactions are often used to calculate plant water uptake. Using a combination of X-ray computed tomography (CT) and image-based modelling, we tested the accuracy of this spatial averaging by directly calculating plant water uptake for young wheat plants in two soil types. The root system was imaged using X-ray CT at 2, 4, 6, 8 and 12 d after transplanting. The roots were segmented using semi-automated root tracking for speed and reproducibility. The segmented geometries were converted to a mesh suitable for the numerical solution of Richards' equation. Richards' equation was parameterized using existing pore scale studies of soil hydraulic properties in the rhizosphere of wheat plants. Image-based modelling allows the spatial distribution of water around the root to be visualized and the fluxes into the root to be calculated. By comparing the results obtained through image-based modelling to spatially averaged models, the impact of root architecture and geometry in water uptake was quantified. We observed that the spatially averaged models performed well in comparison to the image-based models with <2% difference in uptake. However, the spatial averaging loses important information regarding the spatial distribution of water near the root system. © 2017 John Wiley & Sons Ltd.
Refractive laser beam shaping by means of a functional differential equation based design approach.
Duerr, Fabian; Thienpont, Hugo
2014-04-07
Many laser applications require specific irradiance distributions to ensure optimal performance. Geometric optical design methods based on numerical calculation of two plano-aspheric lenses have been thoroughly studied in the past. In this work, we present an alternative new design approach based on functional differential equations that allows direct calculation of the rotational symmetric lens profiles described by two-point Taylor polynomials. The formalism is used to design a Gaussian to flat-top irradiance beam shaping system but also to generate a more complex dark-hollow Gaussian (donut-like) irradiance distribution with zero intensity in the on-axis region. The presented ray tracing results confirm the high accuracy of both calculated solutions and emphasize the potential of this design approach for refractive beam shaping applications.
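For the rotationally symmetric Gaussian-to-flat-top case, the underlying ray mapping follows from energy conservation: equal encircled energy inside input radius r and output radius R. A closed-form sketch of that mapping; this is the standard energy-balance relation, not the paper's functional differential equation formalism:

```python
import numpy as np

def flattop_mapping(r, w, r_max):
    """Map input ray height r on a Gaussian beam (1/e^2 radius w) to output
    height R on a uniform disk of radius r_max by equating encircled energy:
    1 - exp(-2 r^2 / w^2) = (R / r_max)^2."""
    r = np.asarray(r, float)
    return r_max * np.sqrt(1.0 - np.exp(-2.0 * r * r / (w * w)))
```

The lens profiles then realize this mapping via refraction; the mapping itself is monotone and saturates at r_max as r grows.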
Verification of Modelica-Based Models with Analytical Solutions for Tritium Diffusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rader, Jordan D.; Greenwood, Michael Scott; Humrickhouse, Paul W.
Here, tritium transport in metal and molten salt fluids combined with diffusion through high-temperature structural materials is an important phenomenon in both magnetic confinement fusion (MCF) and molten salt reactor (MSR) applications. For MCF, tritium is desirable to capture for fusion fuel. For MSRs, uncaptured tritium potentially can be released to the environment. In either application, quantifying the time- and space-dependent tritium concentration in the working fluid(s) and structural components is necessary. Whereas capability exists specifically for calculating tritium transport in such systems (e.g., using TMAP for fusion reactors), it is desirable to unify the calculation of tritium transport with other system variables such as dynamic fluid and structure temperature combined with control systems such as those that might be found in a system code. Some capability for radioactive trace substance transport exists in thermal-hydraulic systems codes (e.g., RELAP5-3D); however, this capability is not coupled to species diffusion through solids. Combined calculations of tritium transport and thermal-hydraulic solution have been demonstrated with TRIDENT, but only for a specific type of MSR. Researchers at Oak Ridge National Laboratory have developed a set of Modelica-based dynamic system modeling tools called TRANsient Simulation Framework Of Reconfigurable Models (TRANSFORM) that were used previously to model advanced fission reactors and associated systems. In this system, the augmented TRANSFORM library includes dynamically coupled fluid and solid trace substance transport and diffusion. Results from simulations are compared against analytical solutions for verification.
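Verification against an analytical solution typically looks like the following: compare a discretized solution of 1D Fickian diffusion with the classical semi-infinite erfc profile. This is a generic sketch in nondimensional units using an explicit finite-difference scheme, not TRANSFORM's actual Modelica models:

```python
import numpy as np
from math import erfc, sqrt

def analytic_profile(x, t, diff, c0):
    """Semi-infinite solid, surface held at c0, initially zero:
    C(x, t) = c0 * erfc(x / (2 sqrt(D t)))."""
    return c0 * erfc(x / (2.0 * sqrt(diff * t)))

def fd_diffusion(length, nx, t_end, diff, c0):
    """Explicit finite-difference solution of dC/dt = D d2C/dx2 with
    C(0) = c0 and C(L) = 0, L chosen large enough to mimic a semi-infinite
    domain over the simulated time."""
    x = np.linspace(0.0, length, nx)
    dx = x[1] - x[0]
    dt = 0.4 * dx * dx / diff            # inside the stability limit dx^2/(2D)
    nsteps = int(round(t_end / dt))
    c = np.zeros(nx)
    c[0] = c0
    for _ in range(nsteps):
        c[1:-1] += diff * dt / (dx * dx) * (c[2:] - 2.0 * c[1:-1] + c[:-2])
        c[0], c[-1] = c0, 0.0
    return x, c
```

With 201 nodes and a diffusion length well inside the domain, the numerical and analytical profiles agree to a few percent, which is the essence of this kind of code verification.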
NASA Astrophysics Data System (ADS)
Fradi, Aniss
The ability to allocate the active power (MW) loading on transmission lines and transformers is the basis of the "flow based" transmission allocation system developed by the North American Electric Reliability Council. In such a system, the active power flows must be allocated to each line or transformer in proportion to the active power being transmitted by each transaction imposed on the system. Currently, this is accomplished through the use of the linear Power Transfer Distribution Factors (PTDFs). Unfortunately, no linear allocation models exist for other energy transmission quantities, such as MW and MVAR losses, MVAR and MVA flows, etc. Early allocation schemes were developed to allocate MW losses due to transactions to branches in a transmission system; however, they exhibited diminished accuracy, since most of them are based on linear power flow modeling of the transmission system. This thesis presents a new methodology to calculate Energy Transaction Allocation factors (ETA factors, or eta factors), using the well-known process of integration of a first derivative function, as well as consistent and well-established mathematical and AC power flow models. The factors give a highly accurate allocation of any non-linear system quantity to transactions placed on the transmission system. The thesis also extends the new ETA factor calculation procedure to formulate a new economic dispatch scheme in which multiple sets of generators are economically dispatched to meet their corresponding load and their share of the losses.
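The linear PTDFs mentioned above come from the DC power flow approximation: with the branch susceptance matrix Bd, the line-bus incidence matrix A, and the reduced bus susceptance matrix Bbus, the PTDF matrix is Bd·A·Bbus⁻¹. A small sketch on a hypothetical equal-impedance 3-bus triangle network (the network and names are illustrative, not from the thesis):

```python
import numpy as np

def ptdf_matrix(n_bus, lines, slack=0):
    """DC power flow PTDFs. lines: (from_bus, to_bus, reactance) tuples.
    Returns an n_lines x n_bus matrix H; branch flows = H @ bus injections."""
    inc = np.zeros((len(lines), n_bus))      # line-bus incidence matrix A
    suscept = np.zeros(len(lines))
    for k, (f, t, x) in enumerate(lines):
        inc[k, f], inc[k, t] = 1.0, -1.0
        suscept[k] = 1.0 / x
    bd = np.diag(suscept)
    bbus = inc.T @ bd @ inc                  # bus susceptance (Laplacian)
    keep = [i for i in range(n_bus) if i != slack]
    binv = np.zeros((n_bus, n_bus))          # pseudo-inverse with slack removed
    binv[np.ix_(keep, keep)] = np.linalg.inv(bbus[np.ix_(keep, keep)])
    return bd @ inc @ binv
```

On the triangle, a 1 MW transaction from bus 0 to bus 1 splits 2/3 onto the direct line and 1/3 onto the two-line path, the textbook PTDF result; the thesis's ETA factors generalize this linear allocation to non-linear AC quantities.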
Verification of Modelica-Based Models with Analytical Solutions for Tritium Diffusion
Rader, Jordan D.; Greenwood, Michael Scott; Humrickhouse, Paul W.
2018-03-20
Here, tritium transport in metal and molten salt fluids combined with diffusion through high-temperature structural materials is an important phenomenon in both magnetic confinement fusion (MCF) and molten salt reactor (MSR) applications. For MCF, tritium is desirable to capture for fusion fuel. For MSRs, uncaptured tritium potentially can be released to the environment. In either application, quantifying the time- and space-dependent tritium concentration in the working fluid(s) and structural components is necessary. Whereas capability exists specifically for calculating tritium transport in such systems (e.g., using TMAP for fusion reactors), it is desirable to unify the calculation of tritium transport with other system variables such as dynamic fluid and structure temperature combined with control systems such as those that might be found in a system code. Some capability for radioactive trace substance transport exists in thermal-hydraulic systems codes (e.g., RELAP5-3D); however, this capability is not coupled to species diffusion through solids. Combined calculations of tritium transport and thermal-hydraulic solution have been demonstrated with TRIDENT, but only for a specific type of MSR. Researchers at Oak Ridge National Laboratory have developed a set of Modelica-based dynamic system modeling tools called TRANsient Simulation Framework Of Reconfigurable Models (TRANSFORM) that were used previously to model advanced fission reactors and associated systems. In this system, the augmented TRANSFORM library includes dynamically coupled fluid and solid trace substance transport and diffusion. Results from simulations are compared against analytical solutions for verification.
SWB-A modified Thornthwaite-Mather Soil-Water-Balance code for estimating groundwater recharge
Westenbroek, S.M.; Kelson, V.A.; Dripps, W.R.; Hunt, R.J.; Bradbury, K.R.
2010-01-01
A Soil-Water-Balance (SWB) computer code has been developed to calculate spatial and temporal variations in groundwater recharge. The SWB model calculates recharge by use of commonly available geographic information system (GIS) data layers in combination with tabular climatological data. The code is based on a modified Thornthwaite-Mather soil-water-balance approach, with components of the soil-water balance calculated at a daily timestep. Recharge calculations are made on a rectangular grid of computational elements that may be easily imported into a regional groundwater-flow model. Recharge estimates calculated by the code may be output as daily, monthly, or annual values.
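The daily bookkeeping in a soil-water-balance code of this kind can be illustrated with a single-bucket model: precipitation fills soil storage, evapotranspiration drains it, and any surplus above the soil's maximum storage becomes recharge. A deliberately simplified sketch that ignores interception, runoff, and the actual Thornthwaite-Mather soil-moisture retention tables SWB uses:

```python
def daily_recharge(precip, pet, soil_max, soil0=0.0):
    """One-bucket daily soil-water balance (all values in the same depth
    units, e.g. mm): storage gains precipitation, loses potential
    evapotranspiration, and surplus above soil_max becomes recharge."""
    soil = soil0
    recharge = []
    for p, e in zip(precip, pet):
        soil += p - e
        surplus = max(soil - soil_max, 0.0)          # overflow -> recharge
        soil = min(max(soil, 0.0), soil_max)         # clamp storage to [0, max]
        recharge.append(surplus)
    return recharge
```

SWB performs this accounting on every cell of a rectangular grid so the daily recharge arrays can feed a regional groundwater-flow model.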
Economic evaluation of a solar hot-water system--Palm Beach County, Florida
NASA Technical Reports Server (NTRS)
1981-01-01
Report projects solar-energy costs and savings for residential hot-water system over 20 year period. Evaluation uses technical and economic models with inputs based on working characteristics of installed system. Primary analysis permits calculation of economic viability for four other U.S. sites.
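An economic evaluation of this kind usually reduces to cumulative fuel savings versus installed cost. A minimal sketch of a simple-payback calculation with an optional fuel-price escalation rate (the numbers in the test are illustrative, not from the report):

```python
def simple_payback_years(install_cost, first_year_savings, escalation=0.0):
    """Years until cumulative (optionally escalating) fuel savings recover
    the installed cost of the solar system; None if not within 100 years."""
    cumulative, savings = 0.0, first_year_savings
    for year in range(1, 101):
        cumulative += savings
        if cumulative >= install_cost:
            return year
        savings *= 1.0 + escalation
    return None
```

A full 20-year life-cycle analysis would also discount the cash flows; rising fuel prices shorten the payback, as the escalation case shows.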
A Simulation and Modeling Framework for Space Situational Awareness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olivier, S S
This paper describes the development and initial demonstration of a new, integrated modeling and simulation framework, encompassing the space situational awareness enterprise, for quantitatively assessing the benefit of specific sensor systems, technologies and data analysis techniques. The framework is based on a flexible, scalable architecture to enable efficient, physics-based simulation of the current SSA enterprise, and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel computer systems available, for example, at Lawrence Livermore National Laboratory. The details of the modeling and simulation framework are described, including hydrodynamic models of satellite intercept and debris generation, orbital propagation algorithms, radar cross section calculations, optical brightness calculations, generic radar system models, generic optical system models, specific Space Surveillance Network models, object detection algorithms, orbit determination algorithms, and visualization tools. The use of this integrated simulation and modeling framework on a specific scenario involving space debris is demonstrated.
NASA Astrophysics Data System (ADS)
Jiang, Fan; Zhu, Zhencai; Li, Wei; Zhou, Gongbo; Chen, Guoan
2014-07-01
Accurately identifying faults in rotor-bearing systems by analyzing vibration signals, which are nonlinear and nonstationary, is challenging. To address this issue, a new approach based on ensemble empirical mode decomposition (EEMD) and self-zero space projection analysis is proposed in this paper. This method seeks to identify faults appearing in a rotor-bearing system using simple algebraic calculations and projection analyses. First, EEMD is applied to decompose the collected vibration signals into a set of intrinsic mode functions (IMFs) for features. Second, these extracted features under various mechanical health conditions are used to design a self-zero space matrix according to space projection analysis. Finally, the so-called projection indicators are calculated to identify the rotor-bearing system's faults with simple decision logic. Experiments are implemented to test the reliability and effectiveness of the proposed approach. The results show that this approach can accurately identify faults in rotor-bearing systems.
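One way to picture a projection-based fault indicator is to project a test feature vector onto the subspace spanned by features from a known health condition and measure the residual. This is a heavily hedged sketch of the general idea only; the paper's actual "self-zero space" construction and decision logic are not reproduced here, and all names are illustrative.

```python
import numpy as np

def projection_indicator(feature_matrix, test_vector):
    """Illustrative projection-based indicator: small residual means the
    test vector lies close to the subspace of a known condition.

    feature_matrix: rows are feature vectors (e.g. IMF energies) from one
    known health condition.
    """
    A = np.atleast_2d(feature_matrix).T          # columns span the condition subspace
    P = A @ np.linalg.pinv(A)                    # orthogonal projector onto that subspace
    residual = test_vector - P @ test_vector     # component outside the subspace
    return np.linalg.norm(residual)

normal = np.array([[1.0, 0.0, 0.2], [0.9, 0.1, 0.25]])
ind = projection_indicator(normal, np.array([0.95, 0.05, 0.22]))
```

The fault class would then be chosen as the condition whose indicator is smallest.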
Develop Direct Geo-referencing System Based on Open Source Software and Hardware Platform
NASA Astrophysics Data System (ADS)
Liu, H. S.; Liao, H. M.
2015-08-01
Direct geo-referencing systems use remote sensing technology to quickly capture images, GPS tracks, and camera positions. These data allow the construction of large volumes of images with geographic coordinates, so that users can take measurements directly on the images. To calculate position properly, all the sensor signals must be synchronized. Traditional aerial photography uses a Position and Orientation System (POS) to integrate images, coordinates, and camera position, but such systems are very expensive, and users cannot use the results immediately because the position information is not embedded into the images. For reasons of economy and efficiency, this study aims to develop a direct geo-referencing system based on an open source software and hardware platform. After using an Arduino microcontroller board to integrate the signals, we can calculate position with the open source library OpenCV. Finally, we use the open source panorama browser Panini and integrate all of these into the open source GIS software Quantum GIS. In this way, a complete data collection and processing system can be constructed.
ERIC Educational Resources Information Center
Fleck, George
This publication was produced as a teaching tool for college chemistry. The book is a text for a computer-based unit on the chemistry of acid-base titrations, and is designed for use with FORTRAN or BASIC computer systems, and with a programmable electronic calculator, in a variety of educational settings. The text attempts to present computer…
Probabilistic assessment methodology for continuous-type petroleum accumulations
Crovelli, R.A.
2003-01-01
The analytic resource assessment method, called ACCESS (Analytic Cell-based Continuous Energy Spreadsheet System), was developed to calculate estimates of petroleum resources for the geologic assessment model, called FORSPAN, in continuous-type petroleum accumulations. The ACCESS method is based upon mathematical equations derived from probability theory in the form of a computer spreadsheet system. © 2003 Elsevier B.V. All rights reserved.
The Research of Tax Text Categorization based on Rough Set
NASA Astrophysics Data System (ADS)
Liu, Bin; Xu, Guang; Xu, Qian; Zhang, Nan
To solve the problem of effectively categorizing the text data in the taxation system, this paper first analyses the text data and the key issue of size calculation, and then designs a text categorization model based on rough sets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, JS; Fan, J; Ma, C-M
Purpose: To improve treatment efficiency and the capability for full-body treatment, a robotic radiosurgery system has been equipped with a multileaf collimator (MLC) to extend its accuracy and precision to radiation therapy. The goal of this work is to model the MLC and include it in Monte Carlo patient dose calculation. Methods: The radiation source and the MLC were carefully modeled to consider the effects of the source size, collimator scattering, leaf transmission and leaf end shape. A source model was built based on the output factors, percentage depth dose curves and lateral dose profiles measured in a water phantom. MLC leaf shape, leaf end design and leaf tilt for minimizing the interleaf leakage, and their effects on beam fluence and energy spectrum, were all considered in the calculation. Transmission/leakage was added to the fluence based on the transmission factors of the leaf and the leaf end. The transmitted photon energy was tuned to account for beam hardening effects. The results calculated with the Monte Carlo implementation were compared with measurements in a homogeneous water phantom and in inhomogeneous phantoms with slab lung or bone material for 4 square fields and 9 irregularly shaped fields. Results: The calculated output factors agree with the measured ones within 1% for different field sizes. The calculated dose distributions in the phantoms show good agreement with measurements using diode detectors and films. The dose difference is within 2% inside the field and the distance to agreement is within 2 mm in the penumbra region. The gamma passing rate is more than 95% with 2%/2mm criteria for all the test cases. Conclusion: Implementation of Monte Carlo dose calculation for an MLC-equipped robotic radiosurgery system was completed successfully. The accuracy of Monte Carlo dose calculation with the MLC is clinically acceptable. This work was supported by Accuray Inc.
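The step of adding transmission/leakage to the beam fluence can be pictured with a small sketch. The transmission factors, map layout, and function name below are assumptions for illustration, not the actual source model of the commercial system.

```python
import numpy as np

def fluence_with_transmission(open_map, leaf_T=0.005, leaf_end_T=0.02,
                              end_mask=None):
    """Add MLC leaf transmission to a binary aperture map.

    open_map  : 2D array, 1 where the aperture is open, 0 under the leaves.
    leaf_T    : assumed fraction of open-field fluence leaking through leaves.
    leaf_end_T: assumed larger transmission through rounded leaf ends.
    end_mask  : optional boolean array marking leaf-end regions.
    """
    fluence = open_map.astype(float)
    blocked = open_map == 0
    fluence[blocked] = leaf_T                    # leaf/interleaf transmission
    if end_mask is not None:
        fluence[blocked & end_mask] = leaf_end_T # rounded leaf-end leakage
    return fluence

aperture = np.array([[0, 1, 1, 0],
                     [0, 1, 1, 0]])
f = fluence_with_transmission(aperture)
```

In a full calculation, the transmitted component would also carry the hardened energy spectrum mentioned in the abstract.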
The dark side of photovoltaic — 3D simulation of glare assessing risk and discomfort
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rose, Thomas; Wollert, Alexander
2015-04-15
Photovoltaic (PV) systems form an important force in the implementation of renewable energies but, as we all know, the force always has its dark side. Besides efficiency considerations and discussions about architectures of power distribution networks, the increasing number of PV installations has secondary effects. PV systems can generate glare due to optical reflections and hence might be a serious concern. On the one hand, glare could affect safety, e.g. regarding traffic. On the other hand, glare is a constant source of discomfort in the vicinity of PV systems. Hence, assessment of glare is decisive for the success of solar power near municipalities and traffic zones. Several courts have ruled that PV systems must be changed, or even de-installed, because of glare effects. Thus, location-based assessments are required to limit potential reflections and to avoid risks for public infrastructure or discomfort of residents. The question arises of how to calculate reflections accurately according to the environment's topography. Our approach is founded on a 3D-based simulation methodology to calculate and visualize reflections based on the geometry of the environment of PV systems. This computational model is implemented by an interactive tool for simulation and visualization. Hence, project planners receive flexible assistance for adjusting the parameters of solar panels during the planning process, and in particular before the installation of a PV system. - Highlights: • Solar panels cause glare that impacts neighborhoods and traffic infrastructures. • Glare might cause disability and discomfort. • 3D environment for the calculation of glare. • Interactive tool to simulate and visualize reflections. • Impact assessment of solar power plant farms.
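The geometric core of any such glare calculation is specular reflection of the sun ray off the panel plane, r = d - 2(d·n)n. The sketch below shows only this building block; the tool's actual ray-topography intersection and discomfort metrics are not published in the abstract.

```python
import numpy as np

def reflected_ray(sun_dir, panel_normal):
    """Specular reflection of an incoming ray `d` off a flat panel with
    unit normal `n`:  r = d - 2 (d . n) n."""
    d = np.asarray(sun_dir, float)
    n = np.asarray(panel_normal, float)
    n = n / np.linalg.norm(n)                 # normalize the panel normal
    return d - 2.0 * np.dot(d, n) * n

# A sun ray coming straight down onto a horizontal panel reflects straight up
r = reflected_ray([0.0, 0.0, -1.0], [0.0, 0.0, 1.0])
```

Tracing `r` against a 3D model of the surroundings then tells the planner which observers can be hit by the reflection at a given sun position.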
CDMBE: A Case Description Model Based on Evidence
Zhu, Jianlin; Yang, Xiaoping; Zhou, Jing
2015-01-01
By combining the advantages of argument maps and Bayesian networks, a case description model based on evidence (CDMBE), suitable for the continental law system, is proposed to describe criminal cases. The logic of the model adopts credibility-based logical reasoning and quantifies evidence-based reasoning from the evidence. To be consistent with practical inference rules, five types of relationship and a set of rules are defined to calculate the credibility of assumptions based on the credibility and supportability of the related evidence. Experiments show that the model can capture users' ideas in a diagram and that the results calculated from CDMBE are in line with those from a Bayesian model. PMID:26421006
NASA Astrophysics Data System (ADS)
Cave, Robert J.; Newton, Marshall D.
1996-01-01
A new method for the calculation of the electronic coupling matrix element for electron transfer processes is introduced and results for several systems are presented. The method can be applied to ground and excited state systems and can be used in cases where several states interact strongly. Within the set of states chosen it is a non-perturbative treatment, and can be implemented using quantities obtained solely in terms of the adiabatic states. Several applications based on quantum chemical calculations are briefly presented. Finally, since quantities for adiabatic states are the only input to the method, it can also be used with purely experimental data to estimate electron transfer matrix elements.
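In the two-state limit, a coupling built solely from adiabatic-state quantities is commonly written as the generalized Mulliken-Hush expression, the scheme these authors are associated with. The abstract itself does not give working equations, so the formula and symbols below are stated as a sketch rather than as the paper's definitive method.

```python
import math

def gmh_coupling(delta_E, mu_tr, delta_mu):
    """Two-state generalized Mulliken-Hush electronic coupling (sketch).

    delta_E : adiabatic vertical energy gap
    mu_tr   : adiabatic transition dipole moment along the charge-transfer
              direction
    delta_mu: difference of the adiabatic state dipole moments
    Use consistent units; the coupling comes out in the units of delta_E.
    """
    return abs(mu_tr) * delta_E / math.sqrt(delta_mu**2 + 4.0 * mu_tr**2)

# Example with made-up numbers: gap 1.0 eV, mu_tr 1.0 D, delta_mu 10 D
H_ab = gmh_coupling(delta_E=1.0, mu_tr=1.0, delta_mu=10.0)
```

Because every input is an adiabatic-state quantity, the same expression can be fed with purely experimental data, as the abstract notes.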
Discussion on Boiler Efficiency Correction Method with Low Temperature Economizer-Air Heater System
NASA Astrophysics Data System (ADS)
Ke, Liu; Xing-sen, Yang; Fan-jun, Hou; Zhi-hong, Hu
2017-05-01
This paper points out that it is wrong to take the outlet flue gas temperature of the low temperature economizer as the exhaust gas temperature in boiler efficiency calculations based on GB10184-1988. Furthermore, this paper proposes a new correction method, which decomposes the low temperature economizer-air heater system into two hypothetical parts, an air preheater and a pre-condensed-water heater, and takes the equivalent outlet gas temperature of the air preheater as the exhaust gas temperature in the boiler efficiency calculation. This method makes the boiler efficiency calculation more concise, with no air heater correction, and provides a useful reference for handling this kind of problem correctly.
Figure of merit studies of beam power concepts for advanced space exploration
NASA Technical Reports Server (NTRS)
Miller, Gabriel; Kadiramangalam, Murali N.
1990-01-01
Surface-to-surface, millimeter-wavelength beam power systems for power transmission at a lunar base were investigated. Qualitative/quantitative analyses and technology assessments of 35, 110 and 140 GHz beam power systems were conducted. System characteristics including mass, stowage volume, cost and efficiency as a function of range and power level were calculated. A simple figure of merit analysis indicates that the 35 GHz system would be the preferred choice for lunar base applications, followed closely by the 110 GHz system. System parameters of a 35 GHz beam power system appropriate for power transmission in a recent lunar base concept studied by NASA-Johnson, and the necessary deployment sequence, are suggested.
DietPal: A Web-Based Dietary Menu-Generating and Management System
Abdullah, Siti Norulhuda; Shahar, Suzana; Abdul-Hamid, Helmi; Khairudin, Nurkahirizan; Yusoff, Mohamed; Ghazali, Rafidah; Mohd-Yusoff, Nooraini; Shafii, Nik Shanita; Abdul-Manaf, Zaharah
2004-01-01
Background Attempts in current health care practice to make health care more accessible, effective, and efficient through the use of information technology could include implementation of computer-based dietary menu generation. While several of such systems already exist, their focus is mainly to assist healthy individuals calculate their calorie intake and to help monitor the selection of menus based upon a prespecified calorie value. Although these prove to be helpful in some ways, they are not suitable for monitoring, planning, and managing patients' dietary needs and requirements. This paper presents a Web-based application that simulates the process of menu suggestions according to a standard practice employed by dietitians. Objective To model the workflow of dietitians and to develop, based on this workflow, a Web-based system for dietary menu generation and management. The system is aimed to be used by dietitians or by medical professionals of health centers in rural areas where there are no designated qualified dietitians. Methods First, a user-needs study was conducted among dietitians in Malaysia. The first survey of 93 dietitians (with 52 responding) was an assessment of information needed for dietary management and evaluation of compliance towards a dietary regime. The second study consisted of ethnographic observation and semi-structured interviews with 14 dietitians in order to identify the workflow of a menu-suggestion process. We subsequently designed and developed a Web-based dietary menu generation and management system called DietPal. DietPal has the capability of automatically calculating the nutrient and calorie intake of each patient based on the dietary recall as well as generating suitable diet and menu plans according to the calorie and nutrient requirement of the patient, calculated from anthropometric measurements. The system also allows reusing stored or predefined menus for other patients with similar health and nutrient requirements. 
Results We modeled the workflow of menu-suggestion activity currently adhered to by dietitians in Malaysia. Based on this workflow, a Web-based system was developed. Initial post evaluation among 10 dietitians indicates that they are comfortable with the organization of the modules and information. Conclusions The system has the potential of enhancing the quality of services with the provision of standard and healthy menu plans and at the same time increasing outreach, particularly to rural areas. With its potential capability of optimizing the time spent by dietitians to plan suitable menus, more quality time could be spent delivering nutrition education to the patients. PMID:15111270
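The "calorie requirement calculated from anthropometric measurements" step can be illustrated with the classic Harris-Benedict equation. This is a standard textbook formula chosen here for illustration only; the abstract does not state which equations DietPal actually uses, and the activity factor is an assumption.

```python
def daily_calorie_requirement(weight_kg, height_cm, age_yr, sex, activity=1.3):
    """Estimate daily calorie requirement (kcal) from anthropometric
    measurements using the Harris-Benedict basal metabolic rate equation,
    scaled by an activity factor."""
    if sex == "male":
        bmr = 66.47 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_yr
    else:
        bmr = 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr
    return bmr * activity

kcal = daily_calorie_requirement(weight_kg=70, height_cm=170, age_yr=30, sex="male")
```

A menu generator would then select dishes whose summed nutrient values meet this target within tolerance.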
DietPal: a Web-based dietary menu-generating and management system.
Noah, Shahrul A; Abdullah, Siti Norulhuda; Shahar, Suzana; Abdul-Hamid, Helmi; Khairudin, Nurkahirizan; Yusoff, Mohamed; Ghazali, Rafidah; Mohd-Yusoff, Nooraini; Shafii, Nik Shanita; Abdul-Manaf, Zaharah
2004-01-30
Attempts in current health care practice to make health care more accessible, effective, and efficient through the use of information technology could include implementation of computer-based dietary menu generation. While several of such systems already exist, their focus is mainly to assist healthy individuals calculate their calorie intake and to help monitor the selection of menus based upon a prespecified calorie value. Although these prove to be helpful in some ways, they are not suitable for monitoring, planning, and managing patients' dietary needs and requirements. This paper presents a Web-based application that simulates the process of menu suggestions according to a standard practice employed by dietitians. To model the workflow of dietitians and to develop, based on this workflow, a Web-based system for dietary menu generation and management. The system is aimed to be used by dietitians or by medical professionals of health centers in rural areas where there are no designated qualified dietitians. First, a user-needs study was conducted among dietitians in Malaysia. The first survey of 93 dietitians (with 52 responding) was an assessment of information needed for dietary management and evaluation of compliance towards a dietary regime. The second study consisted of ethnographic observation and semi-structured interviews with 14 dietitians in order to identify the workflow of a menu-suggestion process. We subsequently designed and developed a Web-based dietary menu generation and management system called DietPal. DietPal has the capability of automatically calculating the nutrient and calorie intake of each patient based on the dietary recall as well as generating suitable diet and menu plans according to the calorie and nutrient requirement of the patient, calculated from anthropometric measurements. The system also allows reusing stored or predefined menus for other patients with similar health and nutrient requirements. 
We modeled the workflow of menu-suggestion activity currently adhered to by dietitians in Malaysia. Based on this workflow, a Web-based system was developed. Initial post evaluation among 10 dietitians indicates that they are comfortable with the organization of the modules and information. The system has the potential of enhancing the quality of services with the provision of standard and healthy menu plans and at the same time increasing outreach, particularly to rural areas. With its potential capability of optimizing the time spent by dietitians to plan suitable menus, more quality time could be spent delivering nutrition education to the patients.
System and method for knowledge based matching of users in a network
Verspoor, Cornelia Maria [Santa Fe, NM; Sims, Benjamin Hayden [Los Alamos, NM; Ambrosiano, John Joseph [Los Alamos, NM; Cleland, Timothy James [Los Alamos, NM
2011-04-26
A knowledge-based system and methods for matchmaking and social network extension are disclosed. The system is configured to allow users to specify knowledge profiles, which are collections of concepts that indicate a certain topic or area of interest, selected from an underlying knowledge model. The system utilizes the knowledge model as the semantic space within which to compare similarities in user interests. The knowledge model is hierarchical, so that indications of interest in specific concepts automatically imply interest in more general concepts. Similarity measures between profiles may then be calculated based on suitable distance formulas within this space.
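The hierarchical-implication idea can be sketched by expanding each profile with the ancestors of its concepts and comparing the expanded sets. The Jaccard measure and the tiny hierarchy below are illustrative assumptions; the patent only requires some suitable distance formula within the semantic space.

```python
def ancestors(concept, parent):
    """All concepts implied by `concept`: itself plus every ancestor,
    reflecting that interest in a specific concept implies interest in
    more general ones."""
    seen = {concept}
    while concept in parent:
        concept = parent[concept]
        seen.add(concept)
    return seen

def profile_similarity(profile_a, profile_b, parent):
    """Jaccard similarity of the ancestor-expanded profiles."""
    expand = lambda profile: set().union(*(ancestors(c, parent) for c in profile))
    a, b = expand(profile_a), expand(profile_b)
    return len(a & b) / len(a | b)

# Tiny hypothetical hierarchy: physics > optics > lasers, physics > plasma
parent = {"optics": "physics", "lasers": "optics", "plasma": "physics"}
sim = profile_similarity({"lasers"}, {"plasma"}, parent)
```

Here the two users share no explicit concept, yet the hierarchy still yields a nonzero similarity through their common ancestor "physics".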
SU-E-T-493: Accelerated Monte Carlo Methods for Photon Dosimetry Using a Dual-GPU System and CUDA.
Liu, T; Ding, A; Xu, X
2012-06-01
To develop a Graphics Processing Unit (GPU) based Monte Carlo (MC) code that accelerates dose calculations on a dual-GPU system. We simulated a clinical case of prostate cancer treatment. A voxelized abdomen phantom derived from 120 CT slices was used containing 218×126×60 voxels, and a GE LightSpeed 16-MDCT scanner was modeled. A CPU version of the MC code was first developed in C++ and tested on an Intel Xeon X5660 2.8GHz CPU; it was then translated into a GPU version using CUDA C 4.1 and run on a dual Tesla M2090 GPU system. The code featured automatic assignment of simulation tasks to multiple GPUs, as well as accurate calculation of energy- and material-dependent cross-sections. Double-precision floating point format was used for accuracy. Doses to the rectum, prostate, bladder and femoral heads were calculated. When running on a single GPU, the GPU MC code was found to be 19 times faster than the CPU code and 42 times faster than MCNPX. These speedup factors were doubled on the dual-GPU system. The dose results were benchmarked against MCNPX and a maximum difference of 1% was observed when the relative error was kept below 0.1%. A GPU-based MC code was developed for dose calculations using detailed patient and CT scanner models. Efficiency and accuracy were both guaranteed in this code. Scalability of the code was confirmed on the dual-GPU system. © 2012 American Association of Physicists in Medicine.
Semiclassical Path Integral Calculation of Nonlinear Optical Spectroscopy.
Provazza, Justin; Segatta, Francesco; Garavelli, Marco; Coker, David F
2018-02-13
Computation of nonlinear optical response functions allows for an in-depth connection between theory and experiment. Experimentally recorded spectra provide a high density of information, but to objectively disentangle overlapping signals and to reach a detailed and reliable understanding of the system dynamics, measurements must be integrated with theoretical approaches. Here, we present a new, highly accurate and efficient trajectory-based semiclassical path integral method for computing higher order nonlinear optical response functions for non-Markovian open quantum systems. The approach is, in principle, applicable to general Hamiltonians and does not require any restrictions on the form of the intrasystem or system-bath couplings. This method is systematically improvable and is shown to be valid in parameter regimes where perturbation theory-based methods qualitatively break down. As a test of the methodology presented here, we study a system-bath model for a coupled dimer for which we compare against numerically exact results and standard approximate perturbation theory-based calculations. Additionally, we study a monomer with discrete vibronic states that serves as the starting point for future investigation of vibronic signatures in nonlinear electronic spectroscopy.
NASA Astrophysics Data System (ADS)
Gao, Michael C.; Ünlü, Necip; Mihalkovic, Marek; Widom, Michael; Shiflet, G. J.
2007-10-01
This study investigates glass formation, phase equilibria, and thermodynamic descriptions of the Al-rich Al-Ce-Co ternary system using a novel approach that combines critical experiments, CALPHAD modeling, and first-principles (FP) calculations. The glass formation range (GFR) and a partial 500 °C isotherm are determined using a range of experimental techniques including melt spinning, transmission electron microscopy (TEM), electron probe microanalysis (EPMA), X-ray diffraction, and differential thermal analysis (DTA). Three stable ternary phases are confirmed, namely, Al8CeCo2, Al4CeCo, and AlCeCo, while a metastable phase, Al5CeCo2, was discovered. The equilibrium and metastable phases identified by the present and earlier reported experiments, together with many hypothetical ternary compounds, are further studied by FP calculations. Based on new experimental data and FP calculations, the thermodynamics of the Al-rich Al-Co-Ce system is optimized using the CALPHAD method. Application to glass formation is discussed in light of present studies.
Numerical modelling of series-parallel cooling systems in power plant
NASA Astrophysics Data System (ADS)
Regucki, Paweł; Lewkowicz, Marek; Kucięba, Małgorzata
2017-11-01
The paper presents a mathematical model allowing one to study series-parallel hydraulic systems like, e.g., the cooling system of a power boiler's auxiliary devices or a closed cooling system including condensers and cooling towers. The analytical approach is based on a set of non-linear algebraic equations solved using numerical techniques. As a result of the iterative process, a set of volumetric flow rates of water through all the branches of the investigated hydraulic system is obtained. The calculations indicate the influence of changes in the pipeline's geometrical parameters on the total cooling water flow rate in the analysed installation. Such an approach makes it possible to analyse different variants of the modernization of the studied systems, as well as allowing for the indication of its critical elements. Based on these results, an investor can choose the variant of the reconstruction of the installation that is optimal from the economic point of view. As examples of such a calculation, two hydraulic installations are described. One is a boiler auxiliary cooling installation including two screw ash coolers. The other is a closed cooling system consisting of cooling towers and condensers.
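An iterative solution of such a non-linear series-parallel network can be sketched with a toy model: one series element feeding parallel branches, each with a quadratic head-loss law h = k q². The bisection scheme and parameter values are illustrative assumptions; the paper's own model and iteration are more general.

```python
def parallel_series_flows(H_total, k_series, k_branches, tol=1e-10):
    """Branch volumetric flow rates in a toy series-parallel network.

    Head-loss model h = k * q**2 per element. A common series element
    (k_series) feeds parallel branches (k_branches); every branch sees the
    same parallel head loss h_p, so
        H_total = k_series * Q**2 + h_p,   Q = sum(sqrt(h_p / k_i)).
    The left-hand side grows monotonically with h_p, so bisection on h_p
    converges.
    """
    lo, hi = 0.0, H_total
    while hi - lo > tol:
        h_p = 0.5 * (lo + hi)
        Q = sum((h_p / k) ** 0.5 for k in k_branches)
        if k_series * Q**2 + h_p > H_total:
            hi = h_p          # too much head consumed: lower h_p
        else:
            lo = h_p
    return [(h_p / k) ** 0.5 for k in k_branches]

flows = parallel_series_flows(H_total=50.0, k_series=2.0, k_branches=[4.0, 8.0])
```

Replacing a branch resistance `k_i` and re-solving shows directly how a geometry change shifts the total cooling water flow rate, which is the kind of variant analysis the paper describes.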
Patient-specific CT dosimetry calculation: a feasibility study.
Fearon, Thomas; Xie, Huchen; Cheng, Jason Y; Ning, Holly; Zhuge, Ying; Miller, Robert W
2011-11-15
Current estimation of radiation dose from computed tomography (CT) scans on patients has relied on the measurement of Computed Tomography Dose Index (CTDI) in standard cylindrical phantoms, and calculations based on mathematical representations of "standard man". Radiation dose to both adult and pediatric patients from a CT scan has been a concern, as noted in recent reports. The purpose of this study was to investigate the feasibility of adapting a radiation treatment planning system (RTPS) to provide patient-specific CT dosimetry. A radiation treatment planning system was modified to calculate patient-specific CT dose distributions, which can be represented by dose at specific points within an organ of interest, as well as organ dose-volumes (after image segmentation) for a GE Light Speed Ultra Plus CT scanner. The RTPS calculation algorithm is based on a semi-empirical, measured correction-based algorithm, which has been well established in the radiotherapy community. Digital representations of the physical phantoms (virtual phantom) were acquired with the GE CT scanner in axial mode. Thermoluminescent dosimeter (TLDs) measurements in pediatric anthropomorphic phantoms were utilized to validate the dose at specific points within organs of interest relative to RTPS calculations and Monte Carlo simulations of the same virtual phantoms (digital representation). Congruence of the calculated and measured point doses for the same physical anthropomorphic phantom geometry was used to verify the feasibility of the method. The RTPS algorithm can be extended to calculate the organ dose by calculating a dose distribution point-by-point for a designated volume. Electron Gamma Shower (EGSnrc) codes for radiation transport calculations developed by National Research Council of Canada (NRCC) were utilized to perform the Monte Carlo (MC) simulation. In general, the RTPS and MC dose calculations are within 10% of the TLD measurements for the infant and child chest scans. 
With respect to the dose comparisons for the head, the RTPS dose calculations are slightly higher (10%-20%) than the TLD measurements, while the MC results were within 10% of the TLD measurements. The advantage of the algebraic dose calculation engine of the RTPS is a substantially reduced computation time (minutes vs. days) relative to Monte Carlo calculations, as well as providing patient-specific dose estimation. It also provides the basis for a more elaborate reporting of dosimetric results, such as patient specific organ dose volumes after image segmentation.
First principles calculation of two dimensional antimony and antimony arsenide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pillai, Sharad Babu, E-mail: sbpillai001@gmail.com; Narayan, Som; Jha, Prafulla K.
2016-05-23
This work focuses on the strain dependence of the electronic properties of two dimensional antimony (Sb) material and its alloy with As (SbAs) using density functional theory based first principles calculations. Both systems show indirect bandgap semiconducting character which can be transformed into a direct bandgap material with the application of relatively small strain.
The Determination of the Percent of Oxygen in Air Using a Gas Pressure Sensor
ERIC Educational Resources Information Center
Gordon, James; Chancey, Katherine
2005-01-01
In this experiment, performed in a general chemistry laboratory, students determine the percent of oxygen in air by comparing results calculated from pressure measurements obtained with calculator-based systems to those obtained with a water-measurement method. This experiment allows students to explore a fundamental reaction…
Radioactive waste disposal fees-Methodology for calculation
NASA Astrophysics Data System (ADS)
Bemš, Július; Králík, Tomáš; Kubančák, Ján; Vašíček, Jiří; Starý, Oldřich
2014-11-01
This paper summarizes the methodological approach used to calculate the fees for low- and intermediate-level radioactive waste disposal and for spent fuel disposal. The methodology itself is based on simulation of the cash flows related to the operation of the waste disposal system. The paper includes a demonstration of the methodology applied under the conditions of the Czech Republic.
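A cash-flow-based fee of this kind often reduces to a levelized cost: the constant fee per unit of waste whose discounted revenues cover the discounted system costs. The sketch below is a minimal illustration under that assumption; the actual Czech methodology involves many more cost components and fund rules.

```python
def disposal_fee_per_unit(annual_costs, annual_waste, discount_rate):
    """Levelized disposal fee per unit of waste.

    annual_costs, annual_waste: lists indexed by year (year 1 first).
    Returns the constant fee f such that the present value of fee
    revenues f * waste_t equals the present value of costs.
    """
    pv_costs = sum(c / (1 + discount_rate) ** (t + 1)
                   for t, c in enumerate(annual_costs))
    pv_waste = sum(w / (1 + discount_rate) ** (t + 1)
                   for t, w in enumerate(annual_waste))
    return pv_costs / pv_waste

fee = disposal_fee_per_unit(annual_costs=[100.0, 100.0, 100.0],
                            annual_waste=[10.0, 10.0, 10.0],
                            discount_rate=0.03)
```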
The Y2K Problem: Will It Just Be Another New Year's Eve?
ERIC Educational Resources Information Center
Iwanowski, Jay
1998-01-01
Potential problems for college and university computing functions posed by arrival of the year 2000 (Y2K) are discussed, including arithmetic calculations and sorting functions based on two-digit year dates, embedding of two-digit dates in archival data, system coordination for data exchange, unique number generation, and leap year calculations. A…
Cost Analysis of MRI Services in Iran: An Application of Activity Based Costing Technique
Bayati, Mohsen; Mahboub Ahari, Alireza; Badakhshan, Abbas; Gholipour, Mahin; Joulaei, Hassan
2015-01-01
Background: Considerable development of MRI technology in diagnostic imaging, the high cost of MRI technology and controversial issues concerning official charges (tariffs) have been the main motivations to define and implement this study. Objectives: The present study aimed to calculate the unit cost of MRI services using activity-based costing (ABC) as a modern cost accounting system and to fairly compare the calculated unit costs with official charges (tariffs). Materials and Methods: We included both direct and indirect costs of MRI services delivered in fiscal year 2011 in Shiraz Shahid Faghihi hospital. The direct allocation method was used for distribution of overhead costs. We used a micro-costing approach to calculate the unit cost of all different MRI services. Clinical cost data were retrieved from the hospital registration system. The straight-line method was used for depreciation cost estimation. To cope with uncertainty and to increase the robustness of the study results, unit costs of 33 MRI services were calculated under two scenarios. Results: The total annual cost of the MRI activity center (AC) was calculated at USD 400,746 and USD 532,104 based on the first and second scenarios, respectively. Ten percent of the total cost was allocated from supportive departments. The annual variable costs of the MRI center were calculated at USD 295,904. Capital costs were measured at USD 104,842 and USD 236,200 under the first and second scenarios, respectively. Existing tariffs for more than half of the MRI services were above the calculated costs. Conclusion: As a public hospital, Shahid Faghihi hospital has considerable limitations in both its financial and administrative databases. Labor cost has the greatest share of the total annual cost. The gap between unit costs and tariffs implies that the claim for extra budget from health providers may not be relevant for all services delivered by the studied MRI center.
With some adjustments, ABC could be implemented in MRI centers. With the settlement of a reliable cost accounting system such as ABC technique, hospitals would be able to generate robust evidences for financial management of their overhead, intermediate and final ACs. PMID:26715979
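The core ABC arithmetic, tracing direct costs and allocating an overhead pool by a cost driver before dividing by volume, can be sketched in a few lines. The figures and the single-driver allocation below are illustrative assumptions; the study's actual cost pools and drivers are far more detailed.

```python
def abc_unit_cost(direct_costs, overhead_pool, driver_share, volume):
    """Activity-based unit cost of one service.

    direct_costs : direct cost traced to the activity center for this service
    overhead_pool: total overhead of supportive departments
    driver_share : this service's share of the allocation driver (0..1)
    volume       : number of procedures delivered
    """
    allocated_overhead = overhead_pool * driver_share
    return (direct_costs + allocated_overhead) / volume

unit = abc_unit_cost(direct_costs=9000.0, overhead_pool=10000.0,
                     driver_share=0.1, volume=100)
```

Comparing `unit` with the official tariff for the same procedure reproduces the tariff-gap analysis the abstract describes.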
Cost Analysis of MRI Services in Iran: An Application of Activity Based Costing Technique.
Bayati, Mohsen; Mahboub Ahari, Alireza; Badakhshan, Abbas; Gholipour, Mahin; Joulaei, Hassan
2015-10-01
Considerable development of MRI technology in diagnostic imaging, the high cost of MRI technology and controversial issues concerning official charges (tariffs) have been the main motivations to define and implement this study. The present study aimed to calculate the unit cost of MRI services using activity-based costing (ABC) as a modern cost accounting system and to fairly compare the calculated unit costs with official charges (tariffs). We included both direct and indirect costs of MRI services delivered in fiscal year 2011 in Shiraz Shahid Faghihi hospital. The direct allocation method was used for distribution of overhead costs. We used a micro-costing approach to calculate the unit cost of all different MRI services. Clinical cost data were retrieved from the hospital registration system. The straight-line method was used for depreciation cost estimation. To cope with uncertainty and to increase the robustness of the study results, unit costs of 33 MRI services were calculated under two scenarios. The total annual cost of the MRI activity center (AC) was calculated at USD 400,746 and USD 532,104 based on the first and second scenarios, respectively. Ten percent of the total cost was allocated from supportive departments. The annual variable costs of the MRI center were calculated at USD 295,904. Capital costs were measured at USD 104,842 and USD 236,200 under the first and second scenarios, respectively. Existing tariffs for more than half of the MRI services were above the calculated costs. As a public hospital, Shahid Faghihi hospital has considerable limitations in both its financial and administrative databases. Labor cost has the greatest share of the total annual cost. The gap between unit costs and tariffs implies that the claim for extra budget from health providers may not be relevant for all services delivered by the studied MRI center. With some adjustments, ABC could be implemented in MRI centers.
With the establishment of a reliable cost accounting system such as the ABC technique, hospitals would be able to generate robust evidence for the financial management of their overhead, intermediate, and final ACs.
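The unit-cost arithmetic behind ABC is straightforward once costs are traced to the activity center. A minimal sketch, assuming entirely hypothetical service names and figures (the study's own cost data are not reproduced here):

```python
def unit_cost(direct_costs, overhead_allocated, volumes, service):
    """Activity-based unit cost: trace direct costs to the MRI activity
    centre, add the overhead share allocated by the direct method, and
    divide by the annual number of examinations of that service."""
    total = direct_costs[service] + overhead_allocated[service]
    return total / volumes[service]

# Hypothetical figures for two of the 33 services (not the study's data).
direct = {'brain_mri': 90000.0, 'knee_mri': 45000.0}
overhead = {'brain_mri': 10000.0, 'knee_mri': 5000.0}  # ~10% overhead share
volume = {'brain_mri': 2000, 'knee_mri': 1250}
cost_brain = unit_cost(direct, overhead, volume, 'brain_mri')  # 50.0 per exam
```

Comparing such unit costs against the official tariff per examination directly yields the cost/tariff gap discussed in the abstract.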
Systematic and simulation-free coarse graining of homopolymer melts: a relative-entropy-based study.
Yang, Delian; Wang, Qiang
2015-09-28
We applied the systematic and simulation-free strategy proposed in our previous work (D. Yang and Q. Wang, J. Chem. Phys., 2015, 142, 054905) to the relative-entropy-based (RE-based) coarse graining of homopolymer melts. RE-based coarse graining provides a quantitative measure of the coarse-graining performance and can be used to select the appropriate analytic functional forms of the pair potentials between coarse-grained (CG) segments, which are more convenient to use than the tabulated (numerical) CG potentials obtained from structure-based coarse graining. In our general coarse-graining strategy for homopolymer melts using the RE framework proposed here, the bonding and non-bonded CG potentials are coupled and need to be solved simultaneously. Taking the hard-core Gaussian thread model (K. S. Schweizer and J. G. Curro, Chem. Phys., 1990, 149, 105) as the original system, we performed RE-based coarse graining using the polymer reference interaction site model theory under the assumption that the intrachain segment pair correlation functions of CG systems are the same as those in the original system, which de-couples the bonding and non-bonded CG potentials and simplifies our calculations (that is, we only calculated the latter). We compared the performance of various analytic functional forms of non-bonded CG pair potential and closures for CG systems in RE-based coarse graining, as well as the structural and thermodynamic properties of original and CG systems at various coarse-graining levels. Our results obtained from RE-based coarse graining are also compared with those from structure-based coarse graining.
NASA Astrophysics Data System (ADS)
Yulkifli; Afandi, Zurian; Yohandri
2018-04-01
A gravitational acceleration measurement based on the simple-harmonic-motion pendulum method, digital technology, and a photogate sensor has been developed. Digital technology is more practical and optimizes experiment time. The pendulum method calculates the acceleration of gravity using a solid ball connected by a rope to a stand. The pendulum is swung at a small angle, resulting in simple harmonic motion. The measurement system consists of a power supply, photogate sensors, an Arduino Pro Mini, and a seven-segment display. The Arduino Pro Mini receives digital data from the photogate sensor and processes it into the timing of the pendulum oscillation, which is then shown on the seven-segment display. Based on the measured data, the accuracy and precision of the experimental system are 98.76% and 99.81%, respectively, so the system can be used in physics experiments, especially in the determination of gravitational acceleration.
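The underlying calculation is the small-angle pendulum relation T = 2π√(L/g). A minimal sketch, with an assumed rope length and a hypothetical timer reading standing in for the photogate/Arduino measurement:

```python
import math

def pendulum_g(length_m, period_s):
    """Estimate gravitational acceleration from a simple pendulum.

    For small-angle oscillation, T = 2*pi*sqrt(L/g), so g = 4*pi^2*L/T^2.
    """
    return 4.0 * math.pi ** 2 * length_m / period_s ** 2

# Example: a 1.00 m pendulum timed at T = 2.007 s (hypothetical photogate
# reading) gives g close to the standard 9.80 m/s^2.
g_est = pendulum_g(1.00, 2.007)
```

In practice the period would be averaged over many oscillations to reach the quoted accuracy.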
NASA Astrophysics Data System (ADS)
Liu, Xin; Lu, Hongbing; Chen, Hanyong; Zhao, Li; Shi, Zhengxing; Liang, Zhengrong
2009-02-01
Developmental dysplasia of the hip is a congenital hip joint malformation in which the proximal femur and acetabulum are subluxatable, dislocatable, or dislocated. Conventionally, physicians made diagnoses and planned treatments based only on findings from two-dimensional (2D) images, manually calculating clinical parameters. However, the anatomical complexity of the disease and the limitations of current standard procedures make accurate diagnosis quite difficult. In this study, we developed a system that provides quantitative measurement of 3D clinical indexes based on computed tomography (CT) images. To extract bone structure from surrounding tissues more accurately, the system first segments the bone using a knowledge-based fuzzy clustering method, formulated by modifying the objective function of the standard fuzzy c-means algorithm with an additive adaptation penalty. The second part of the system automatically calculates the clinical indexes, which are extended from 2D to 3D for accurate description of the spatial relationship between the femurs and acetabulum. To evaluate system performance, an experimental study of 22 patients with unilateral or bilateral affected hips was performed. The 3D acetabular index (AI) results automatically provided by the system were validated by comparison with 2D results measured manually by surgeons. The correlation between the two results was 0.622 (p<0.01).
TU-D-201-05: Validation of Treatment Planning Dose Calculations: Experience Working with MPPG 5.a
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xue, J; Park, J; Kim, L
2016-06-15
Purpose: The newly published medical physics practice guideline (MPPG 5.a.) has set the minimum requirements for commissioning and QA of treatment planning dose calculations. We present our experience in the validation of a commercial treatment planning system based on MPPG 5.a. Methods: In addition to tests traditionally performed to commission a model-based dose calculation algorithm, extensive tests were carried out at short and extended SSDs, various depths, oblique gantry angles, and off-axis conditions to verify the robustness and limitations of the dose calculation algorithm. A comparison between measured and calculated dose was performed based on the validation tests and evaluation criteria recommended by MPPG 5.a. An ion chamber was used for the measurement of dose at points of interest, and diodes were used for photon IMRT/VMAT validations. Dose profiles were measured with a three-dimensional scanning system and calculated in the TPS using a virtual water phantom. Results: Calculated and measured absolute dose profiles were compared at each specified SSD and depth for open fields. Disagreement is easily identifiable from the difference curve. Subtle discrepancies revealed the limitations of the measurement, e.g., a spike in the high-dose region and an asymmetrical penumbra observed in the tests with an oblique MLC beam. The excellent results (> 98% pass rate on the 3%/3 mm gamma index) on the end-to-end tests for both IMRT and VMAT are attributed to the quality of the beam data and a good understanding of the modeling. The limitations of the model and the uncertainty of measurement were considered when comparing the results. Conclusion: The extensive tests recommended by the MPPG encourage us to understand the accuracy and limitations of a dose algorithm as well as the uncertainty of measurement. Our experience has shown how the suggested tests can be performed effectively to validate dose calculation models.
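The 3%/3 mm gamma analysis quoted in the results can be sketched in a few lines. This is a generic 1D global-gamma implementation for illustration, not the clinical analysis software used in the study:

```python
import math

def gamma_index(ref, ref_x, eval_dose, eval_x, dd=0.03, dta=0.3):
    """1D global gamma index: for each reference point, minimise the
    combined dose-difference / distance-to-agreement metric over the
    evaluated profile. dd is the fractional dose criterion (3%), dta
    the distance criterion in cm (3 mm = 0.3 cm)."""
    d_max = max(ref)
    gammas = []
    for xr, dr in zip(ref_x, ref):
        best = float('inf')
        for xe, de in zip(eval_x, eval_dose):
            term = math.hypot((xe - xr) / dta, (de - dr) / (dd * d_max))
            best = min(best, term)
        gammas.append(best)
    return gammas

# Identical profiles -> gamma = 0 everywhere; pass rate = fraction <= 1.
xs = [0.1 * i for i in range(50)]
doses = [math.exp(-((x - 2.5) ** 2)) for x in xs]
g = gamma_index(doses, xs, doses, xs)
pass_rate = sum(1 for v in g if v <= 1.0) / len(g)
```

A point passes when some nearby evaluated point agrees within the combined dose/distance tolerance, which is why gamma tolerates small spatial shifts that a pure dose difference would flag.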
Equation of state of detonation products based on statistical mechanical theory
NASA Astrophysics Data System (ADS)
Zhao, Yanhong; Liu, Haifeng; Zhang, Gongmu; Song, Haifeng
2015-06-01
The equation of state (EOS) of gaseous detonation products is calculated using Ross's modification of hard-sphere variational theory and the improved one-fluid van der Waals mixture model. The condensed phase of carbon is a mixture of graphite, diamond, graphite-like liquid, and diamond-like liquid. For a mixed system of detonation products, the free energy minimization principle is used to calculate the equilibrium compositions of the detonation products by solving chemical equilibrium equations. A chemical equilibrium code was developed based on the theory proposed in this article and applied to typical calculations as follows: (i) calculation of the detonation parameters of explosives, where the calculated detonation velocity, detonation pressure, and detonation temperature are in good agreement with experimental values; and (ii) calculation of the isentropic unloading line of the RDX explosive, whose starting point is the CJ point. Compared with the results of the JWL EOS, the calculated value of gamma decreases monotonically using the theory presented in this paper, while a double-peak phenomenon appears with the JWL EOS.
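The free-energy-minimization step can be illustrated with a toy ideal-gas example. The reaction, temperature, and standard chemical potentials below are illustrative only and bear no relation to real detonation-product data, and a brute-force grid search stands in for a proper chemical-equilibrium solver:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def gibbs(xi, T, mu0):
    """Total Gibbs energy of an ideal-gas mixture for the toy reaction
    CO2 -> CO + 1/2 O2 at extent of reaction xi, starting from 1 mol CO2."""
    n = {'CO2': 1.0 - xi, 'CO': xi, 'O2': 0.5 * xi}
    n_tot = sum(n.values())
    return sum(ni * (mu0[s] + R * T * math.log(ni / n_tot))
               for s, ni in n.items() if ni > 0.0)

def equilibrium_extent(T, mu0, steps=20000):
    """Grid search for the extent xi in (0, 1) that minimises G --
    a crude stand-in for the chemical-equilibrium solve in the abstract."""
    best_xi, best_g = None, float('inf')
    for i in range(1, steps):
        xi = i / steps
        g = gibbs(xi, T, mu0)
        if g < best_g:
            best_xi, best_g = xi, g
    return best_xi

# Illustrative standard chemical potentials (J/mol); NOT real detonation data.
mu0 = {'CO2': -396000.0, 'CO': -138000.0, 'O2': 0.0}
xi_eq = equilibrium_extent(3000.0, mu0)  # only a trace of CO2 dissociates
```

The minimum is interior because the mixing entropy always favours a small amount of product, even when the standard reaction free energy is strongly positive.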
Equation of state of detonation products based on statistical mechanical theory
NASA Astrophysics Data System (ADS)
Zhao, Yanhong; Liu, Haifeng; Zhang, Gongmu; Song, Haifeng; Iapcm Team
2013-06-01
Knowledge-based segmentation of pediatric kidneys in CT for measuring parenchymal volume
NASA Astrophysics Data System (ADS)
Brown, Matthew S.; Feng, Waldo C.; Hall, Theodore R.; McNitt-Gray, Michael F.; Churchill, Bernard M.
2000-06-01
The purpose of this work was to develop an automated method for segmenting pediatric kidneys in contrast-enhanced helical CT images and measuring the volume of the renal parenchyma. An automated system was developed to segment the abdomen, spine, aorta and kidneys. The expected size, shape, topology, and X-ray attenuation of anatomical structures are stored as features in an anatomical model. These features guide 3-D threshold-based segmentation and then matching of extracted image regions to anatomical structures in the model. Following segmentation, the kidney volumes are calculated by summing the included voxels. To validate the system, the kidney volumes of 4 swine were calculated using our approach and compared to the 'true' volumes measured after harvesting the kidneys. Automated volume calculations were also performed retrospectively in a cohort of 10 children. The mean difference between the calculated and measured values for the swine kidneys was 1.38 (S.D. ± 0.44) cc. For the pediatric cases, calculated volumes ranged from 41.7 - 252.1 cc/kidney, and the mean ratio of right to left kidney volume was 0.96 (S.D. ± 0.07). These results demonstrate the accuracy of the volumetric technique, which may in the future provide an objective assessment of renal damage.
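The final volume step reduces to counting labelled voxels and scaling by the voxel size. A minimal sketch with a synthetic binary mask and assumed voxel dimensions:

```python
def parenchymal_volume(mask, voxel_dims_mm):
    """Sum included voxels in a binary segmentation mask and convert to cc.

    mask: nested lists (slices x rows x cols) of 0/1 labels;
    voxel_dims_mm: (slice thickness, row spacing, column spacing) in mm.
    """
    voxel_cc = (voxel_dims_mm[0] * voxel_dims_mm[1] * voxel_dims_mm[2]) / 1000.0
    n_voxels = sum(v for sl in mask for row in sl for v in row)
    return n_voxels * voxel_cc

# Synthetic example: 10 slices of a fully labelled 20x20 region with
# 5 x 1 x 1 mm voxels -> 4000 voxels * 0.005 cc = 20 cc.
mask = [[[1] * 20 for _ in range(20)] for _ in range(10)]
vol = parenchymal_volume(mask, (5.0, 1.0, 1.0))
```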
NASA Technical Reports Server (NTRS)
Monson, D. J.
1978-01-01
Based on expected advances in technology, the maximum system efficiency and minimum specific mass have been calculated for closed-cycle CO and CO2 electric-discharge lasers (EDL's) and a direct solar-pumped laser in space. The efficiency calculations take into account losses from excitation gas heating, ducting frictional and turning losses, and the compressor efficiency. The mass calculations include the power source, radiator, compressor, fluids, ducting, laser channel, optics, and heat exchanger for all of the systems; and in addition the power conditioner for the EDL's and a focusing mirror for the solar-pumped laser. The results show the major component masses in each system, show which is the lightest system, and provide the necessary criteria for solar-pumped lasers to be lighter than the EDL's. Finally, the masses are compared with results from other studies for a closed-cycle CO2 gasdynamic laser (GDL) and the proposed microwave satellite solar power station (SSPS).
Shiraogawa, Takafumi; Ehara, Masahiro; Jurinovich, Sandro; Cupellini, Lorenzo; Mennucci, Benedetta
2018-06-15
Recently, a method to calculate absorption and circular dichroism (CD) spectra based on exciton coupling has been developed. In this work, the method was used to decompose the CD and circularly polarized luminescence (CPL) spectra of a multichromophoric system into chromophore contributions for recently developed through-space conjugated oligomers. The method, which is implemented using the rotatory strength in the velocity form and is therefore gauge-invariant, enables us to evaluate the contribution of each chromophoric unit and locally excited state to the CD and CPL spectra of the total system. The excitonic calculations suitably reproduce the full calculations of the system, as well as the experimental results. We demonstrate that the interactions between electric transition dipole moments of adjacent chromophoric units are crucial in the CD and CPL spectra of the multichromophoric systems, while the interactions between electric and magnetic transition dipole moments are not negligible. © 2018 Wiley Periodicals, Inc.
An expert system for the design of heating, ventilating, and air-conditioning systems
NASA Astrophysics Data System (ADS)
Camejo, Pedro Jose
1989-12-01
Expert systems are computer programs that seek to mimic human reasoning. An expert system shell, a software program commonly used for developing expert systems in a relatively short time, was used to develop a prototypical expert system for the design of heating, ventilating, and air-conditioning (HVAC) systems in buildings. Because HVAC design involves several related knowledge domains, developing an expert system for HVAC design requires the integration of several smaller expert systems known as knowledge bases. A menu program and several auxiliary programs for gathering data, completing calculations, printing project reports, and passing data between the knowledge bases are needed and have been developed to join the separate knowledge bases into one simple-to-use program unit.
Utilization-Based Modeling and Optimization for Cognitive Radio Networks
NASA Astrophysics Data System (ADS)
Liu, Yanbing; Huang, Jun; Liu, Zhangxiong
The cognitive radio technique promises to manage and allocate the scarce radio spectrum in highly varying and disparate modern environments. This paper considers a cognitive radio scenario composed of two queues, one for the primary (licensed) users and one for the cognitive (unlicensed) users. From the Markov process, the system state equations are derived and an optimization model for the system is proposed. The system performance is then evaluated by calculations that show the soundness of the system model. Furthermore, the effects of different system parameters are discussed based on the experimental results.
Study on combat effectiveness of air defense missile weapon system based on queuing theory
NASA Astrophysics Data System (ADS)
Zhao, Z. Q.; Hao, J. X.; Li, L. J.
2017-01-01
Queuing theory is a method for analyzing the combat effectiveness of air defense missile weapon systems. A service probability model based on queuing theory was constructed and applied to analyzing the combat effectiveness of the "Sidewinder" and "Tor-M1" air defense missile weapon systems. Finally, for different target densities, the combat effectiveness of different combat units of the two types of air defense missile weapon system was calculated. This method can be used to analyze the effectiveness of air defense missile weapon systems.
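The abstract does not give the exact queueing model used. As one plausible illustration, an Erlang-loss (M/M/c/c) model gives the probability that an arriving target finds all fire channels busy; the channel count and offered load below are assumptions:

```python
import math

def erlang_b(channels, offered_load):
    """Blocking probability of an M/M/c/c loss system (Erlang B):
    the probability an arriving target finds all fire channels busy."""
    denom = sum((offered_load ** k) / math.factorial(k)
                for k in range(channels + 1))
    return ((offered_load ** channels) / math.factorial(channels)) / denom

# Service probability = 1 - blocking probability.
# Hypothetical unit: 4 fire channels, offered load of 2 Erlangs
# (target arrival rate times mean engagement time).
p_served = 1.0 - erlang_b(4, 2.0)
```

Sweeping the offered load then reproduces the kind of "effectiveness versus target density" comparison described in the abstract.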
Rail inspection system based on iGPS
NASA Astrophysics Data System (ADS)
Fu, Xiaoyan; Wang, Mulan; Wen, Xiuping
2018-05-01
Track parameters include gauge, superelevation, cross level, and so on, which can be calculated from the three-dimensional coordinates of the track. The rail inspection system based on iGPS (indoor/infrared GPS) is composed of base stations, receivers, a rail inspection frame, a wireless communication unit, a display and control unit, and a data processing unit. With the continuous movement of the inspection frame, the system can accurately measure the coordinates of the rail, realizing intelligent detection and precision measurement. The inspection model was constructed according to the principle of angle-intersection measurement, and the detection process is described.
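The angle-intersection principle can be sketched in 2D: two base stations at known positions each measure the azimuth to a receiver, and the position follows from intersecting the two rays. Station positions and angles below are made up for illustration:

```python
import math

def intersect(p1, az1_deg, p2, az2_deg):
    """Locate a receiver from the azimuth angles measured at two base
    stations (angle-intersection principle). Azimuths are measured from
    the +x axis, counter-clockwise, in degrees."""
    t1, t2 = math.radians(az1_deg), math.radians(az2_deg)
    # Solve p1 + r1*(cos t1, sin t1) = p2 + r2*(cos t2, sin t2) for r1.
    det = math.cos(t1) * math.sin(t2) - math.sin(t1) * math.cos(t2)
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    r1 = (dx * math.sin(t2) - dy * math.cos(t2)) / det
    return (p1[0] + r1 * math.cos(t1), p1[1] + r1 * math.sin(t1))

# Stations at (0,0) and (10,0); a point at (5,5) is seen at 45 and 135 deg.
x, y = intersect((0.0, 0.0), 45.0, (10.0, 0.0), 135.0)
```

The real system does this in 3D with redundant stations, which also lets it reject outlier angle measurements.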
Space-based augmentation for global navigation satellite systems.
Grewal, Mohinder S
2012-03-01
This paper describes space-based augmentation for global navigation satellite systems (GNSS). Space-based augmentations increase the accuracy and integrity of the GNSS, thereby enhancing users' safety. The corrections for ephemeris, ionospheric delay, and clocks are calculated from reference station measurements of GNSS data in wide-area master stations and broadcast via geostationary earth orbit (GEO) satellites. This paper discusses the clock models, satellite orbit determination, ionospheric delay estimation, multipath mitigation, and GEO uplink subsystem (GUS) as used in the Wide Area Augmentation System developed by the FAA.
Evaluation on Cost Overrun Risks of Long-distance Water Diversion Project Based on SPA-IAHP Method
NASA Astrophysics Data System (ADS)
Yuanyue, Yang; Huimin, Li
2018-02-01
Large investment, long routes, frequent change orders, and other factors are the main causes of cost overrun in long-distance water diversion projects. Building on existing research, this paper constructs a full-process cost overrun risk evaluation index system for water diversion projects, applies the SPA-IAHP method to set up a cost overrun risk evaluation model, and calculates and ranks the weight of every risk evaluation index. Finally, the cost overrun risks are comprehensively evaluated by calculating the linkage measure, and a comprehensive risk level is obtained. The SPA-IAHP method evaluates risks accurately and with high reliability. As verified by case calculation, it can provide valid cost overrun decision-making information to construction companies.
Sub-second pencil beam dose calculation on GPU for adaptive proton therapy
NASA Astrophysics Data System (ADS)
da Silva, Joakim; Ansorge, Richard; Jena, Rajesh
2015-06-01
Although proton therapy delivered using scanned pencil beams has the potential to produce better dose conformity than conventional radiotherapy, the created dose distributions are more sensitive to anatomical changes and patient motion. Therefore, the introduction of adaptive treatment techniques where the dose can be monitored as it is being delivered is highly desirable. We present a GPU-based dose calculation engine relying on the widely used pencil beam algorithm, developed for on-line dose calculation. The calculation engine was implemented from scratch, with each step of the algorithm parallelized and adapted to run efficiently on the GPU architecture. To ensure fast calculation, it employs several application-specific modifications and simplifications, and a fast scatter-based implementation of the computationally expensive kernel superposition step. The calculation time for a skull base treatment plan using two beam directions was 0.22 s on an Nvidia Tesla K40 GPU, whereas a test case of a cubic target in water from the literature took 0.14 s to calculate. The accuracy of the patient dose distributions was assessed by calculating the γ-index with respect to a gold standard Monte Carlo simulation. The passing rates were 99.2% and 96.7%, respectively, for the 3%/3 mm and 2%/2 mm criteria, matching those produced by a clinical treatment planning system.
Some computer graphical user interfaces in radiation therapy.
Chow, James C L
2016-03-28
In this review, five graphical user interfaces (GUIs) used in radiation therapy practice and research are introduced. They are: (1) the treatment time calculator, superficial X-ray treatment time calculator (SUPCALC), used in superficial X-ray radiation therapy; (2) the monitor unit calculator, electron monitor unit calculator (EMUC), used in electron radiation therapy; (3) the multileaf collimator machine file creator, sliding window intensity modulated radiotherapy (SWIMRT), used to generate fluence maps for research and quality assurance in intensity modulated radiation therapy; (4) the treatment planning system, DOSCTP, used in the calculation of 3D dose distributions using Monte Carlo simulation; and (5) the monitor unit calculator, photon beam monitor unit calculator (PMUC), used in photon beam radiation therapy. One common feature of these GUIs is that the user-friendly interfaces are linked to complex formulas and algorithms based on various theories, which do not have to be understood by the user. The user only needs to input the required information, with help from graphical elements, to produce the desired results. SUPCALC is a superficial radiation treatment time calculator that uses the GUI technique to provide a convenient way for radiation therapists to calculate the treatment time and keep a record for the skin cancer patient. EMUC is an electron monitor unit calculator for electron radiation therapy. Instead of doing hand calculations according to pre-determined dosimetric tables, the clinical user needs only to input the required drawing of the electron field in a computer graphical file format, the prescription dose, and the beam parameters to EMUC to calculate the required monitor units for the electron beam treatment. EMUC is based on a semi-empirical sector-integration algorithm. SWIMRT is a multileaf collimator machine file creator that generates a fluence map produced by a medical linear accelerator.
This machine file controls the multileaf collimator to deliver intensity modulated beams for a specific fluence map used in quality assurance or research. DOSCTP is a treatment planning system based on computed tomography images. Radiation beams (photon or electron) with different energies and field sizes produced by a linear accelerator can be placed at different positions to irradiate the tumour in the patient. DOSCTP is linked to a Monte Carlo simulation engine using the EGSnrc-based code, so that the 3D dose distribution can be determined accurately for radiation therapy. Moreover, DOSCTP can be used for treatment planning of patients or small animals. PMUC is a GUI for calculating monitor units based on the prescription dose of the patient in photon beam radiation therapy. The calculation is based on dose corrections for changes in photon beam energy, treatment depth, field size, jaw position, beam axis, treatment distance, and beam modifiers. All GUIs mentioned in this review were written either in Microsoft Visual Basic .NET or with a MATLAB GUI development tool called GUIDE. In addition, all GUIs were verified and tested using measurements to ensure their accuracy was up to clinically acceptable levels for implementation.
Adaptive real-time methodology for optimizing energy-efficient computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsu, Chung-Hsing; Feng, Wu-Chun
Dynamic voltage and frequency scaling (DVFS) is an effective way to reduce energy and power consumption in microprocessor units. Current implementations of DVFS suffer from inaccurate modeling of power requirements and usage, and from inaccurate characterization of the relationships between the applicable variables. A system and method is proposed that adjusts CPU frequency and voltage based on run-time calculations of the workload processing time, as well as a calculation of performance sensitivity with respect to CPU frequency. The system and method are processor independent, and can be applied either to an entire system as a unit, or individually to each process running on a system.
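A minimal sketch of the idea, under a simple assumed execution-time model that splits run time into a frequency-sensitive CPU part and a frequency-insensitive memory part; this is an illustration of the general DVFS principle, not the patented method:

```python
def pick_frequency(freqs, w_cycles, t_mem, max_slowdown=0.05):
    """Pick the lowest frequency whose predicted run time stays within
    max_slowdown of the run time at the fastest frequency.

    Assumed model: t(f) = w_cycles / f + t_mem, i.e. a CPU-bound part
    that scales with frequency plus a memory-bound part that does not.
    The ratio of the two parts is the workload's performance
    sensitivity with respect to frequency.
    """
    f_max = max(freqs)
    t_best = w_cycles / f_max + t_mem
    for f in sorted(freqs):
        if w_cycles / f + t_mem <= (1.0 + max_slowdown) * t_best:
            return f
    return f_max

# A memory-bound workload (t_mem dominates) tolerates a low frequency:
f = pick_frequency([1.0e9, 1.5e9, 2.0e9], w_cycles=2.0e8, t_mem=1.8)
```

Because the memory part dominates here, dropping from 2.0 GHz to 1.5 GHz costs under 5% in run time while saving substantial power.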
NASA Astrophysics Data System (ADS)
Oberhofer, Harald; Blumberger, Jochen
2010-12-01
We present a plane wave basis set implementation for the calculation of electronic coupling matrix elements of electron transfer reactions within the framework of constrained density functional theory (CDFT). Following the work of Wu and Van Voorhis [J. Chem. Phys. 125, 164105 (2006)], the diabatic wavefunctions are approximated by the Kohn-Sham determinants obtained from CDFT calculations, and the coupling matrix element calculated by an efficient integration scheme. Our results for intermolecular electron transfer in small systems agree very well with high-level ab initio calculations based on generalized Mulliken-Hush theory, and with previous local basis set CDFT calculations. The effect of thermal fluctuations on the coupling matrix element is demonstrated for intramolecular electron transfer in the tetrathiafulvalene-diquinone (Q-TTF-Q-) anion. Sampling the electronic coupling along density functional based molecular dynamics trajectories, we find that thermal fluctuations, in particular the slow bending motion of the molecule, can lead to changes in the instantaneous electron transfer rate by more than an order of magnitude. The thermal average, ⟨|H_ab|^2⟩^{1/2} = 6.7 mH, is significantly higher than the value obtained for the minimum energy structure, |H_ab| = 3.8 mH. While CDFT in combination with generalized gradient approximation (GGA) functionals describes the intermolecular electron transfer in the studied systems well, exact exchange is required for Q-TTF-Q- in order to obtain coupling matrix elements in agreement with experiment (3.9 mH). The implementation presented opens up the possibility to compute electronic coupling matrix elements for extended systems where donor, acceptor, and the environment are treated at the quantum mechanical (QM) level.
Intelligent person identification system using stereo camera-based height and stride estimation
NASA Astrophysics Data System (ADS)
Ko, Jung-Hwan; Jang, Jae-Hun; Kim, Eun-Soo
2005-05-01
In this paper, a stereo camera-based intelligent person identification system is suggested. In the proposed method, the face area of the moving target person is extracted from the left image of the input stereo image pair using a threshold in the YCbCr color model. By correlating this segmented face area with the right input image, the location coordinates of the target face are acquired; these values are then used to control the pan/tilt system through a modified PID-based recursive controller. Also, from the geometric parameters between the target face and the stereo camera system, the vertical distance between the target and the stereo camera system is calculated through a triangulation method. Using this calculated vertical distance and the pan and tilt angles, the target's real position in world space can be acquired, and from it the height and stride values are finally extracted. Experiments with video images of 16 moving persons show that a person could be identified using these extracted height and stride parameters.
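The triangulation and height extraction can be sketched as follows. The focal length, baseline, and angles are made-up values, and the simple geometric model (camera height plus distance times tangent of the tilt angle) is an assumption for illustration, not the paper's exact formulation:

```python
import math

def target_distance(focal_mm, baseline_mm, disparity_mm):
    """Stereo triangulation: perpendicular distance Z = f * B / d."""
    return focal_mm * baseline_mm / disparity_mm

def target_height(cam_height_m, dist_m, tilt_top_deg, tilt_bottom_deg):
    """Height from the camera height, the distance to the person, and the
    tilt angles at which the head and the feet are centred (angles above
    the horizontal are positive, below are negative)."""
    top = cam_height_m + dist_m * math.tan(math.radians(tilt_top_deg))
    bottom = cam_height_m + dist_m * math.tan(math.radians(tilt_bottom_deg))
    return top - bottom

# Hypothetical rig: 8 mm lenses, 200 mm baseline, 0.4 mm disparity -> 4 m.
z_mm = target_distance(8.0, 200.0, 0.4)
# Camera 1.2 m high, person 4 m away, head at +8.53 deg, feet at -16.7 deg.
h = target_height(1.2, 4.0, 8.53, -16.7)
```

Stride would be obtained the same way, from the world-space foot positions at successive steps.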
Background for Joint Systems Aspects of AIR 6000
2000-04-01
Checkland's Soft Systems Methodology [7, 8, 9]. The analytical techniques that are proposed for joint systems work are based on calculating probability... (DSTO-CR-0155.) Glossary fragments: SLMP, Structural Life Management Plan; SOW, Stand-Off Weapon; SSM, Soft Systems Methodology; UAV, Uninhabited Aerial... References include: Checkland and Scholes, Soft Systems Methodology in Action, John Wiley & Sons, Chichester, 1990; [10] Pearl, Judea, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible...
Quantum chemical calculation of the equilibrium structures of small metal atom clusters
NASA Technical Reports Server (NTRS)
Kahn, L. R.
1982-01-01
Metal atom clusters are studied based on the application of ab initio quantum mechanical approaches. Because these large 'molecular' systems pose special practical computational problems in the application of the quantum mechanical methods, there is a special need to find simplifying techniques that do not compromise the reliability of the calculations. Research is therefore directed towards various aspects of the implementation of the effective core potential technique for the removal of the metal atom core electrons from the calculations.
Received power as a function of target range in short-range optical radar devices.
Riegl, J; Bernhard, M
1974-04-01
The dependence of the received optical power on the range in optical short-distance radar range finders is calculated by means of the methods of geometrical optics. The calculations are based on a constant intensity of the transmitter-beam cross section and on an ideal thin lens for the receiver optics. The results are confirmed by measurements. Even measurements using a nonideal thick lens system for the receiver optics are in reasonable agreement with the calculations.
Electronic structure calculation by nonlinear optimization: Application to metals
NASA Astrophysics Data System (ADS)
Benedek, R.; Min, B. I.; Woodward, C.; Garner, J.
1988-04-01
There is considerable interest in the development of novel algorithms for the calculation of electronic structure (e.g., at the level of the local-density approximation of density-functional theory). In this paper we consider a first-order equation-of-motion method. Two methods of solution are described, one proposed by Williams and Soler, and the other based on a Born-Dyson series expansion. The extension of the approach to metallic systems is outlined and preliminary numerical calculations for Zintl-phase NaTl are presented.
Net carbon flux in organic and conventional olive production systems
NASA Astrophysics Data System (ADS)
Saeid Mohamad, Ramez; Verrastro, Vincenzo; Bitar, Lina Al; Roma, Rocco; Moretti, Michele; Chami, Ziad Al
2014-05-01
Agricultural systems are considered one of the most relevant sources of atmospheric carbon. However, agriculture has the potential to mitigate carbon dioxide, mainly through soil carbon sequestration. Some agricultural practices, particularly fertilization and soil management, can play a dual role in agricultural systems with regard to the carbon cycle, contributing both to emissions and to the sequestration process in the soil. Good soil and input management positively affects Soil Organic Carbon (SOC) changes and consequently the carbon cycle. The present study aimed to compare the carbon footprint of organic and conventional olive systems and to link it to the efficiency of both systems in carbon sequestration by calculating the net carbon flux. Data were collected at the farm level through a specific and detailed questionnaire, with one hectare as the functional unit and a system boundary limited to olive production. Using LCA databases, particularly ecoinvent, the IPCC GWP 100a impact assessment method was used to calculate carbon emissions from the agricultural practices of both systems. Soil organic carbon was measured at 0-30 cm depth based on soil analyses done at the IAMB laboratory, and the annual change in SOC was calculated from a reference SOC value. Subtracting the carbon sequestered in the soil from the carbon emitted yielded the net carbon flux. Results showed a higher environmental impact of the organic system on Global Warming Potential (1.07 t CO2 eq. yr-1, compared to 0.76 t CO2 eq. yr-1 in the conventional system) due to the higher GHG emissions caused by manure fertilizers compared to the synthetic foliar fertilizers used in the conventional system. However, manure was also the main reason behind the higher SOC content and sequestration in the organic system.
As a result, the organic system showed a higher (more negative) net carbon flux (-1.7 t C ha-1 yr-1, versus -0.52 t C ha-1 yr-1 in the conventional system), reflecting higher efficiency as a sink for atmospheric CO2 (a negative net C flux indicates that a system is a net sink for atmospheric CO2). In conclusion, this study illustrates the importance of including soil carbon sequestration alongside CO2 emissions when evaluating alternative agricultural systems. Thus, the organic olive system offers an opportunity to increase carbon sequestration compared to the conventional one, although it causes higher C emissions from manure fertilization. Keywords: Net carbon flux, GHG, organic, olive, soil organic carbon
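The net-flux bookkeeping is simple arithmetic: convert emissions from t CO2-eq to t C (factor 12/44) and subtract the annual SOC gain. The SOC figures below are back-calculated to reproduce the fluxes reported in the abstract and are only illustrative:

```python
def net_carbon_flux(emissions_tco2, soc_change_tc):
    """Net flux in t C ha^-1 yr^-1: emitted carbon minus sequestered carbon.

    Emissions are given in t CO2-eq and converted to t C via the molar
    mass ratio 12/44; a negative result means the system is a net sink.
    """
    emitted_c = emissions_tco2 * 12.0 / 44.0
    return emitted_c - soc_change_tc

# Emissions from the abstract (organic 1.07, conventional 0.76 t CO2-eq);
# the SOC gains (1.99 and 0.73 t C) are back-calculated, not measured data.
organic = net_carbon_flux(1.07, 1.99)        # about -1.7 t C ha^-1 yr^-1
conventional = net_carbon_flux(0.76, 0.73)   # about -0.52 t C ha^-1 yr^-1
```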
Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel
NASA Astrophysics Data System (ADS)
Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele
2009-12-01
An accurate approach to computing the bit error rate expression for a multiuser chaos-based DS-CDMA system is presented in this paper. For a more realistic communication system, a slow-fading multipath channel is considered, together with a simple RAKE receiver structure. Based on the bit energy distribution, this approach gives accurate results at low computational cost compared to other computation methods in the literature. Perfect estimation of the channel coefficients with the associated delays, as well as chaos synchronization, is assumed. The bit error rate is derived in terms of the bit energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations, which point out the accuracy of our approach.
Multiradar tracking for theater missile defense
NASA Astrophysics Data System (ADS)
Sviestins, Egils
1995-09-01
A prototype system for tracking tactical ballistic missiles using multiple radars has been developed. The tracking is based on measurement-level fusion (`true' multi-radar tracking). Strobes from passive sensors can also be used. We describe various features of the system, with some emphasis on the filtering technique. This is based on the Interacting Multiple Model framework, where the states are Free Flight, Drag, Boost, and Auxiliary. Measurement error modeling includes the signal-to-noise ratio dependence; outliers and miscorrelations are handled in the same way. The launch point is calculated within one minute of the detection of the missile. The impact point, and its uncertainty region, is calculated continually by extrapolating the track state vector using the equations of planetary motion.
Methods and systems for monitoring a solid-liquid interface
Stoddard, Nathan G [Gettysburg, PA; Clark, Roger F [Frederick, MD
2011-10-04
Methods and systems are provided for monitoring a solid-liquid interface, including providing a vessel configured to contain an at least partially melted material; detecting radiation reflected from a surface of a liquid portion of the at least partially melted material; providing sound energy to the surface; measuring a disturbance on the surface; calculating at least one frequency associated with the disturbance; and determining a thickness of the liquid portion based on the at least one frequency, wherein the thickness is calculated as L = (2m-1)v_s/(4f), where f is the frequency at which the disturbance has an amplitude maximum, v_s is the speed of sound in the material, and m is a positive integer (1, 2, 3, . . . ).
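The stated thickness formula is a direct quarter-wave resonance relation and can be applied as below; the numeric values in the usage line are illustrative placeholders, not figures from the patent:

```python
def liquid_thickness(f_peak_hz, speed_of_sound_m_s, m=1):
    """Thickness of the liquid layer from the resonance condition
    L = (2m - 1) * v_s / (4 * f), as given in the abstract.

    f_peak_hz: frequency of the amplitude maximum of the surface disturbance
    speed_of_sound_m_s: speed of sound in the material
    m: positive integer mode number (1, 2, 3, ...)
    """
    if m < 1:
        raise ValueError("m must be a positive integer")
    return (2 * m - 1) * speed_of_sound_m_s / (4.0 * f_peak_hz)

# Illustrative numbers only: v_s = 4000 m/s, f = 10 kHz
print(liquid_thickness(10_000, 4000))  # fundamental mode -> 0.1 m
```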
Wei, Qichao; Zhao, Weilong; Yang, Yang; Cui, Beiliang; Xu, Zhijun; Yang, Xiaoning
2018-03-19
Considerable interest in characterizing protein/peptide-surface interactions has prompted extensive computational studies on calculations of adsorption free energy. However, in many cases, each individual study has focused on the application of free energy calculations to a specific system; therefore, it is difficult to combine the results into a general picture for choosing an appropriate strategy for the system of interest. Herein, three well-established computational algorithms are systematically compared and evaluated to compute the adsorption free energy of small molecules on two representative surfaces. The results clearly demonstrate that the characteristics of the studied interfacial systems have crucial effects on the accuracy and efficiency of the adsorption free energy calculations. For the hydrophobic surface, steered molecular dynamics exhibits the highest efficiency, which makes it a favorable method of choice for enhanced sampling simulations. However, for the charged surface, only the umbrella sampling method has the ability to accurately explore the adsorption free energy surface. The affinity of the water layer to the surface significantly affects the performance of free energy calculation methods, especially in the region close to the surface. Therefore, a general principle of how to discriminate between methodological and sampling issues based on the interfacial characteristics of the system under investigation is proposed. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Technical Reports Server (NTRS)
Cheng, H. K.; Wong, Eric Y.; Dogra, V. K.
1991-01-01
Grad's thirteen-moment equations are applied to the flow behind a bow shock under the formalism of a thin shock layer. Comparison of this version of the theory with Direct Simulation Monte Carlo calculations of flows about a flat plate at finite attack angle has lent support to the approach as a useful extension of the continuum model for studying translational nonequilibrium in the shock layer. This paper reassesses the physical basis and limitations of the development with additional calculations and comparisons. The streamline correlation principle, which allows transformation of the 13-moment based system to one based on the Navier-Stokes equations, is extended to a three-dimensional formulation. The development yields a strip theory for planar lifting surfaces at finite incidences. Examples reveal that the lift-to-drag ratio is little influenced by planform geometry and varies with altitudes according to a 'bridging function' determined by correlated two-dimensional calculations.
Synthesis of novel stable compounds in the phosphorous-nitrogen system under pressure
NASA Astrophysics Data System (ADS)
Stavrou, Elissaios; Batyrev, Iskander; Ciezak-Jenkins, Jennifer; Grivickas, Paulius; Zaug, Joseph; Greenberg, Eran; Kunz, Martin
2017-06-01
We explore the possible formation of stable, and metastable at ambient conditions, polynitrogen compounds in the P-N system under pressure using in situ X-ray diffraction and Raman spectroscopy in synergy with first-principles evolutionary structural search algorithms (USPEX). We have performed numerous synthesis experiments at pressures from near ambient up to 50 GPa using both a mixture of elemental P and N2 and relevant precursors such as P3N5. Calculations of P-N extended structures at 10, 30, and 50 GPa were done using USPEX based on density functional theory (DFT) plane-wave calculations (VASP) with ultrasoft pseudopotentials. A full convex hull was found for N-rich compositions of the P-N binary system. Variable-composition calculations were complemented by fixed-composition calculations at selected nitrogen-rich stoichiometries. Stable structures were refined by DFT calculations using norm-conserving pseudopotentials. A comparison between our results and previous studies of the same system will also be given. Part of this work was performed under the auspices of the U.S. DoE by LLNS, LLC under Contract DE-AC52-07NA27344. We thank the Joint DoD/DOE Munitions Technology Development Program and the HE science C-II program at LLNL for supporting this study.
NASA Astrophysics Data System (ADS)
Ma, Kevin; Moin, Paymann; Zhang, Aifeng; Liu, Brent
2010-03-01
Bone Age Assessment (BAA) of children is a clinical procedure frequently performed in pediatric radiology to evaluate the stage of skeletal maturation based on a left-hand x-ray radiograph. The current BAA standard in the US is the Greulich & Pyle (G&P) Hand Atlas, which was developed fifty years ago and was based only on a Caucasian population from the midwestern US. To bring the BAA procedure up to date with today's population, a Digital Hand Atlas (DHA) consisting of 1400 hand images of normal children of different ethnicities, ages, and genders was assembled. Based on the DHA, and to resolve inter- and intra-observer reading discrepancies, an automatic computer-aided bone age assessment system has been developed and tested in clinical environments. The algorithm utilizes features extracted from three regions of interest: phalanges, carpal bones, and radius. The features are aggregated into a fuzzy logic system, which outputs the calculated bone age. The previous BAA system used only features from the phalanges and carpal bones, so BAA results for children over the age of 15 were less accurate. In this project, the new radius features are incorporated into the overall BAA system. The bone age results calculated from the new fuzzy logic system are compared against radiologists' readings based on the G&P atlas and exhibit an improvement in reading accuracy for older children.
Cardiac Mean Electrical Axis in Thoroughbreds—Standardization by the Dubois Lead Positioning System
da Costa, Cássia Fré; Samesima, Nelson; Pastore, Carlos Alberto
2017-01-01
Background: Different methodologies for electrocardiographic acquisition in horses have been used since the first ECG recordings in equines were reported early in the last century. This study aimed to determine the best ECG electrode positioning method and the most reliable calculation of the mean cardiac axis (MEA) in equines. Materials and Methods: We evaluated the electrocardiographic profile of 53 clinically healthy Thoroughbreds, 38 males and 15 females, with ages ranging from 2 to 7 years, all reared at the São Paulo Jockey Club in Brazil. Two ECG tracings were recorded from each animal, one using the Dubois lead positioning system and the second using the base-apex method. QRS complex amplitudes were analyzed to obtain MEA values in the frontal plane for each of the two electrode positioning methods, using two calculation approaches: the first by Tilley tables and the second by trigonometric calculation. Results were compared between the two methods. Results: There was a significant difference in cardiac axis values: the MEA obtained by the Tilley tables was +135.1° ± 90.9° vs. -81.1° ± 3.6° (p<0.0001), and by trigonometric calculation it was -15.0° ± 11.3° vs. -79.9° ± 7.4° (p<0.0001), for base-apex and Dubois, respectively. Furthermore, the Dubois method presented a small range of variation without statistical or clinical difference by either calculation mode, while there was a wide variation in the base-apex method. Conclusion: The Dubois method improved centralization of the Thoroughbreds' hearts, capturing what seems to be the real frontal plane. By either calculation mode, it was the most reliable methodology for obtaining the cardiac mean electrical axis in equines. PMID:28095442
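A trigonometric MEA calculation of the kind mentioned above can be sketched from net QRS amplitudes in two orthogonal frontal-plane leads; this is the common textbook relation using leads I and aVF, and the paper's exact lead choice and procedure may differ:

```python
import math

def mean_electrical_axis(lead_i_mv, lead_avf_mv):
    """Frontal-plane mean electrical axis (degrees) from net QRS
    amplitudes in leads I and aVF, which are orthogonal in the
    hexaxial reference system. A standard textbook formulation,
    not necessarily the paper's exact procedure.
    Returns an angle in (-180, 180]."""
    return math.degrees(math.atan2(lead_avf_mv, lead_i_mv))

# Equal positive deflections in I and aVF give an axis of about +45 degrees
print(mean_electrical_axis(0.5, 0.5))
```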
A three dimensional point cloud registration method based on rotation matrix eigenvalue
NASA Astrophysics Data System (ADS)
Wang, Chao; Zhou, Xiang; Fei, Zixuan; Gao, Xiaofei; Jin, Rui
2017-09-01
In traditional optical three-dimensional measurement, an object usually must be measured from multiple angles because of occlusion, and point cloud registration methods are then used to obtain the complete three-dimensional shape of the object. Point cloud registration based on a turntable essentially requires calculating the coordinate transformation matrix between the camera coordinate system and the turntable coordinate system. The traditional method calculates this transformation matrix by fitting the rotation center and the rotation axis normal of the turntable, which is limited by the measurement field of view: the exact feature points available for fitting are distributed within an arc of less than roughly 120 degrees, resulting in low fitting accuracy. In this paper, we propose a better method, based on the principle that the eigenvalues of the rotation matrix are invariant in the turntable coordinate system, and on the coordinate transformation matrix of corresponding coordinate points. First, we control the rotation angle of the calibration plate with the turntable and calibrate the coordinate transformation matrix of corresponding points using the least squares method. We then use eigendecomposition to calculate the coordinate transformation matrix between the camera coordinate system and the turntable coordinate system. Compared with the traditional method, this approach has higher accuracy and better robustness, and it is not affected by the camera field of view. With this method, the coincidence error of corresponding points on the calibration plate after registration is less than 0.1 mm.
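The eigenvalue invariance the method relies on can be illustrated by recovering a turntable's rotation axis as the eigenvector of the rotation matrix associated with eigenvalue 1; this is a minimal sketch of that one step, not the authors' registration pipeline:

```python
import numpy as np

def rotation_axis(R):
    """Recover the rotation axis of a 3x3 rotation matrix as the real
    eigenvector associated with the eigenvalue closest to 1 (a proper
    rotation always has eigenvalue 1 on its axis)."""
    w, v = np.linalg.eig(R)
    idx = np.argmin(np.abs(w - 1.0))
    axis = np.real(v[:, idx])
    return axis / np.linalg.norm(axis)

# Rotation by 30 degrees about z: the recovered axis is +/- the z axis
theta = np.deg2rad(30)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
print(rotation_axis(Rz))
```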
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, G
2014-06-01
Purpose: In order to receive DICOM files from a treatment planning system and automatically generate a patient isocenter positioning parameter file for a CT laser system, this paper presents a method for communicating with the treatment planning system and calculating the isocenter parameter for each radiation field. Methods: Coordinate transformations and laser positioning file formats were analyzed, and the isocenter parameter was calculated from DICOM CT data and the DICOM RTPLAN file. An in-house software tool, DicomGenie, was developed on the object-oriented Qt platform with the DCMTK SDK (a DICOM SDK from the German company OFFIS). DicomGenie was tested for accuracy using a Philips CT simulation planning system (Tumor LOC, Philips) and an A2J CT positioning laser system (Thorigny-sur-Marne, France). Results: DicomGenie successfully established DICOM communication with the treatment planning system; DICOM files were received by DicomGenie, and patient laser isocenter information was generated accurately. The patient laser parameter data files can be used directly by the CT laser system. Conclusion: The in-house software DicomGenie received and extracted DICOM data; the isocenter laser positioning data files it created can be used with the A2J laser positioning system.
Global optimization method based on ray tracing to achieve optimum figure error compensation
NASA Astrophysics Data System (ADS)
Liu, Xiaolin; Guo, Xuejia; Tang, Tianjin
2017-02-01
Figure error degrades the performance of an optical system. When predicting performance and performing system assembly, compensation by clocking optical components around the optical axis is a conventional but user-dependent method. Commercial optical software cannot optimize this clocking, while existing automatic figure-error balancing methods introduce approximation errors, and building their optimization models is complex and time-consuming. To overcome these limitations, an accurate and automatic global optimization method for figure error balancing is proposed. The method is based on precise ray tracing, rather than approximate calculation, to compute the wavefront error for a given combination of element rotation angles. The composite wavefront error root-mean-square (RMS) acts as the cost function, and a simulated annealing algorithm seeks the optimal combination of rotation angles for each optical element. The method can be applied to all rotationally symmetric optics. Optimization results show that this method is 49% better than the previous approximate analytical method.
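The optimization loop described above can be sketched as a generic simulated-annealing search over per-element clocking angles; the cost function here is a toy stand-in for the ray-traced composite wavefront RMS, and all parameter values are illustrative assumptions:

```python
import math
import random

def anneal_clocking(cost, n_elements, t0=1.0, t_min=1e-4, alpha=0.95, steps=50):
    """Simulated-annealing search over per-element clocking angles
    (radians), minimizing a user-supplied cost. In the paper's setting
    the cost would be the composite wavefront RMS from exact ray
    tracing, assumed here as an opaque callable."""
    angles = [random.uniform(0, 2 * math.pi) for _ in range(n_elements)]
    best, best_cost = list(angles), cost(angles)
    t = t0
    while t > t_min:
        for _ in range(steps):
            cand = list(angles)
            i = random.randrange(n_elements)
            # Perturb one element's clocking angle, scaled by temperature
            cand[i] = (cand[i] + random.gauss(0, t)) % (2 * math.pi)
            dc = cost(cand) - cost(angles)
            # Accept improvements always, worse moves with Boltzmann probability
            if dc < 0 or random.random() < math.exp(-dc / t):
                angles = cand
                if cost(angles) < best_cost:
                    best, best_cost = list(angles), cost(angles)
        t *= alpha  # geometric cooling schedule
    return best, best_cost

# Toy cost with a known optimum at all angles = pi (not a real wavefront model)
toy = lambda a: sum((x - math.pi) ** 2 for x in a)
angles, rms = anneal_clocking(toy, 3)
```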
Measurement System Analyses - Gauge Repeatability and Reproducibility Methods
NASA Astrophysics Data System (ADS)
Cepova, Lenka; Kovacikova, Andrea; Cep, Robert; Klaput, Pavel; Mizera, Ondrej
2018-02-01
The submitted article focuses on a detailed explanation of the average and range method (the Automotive Industry Action Group's Measurement System Analysis approach) and of the honest Gauge Repeatability and Reproducibility method (the Evaluating the Measurement Process approach). The measured data (thickness of plastic parts) were evaluated by both methods and their results were compared on the basis of numerical evaluation. Both methods were additionally compared and their advantages and disadvantages discussed. One difference between the methods is the calculation of variation components: the AIAG method calculates the variation components based on standard deviation (so the variation components do not sum to 100 %), whereas the honest GRR study calculates the variation components based on variance, where the sum of all variation components (part-to-part variation, EV and AV) gives the total variation of 100 %. Acceptance of both methods in the professional community, future use, and acceptance by the manufacturing industry are also discussed. Nowadays, the AIAG method is the leading method in industry.
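The variance-based property mentioned above (components summing to 100 %) can be shown with a short sketch; the numeric variances are made-up placeholders and this is not the AIAG or EMP reference implementation:

```python
def variation_components(ev_var, av_var, pv_var):
    """Percent contributions in the 'honest' GRR style: components are
    expressed as variances, so they sum to 100 % of total variation.
    ev_var: equipment variation (repeatability), av_var: appraiser
    variation (reproducibility), pv_var: part-to-part variation."""
    total = ev_var + av_var + pv_var
    return {name: 100.0 * v / total
            for name, v in (("EV", ev_var), ("AV", av_var), ("PV", pv_var))}

# Placeholder variances: EV/AV/PV contributions of roughly 16 %, 4 %, 80 %
shares = variation_components(ev_var=0.04, av_var=0.01, pv_var=0.20)
print(shares)
```

With standard deviations instead of variances (the AIAG style), the analogous percentages would not total 100 %, which is exactly the difference the abstract points out.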
Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.
2013-01-01
The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges −5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol−1) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. 
While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB calculations for a given system, its dependence on the box size being analytical. The latter scheme also provides insight into the physical origin of the finite-size effects. These two schemes also encompass a correction for discrete solvent effects that persists even in the limit of infinite box sizes. Application of either scheme essentially eliminates the size dependence of the corrected charging free energies (maximal deviation of 1.5 kJ mol−1). Because it is simple to apply, the analytical correction scheme offers a general solution to the problem of finite-size effects in free-energy calculations involving charged solutes, as encountered in calculations concerning, e.g., protein-ligand binding, biomolecular association, residue mutation, pKa and redox potential estimation, substrate transformation, solvation, and solvent-solvent partitioning. PMID:24320250
Fuel Characteristic Classification System version 3.0: technical documentation
Susan J. Prichard; David V. Sandberg; Roger D. Ottmar; Ellen Eberhardt; Anne Andreu; Paige Eagle; Kjell Swedin
2013-01-01
The Fuel Characteristic Classification System (FCCS) is a software module that records wildland fuel characteristics and calculates potential fire behavior and hazard potentials based on input environmental variables. The FCCS 3.0 is housed within the Integrated Fuels Treatment Decision Support System (Joint Fire Science Program 2012). It can also be run from command...
Performance analysis of an air drier for a liquid dehumidifier solar air conditioning system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Queiroz, A.G.; Orlando, A.F.; Saboya, F.E.M.
1988-05-01
A model was developed for calculating the operating conditions of a non-adiabatic liquid dehumidifier used in solar air conditioning systems. In the experimental facility used for obtaining the data, air and triethylene glycol circulate countercurrently outside staggered copper tubes which are the filling of an absorption tower. Water flows inside the copper tubes, thus cooling the whole system and increasing the mass transfer potential for drying air. The methodology for calculating the mass transfer coefficient is based on the Merkel integral approach, taking into account the lowering of the water vapor pressure in equilibrium with the water glycol solution.
An Interpreted Language and System for the Visualization of Unstructured Meshes
NASA Technical Reports Server (NTRS)
Moran, Patrick J.; Gerald-Yamasaki, Michael (Technical Monitor)
1998-01-01
We present an interpreted language and system supporting the visualization of unstructured meshes and the manipulation of shapes defined in terms of mesh subsets. The language features primitives inspired by geometric modeling, mathematical morphology and algebraic topology. The adaptation of the topology ideas to an interpreted environment, along with support for programming constructs such as user function definition, provides a flexible system for analyzing a mesh and for calculating with shapes defined in terms of the mesh. We present results demonstrating some of the capabilities of the language, based on an implementation called the Shape Calculator, for tetrahedral meshes in R^3.
Validation of the CME Geomagnetic Forecast Alerts Under the COMESEP Alert System
NASA Astrophysics Data System (ADS)
Dumbović, Mateja; Srivastava, Nandita; Rao, Yamini K.; Vršnak, Bojan; Devos, Andy; Rodriguez, Luciano
2017-08-01
Under the European Union 7th Framework Programme (EU FP7) project Coronal Mass Ejections and Solar Energetic Particles (COMESEP, http://comesep.aeronomy.be), an automated space weather alert system has been developed to forecast solar energetic particles (SEP) and coronal mass ejection (CME) risk levels at Earth. The COMESEP alert system uses the automated detection tool called Computer Aided CME Tracking (CACTus) to detect potentially threatening CMEs, a drag-based model (DBM) to predict their arrival, and a CME geoeffectiveness tool (CGFT) to predict their geomagnetic impact. Whenever CACTus detects a halo or partial halo CME and issues an alert, the DBM calculates its arrival time at Earth and the CGFT calculates its geomagnetic risk level. The geomagnetic risk level is calculated based on an estimation of the CME arrival probability and its likely geoeffectiveness, as well as an estimate of the geomagnetic storm duration. We present the evaluation of the CME risk level forecast with the COMESEP alert system based on a study of geoeffective CMEs observed during 2014. The validation of the forecast tool is made by comparing the forecasts with observations. In addition, we test the success rate of the automatic forecasts (without human intervention) against the forecasts with human intervention using advanced versions of the DBM and CGFT (independent tools available at the Hvar Observatory website, http://oh.geof.unizg.hr). The results indicate that the success rate of the forecast in its current form is unacceptably low for a realistic operation system. Human intervention improves the forecast, but the false-alarm rate remains unacceptably high. We discuss these results and their implications for possible improvement of the COMESEP alert system.
Klopčič, M; Koops, W J; Kuipers, A
2013-09-01
The milk production of a dairy cow is characterized by lactation production, which is calculated from daily milk yields (DMY) during lactation. The DMY is calculated from one or more milkings a day collected at the farm. Various milking systems are in use today, resulting in one or many recorded milk yields a day, from which different calculations are used to determine DMY. The primary objective of this study was to develop a mathematical function that described milk production of a dairy cow in relation to the interval between 2 milkings. The function was partly based on the biology of the milk production process. This function, called the 3K-function, was able to predict milk production over an interval of 12h, so DMY was twice this estimate. No external information is needed to incorporate this function in methods to predict DMY. Application of the function on data from different milking systems showed a good fit. This function could be a universal tool to predict DMY for a variety of milking systems, and it seems especially useful for data from robotic milking systems. Further study is needed to evaluate the function under a wide range of circumstances, and to see how it can be incorporated in existing milk recording systems. A secondary objective of using the 3K-function was to compare how much DMY based on different milking systems differed from that based on a twice-a-day milking. Differences were consistent with findings in the literature. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Internet-based wide area measurement applications in deregulated power systems
NASA Astrophysics Data System (ADS)
Khatib, Abdel-Rahman Amin
Since the deregulation of power systems started in 1989 in the UK, many countries have been motivated to undergo deregulation. The United States started deregulation of the energy sector in California back in 1996. Since that time many other states have also started deregulation procedures in different utilities. Most of the deregulated market in the United States is now in the wholesale area, while the retail market is still undergoing changes. Deregulation has many impacts on power system network operation and control. The number of power transactions among utilities has increased, and many Independent Power Producers (IPPs) now have a rich market for competition, especially in green power. The Federal Energy Regulatory Commission (FERC) called upon utilities to develop the Regional Transmission Organization (RTO). The RTO, a step toward a national transmission grid, is an independent entity that will operate the transmission system in a large region. The main goal of forming RTOs is to increase the operating efficiency of the power network under the impact of the deregulated market. The objective of this work is to study Internet-based Wide Area Information Sharing (WAIS) applications in the deregulated power system. The study is a first step toward building a national transmission grid picture using information sharing among utilities. Two main topics are covered as applications of WAIS in the deregulated power system: state estimation and Total Transfer Capability (TTC) calculations. As a first step in building this national transmission grid picture, WAIS and the level of information sharing in state estimation calculations are discussed. WAIS impacts on TTC calculations are also covered. A new technique to update the TTC using online measurements, based on state estimation shared via WAIS, is presented.
Implementation and validation of an implant-based coordinate system for RSA migration calculation.
Laende, Elise K; Deluzio, Kevin J; Hennigar, Allan W; Dunbar, Michael J
2009-10-16
An in vitro radiostereometric analysis (RSA) phantom study of a total knee replacement was carried out to evaluate the effect of two new modifications to the conventional RSA procedure: (i) adding a landmark of the tibial component (the stem tip) as an implant marker and (ii) defining an implant-based coordinate system, constructed from implant landmarks, for the calculation of migration results. The motivations for these modifications were (i) to improve the representation of the implant by the markers, since including the stem tip marker increases the marker distribution, (ii) to recover clinical RSA study cases with insufficient numbers of markers visible in the implant polyethylene, and (iii) to eliminate errors in migration calculations due to misalignment of the anatomical axes with the RSA global coordinate system. The translational and rotational phantom studies showed no loss of accuracy with the two new measurement methods. The RSA system employing these methods has a precision of better than 0.05 mm for translations and 0.03 degrees for rotations, and an accuracy of 0.05 mm for translations and 0.15 degrees for rotations. These results indicate that the new methods, intended to improve the interpretability, relevance, and standardization of the results, do not compromise precision and accuracy, and are suitable for application to clinical data.
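The core of an implant-based migration calculation is recovering the rigid-body transform that carries one exam's marker coordinates onto another's. Below is a minimal numpy sketch of that step using the standard Kabsch/SVD method; the marker coordinates are illustrative, not the study's data:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid-body transform (rotation R, translation t)
    mapping marker positions P (N x 3) onto Q (N x 3), via the
    Kabsch/SVD method commonly used for migration analysis."""
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)                # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Qc - R @ Pc
    return R, t

# four markers "migrate" by a pure 1 mm translation along x
P = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.], [0., 0., 10.]])
Q = P + np.array([1.0, 0.0, 0.0])
R, t = rigid_transform(P, Q)
```

Expressing the recovered R and t in a coordinate system built from implant landmarks, rather than the RSA global frame, is what removes the axis-misalignment error the abstract describes.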
Activity-based differentiation of pathologists' workload in surgical pathology.
Meijer, G A; Oudejans, J J; Koevoets, J J M; Meijer, C J L M
2009-06-01
Adequate budget control in pathology practice requires accurate allocation of resources. Any changes in the types and numbers of specimens handled, or in the protocols used, will directly affect the pathologists' workload and consequently the allocation of resources. The aim of the present study was to develop a model for measuring pathologists' workload that can take such changes into account. The diagnostic process was analyzed and broken up into separate activities, and the time needed to perform these activities was measured. Based on linear regression analysis, the time needed for each activity was expressed as a function of the number of slides or blocks involved. The total pathologists' time required for a range of specimens was calculated based on standard protocols and validated by comparison with the actually measured workload. Cutting up, microscopic procedures, and dictating turned out to be highly correlated with the number of blocks and/or slides per specimen, and the calculated workload per type of specimen was significantly correlated with the actually measured workload. Modeling pathologists' workload with formulas that express workload per specimen type as a function of the number of blocks and slides provides the basis for a comprehensive, yet flexible, activity-based costing system for pathology.
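The activity-time model described above — time per activity as a linear function of slide count — can be sketched in a few lines. The timing values below are invented for illustration, not taken from the study:

```python
import numpy as np

# Hypothetical timing data: minutes of microscopy time per specimen
# versus number of slides (illustrative values only).
slides = np.array([1, 2, 3, 4, 6, 8, 10])
minutes = np.array([3.1, 5.0, 7.2, 9.1, 13.0, 17.2, 21.1])

# Fit time = a * slides + b, as in the activity-based model
a, b = np.polyfit(slides, minutes, 1)

def workload(n_slides):
    """Predicted pathologist time (minutes) for one activity on a specimen."""
    return a * n_slides + b
```

Summing such per-activity formulas over a specimen type's standard protocol gives the per-specimen workload the abstract validates against measured times.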
Adeniyi, D A; Wei, Z; Yang, Y
2018-01-30
A wealth of data is available within the health care system; however, effective analysis tools for exploring the hidden patterns in these datasets are lacking. To alleviate this limitation, this paper proposes a simple but promising hybrid predictive model that suitably combines the chi-square distance measurement with the case-based reasoning technique. The study presents the realization of an automated risk calculator and death prediction for some life-threatening ailments using a chi-square case-based reasoning (χ²-CBR) model. The proposed predictive engine reduces runtime and speeds up execution through the use of a critical χ² distribution value. This work also showcases the development of a novel feature selection method referred to as the frequent-item-based rule (FIBR) method, which is used to select the best features for the proposed χ²-CBR model at the preprocessing stage of the predictive procedure. The proposed risk calculator is implemented as an in-house developed PHP program, experimented with on a XAMPP/Apache HTTP server as the hosting server, and the data acquisition and case-base development are implemented using MySQL. Performance comparison between our system and the NBY, ED-KNN, ANN, SVM, Random Forest and traditional CBR techniques shows that the quality of predictions produced by our system outperforms the baseline methods studied. The results of our experiment show that the precision rate and predictive quality of our system are in most cases equal to or greater than 70%, and that the proposed system executes faster than the baseline methods studied. The proposed risk calculator is therefore capable of providing useful, consistent, fast, accurate and efficient risk-level prediction to both patients and physicians, online and in real time.
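As a sketch of the underlying retrieval step, the chi-square distance between a query case and each stored case can be computed and the nearest case returned. The toy case base and feature values below are hypothetical, not the paper's clinical data:

```python
def chi_square_distance(x, y, eps=1e-12):
    """Chi-square distance between two non-negative feature vectors,
    as used to compare a new case against stored cases in CBR."""
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(x, y))

def retrieve(case_base, query):
    """Return the stored (features, label) pair closest to the query."""
    return min(case_base, key=lambda c: chi_square_distance(c[0], query))

# toy case base: (feature vector, risk label) -- illustrative only
cases = [([0.9, 0.1, 0.2], "low"), ([0.2, 0.8, 0.9], "high")]
best = retrieve(cases, [0.25, 0.7, 0.85])
```

The paper's critical χ² value would additionally prune cases whose distance exceeds the threshold, which is where the claimed runtime savings come from.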
Response surface method in geotechnical/structural analysis, phase 1
NASA Astrophysics Data System (ADS)
Wong, F. S.
1981-02-01
In the response surface approach, an approximating function is fit to a long-running computer code based on a limited number of code calculations. The approximating function, called the response surface, is then used to replace the code in the subsequent repetitive computations required in a statistical analysis. The procedure of response surface development and the feasibility of the method are shown using a sample problem in slope stability, which is based on data from centrifuge experiments on model soil slopes and involves five random soil parameters. It is shown that a response surface can be constructed from as few as four code calculations and that the response surface is computationally extremely efficient compared to the code calculation. Potential applications of this research include probabilistic analysis of dynamic, complex, nonlinear soil/structure systems such as slope stability, liquefaction, and nuclear reactor safety.
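The idea can be illustrated in a few lines: run the expensive code at a handful of design points, fit a low-order polynomial response surface, and then evaluate the cheap surface instead of the code. The one-parameter "code" below is a stand-in, not the geotechnical model:

```python
import numpy as np

def expensive_code(x):
    """Stand-in for a long-running computer code (one random parameter)."""
    return 1.0 + 0.5 * x - 0.2 * x ** 2

# Run the "code" at only a few design points ...
xs = np.array([-1.0, 0.0, 1.0, 2.0])
ys = np.array([expensive_code(x) for x in xs])

# ... and fit a quadratic response surface to replace it in the
# repeated evaluations of a Monte Carlo / statistical analysis.
surface = np.poly1d(np.polyfit(xs, ys, 2))
```

With five random soil parameters the surface would be multivariate, but the workflow — few code runs, cheap surrogate, many statistical evaluations — is the same.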
NASA Technical Reports Server (NTRS)
Bebis, George (Inventor); Amayeh, Gholamreza (Inventor)
2015-01-01
Hand-based biometric analysis systems and techniques are described which provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, or translation of an input image. Additionally, the analysis re-uses commonly seen terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.
NASA Technical Reports Server (NTRS)
Bebis, George
2013-01-01
Hand-based biometric analysis systems and techniques provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, or translation of an input image. Additionally, the analysis re-uses commonly seen terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.
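The descriptors in both records are built from Zernike moments, whose radial part has a standard closed form. A small sketch of that radial polynomial follows (the full moment computation over an image segment is omitted):

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial part R_nm(rho) of the Zernike polynomial; these radial
    terms are the building blocks of Zernike moment descriptors."""
    m = abs(m)
    if (n - m) % 2:          # R_nm vanishes when n - m is odd
        return 0.0
    return sum(
        (-1) ** k * factorial(n - k)
        / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k))
        * rho ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )
```

The efficiency claim in the abstracts corresponds to caching terms like these factorial coefficients, which recur across many (n, m) orders.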
Jamema, S V; Upreti, R R; Sharma, S; Deshpande, D D
2008-09-01
The purpose of this work is to report the results of commissioning and to establish a quality assurance (QA) program for a commercial 3D treatment planning system (TPS), based on IAEA Technical Report Series 430. The Eclipse v7.3.10 TPS (Varian Medical Systems, Palo Alto, CA, USA) was commissioned for a Clinac 6EX (Varian Medical Systems) linear accelerator. CT images of a phantom with various known inhomogeneities were acquired, transferred to the TPS, and tested for various parameters related to patient data acquisition, anatomical modeling, plan evaluation, and dose calculation. Dosimetric parameters including open, asymmetric, and wedged fields, oblique incidence, buildup-region behavior, and SSD dependence were evaluated. Representative clinical cases were tested for MU calculation and point doses. The maximum variation between the measured and the known CT numbers was 20 +/- 11.7 HU (1 SD). The results of all non-dosimetric tests were within tolerance; however, expansion at sharp corners was found to be distorted. The accuracy of the DVH calculations depends on the grid size. TPS calculations of all the dosimetric parameters were in good agreement with the measured values; however, for asymmetric open and wedged fields, a few points were out of tolerance. Calculation with a smaller grid size showed better agreement in the buildup region. Independent tests of MU calculation showed a variation within +/-2% (relative to the planning system), while a variation of 3.0% was observed when the central axis was blocked. The test results were in agreement with the tolerances specified by IAEA TRS 430. A subset of the commissioning tests has been identified as baseline data for an ongoing QA program.
Networked event-triggered control: an introduction and research trends
NASA Astrophysics Data System (ADS)
Mahmoud, Magdi S.; Sabih, Muhammad
2014-11-01
A physical system can be studied as either a continuous-time or a discrete-time system depending on the control objectives. Discrete-time control systems can be further classified into two categories based on the sampling: (1) time-triggered control systems and (2) event-triggered control systems. Time-triggered systems sample states and calculate controls at every sampling instant in a periodic fashion, even when the states and the calculated control change little. A time-triggered system therefore expends unnecessary data transmission and computation effort, and is thus inefficient; for networked systems, the periodic transmission of measurement and control signals likewise causes unnecessary network traffic. Event-triggered systems, on the other hand, have the potential to reduce the communication burden in addition to reducing the computation of control signals. This paper provides an up-to-date survey of event-triggered methods for control systems and highlights potential research directions.
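The contrast between the two sampling schemes can be sketched with a scalar plant under a relative-error event trigger. All gains and the threshold below are assumed values for illustration, not from the survey:

```python
# Minimal sketch: scalar plant x' = a*x + b*u with u = -k * x_hat,
# where x_hat is only updated (i.e., "transmitted") when the trigger fires.
a, b, k, sigma, dt = 0.5, 1.0, 2.0, 0.05, 0.01

x, x_hat, updates = 1.0, 1.0, 0
for _ in range(2000):
    if abs(x - x_hat) > sigma * abs(x):   # event-trigger condition
        x_hat = x                          # transmit the current state
        updates += 1
    u = -k * x_hat                         # control uses last transmitted state
    x += dt * (a * x + b * u)              # Euler step of the plant
```

A time-triggered scheme would transmit at all 2000 sampling instants; the event trigger transmits only when the state has drifted beyond the threshold, which is the communication saving the abstract describes.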
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, L; Eldib, A; Li, J
Purpose: Uneven nose surfaces, air cavities underneath, and the use of bolus present complexity and dose uncertainty when a single electron energy beam is used to plan treatment of nose skin with a pencil-beam-based planning system. This work demonstrates more accurate dose calculation and more optimal planning using energy- and intensity-modulated electron radiotherapy (MERT) delivered with a pMLC. Methods: An in-house developed Monte Carlo (MC)-based dose calculation/optimization planning system was employed for treatment planning. Phase space data (6, 9, 12 and 15 MeV) were used as an input source for MC dose calculations for the linac. To reduce the scatter-caused penumbra, a short SSD (61 cm) was used. Our previous work demonstrates good agreement in percentage depth dose and off-axis dose between calculations and film measurements for various field sizes. A MERT plan was generated for treating the nose skin using a patient geometry, and a dose volume histogram (DVH) was obtained. The work also compares the 2D dose distributions of a clinically used conventional single-electron-energy plan and the MERT plan. Results: The MERT plan resulted in improved target dose coverage compared to the conventional plan, which demonstrated a target dose deficit at the field edge. The conventional plan showed higher normal tissue dose underneath the nose skin, while the MERT plan resulted in improved conformity and thus reduced normal tissue dose. Conclusion: This preliminary work illustrates that MC-based MERT planning is a promising technique for treating nose skin, not only providing more accurate dose calculation, but also offering improved target dose coverage and conformity. In addition, this technique may eliminate the necessity of bolus, which often produces dose delivery uncertainty due to air gaps that may exist between the bolus and the skin.
Code of Federal Regulations, 2014 CFR
2014-07-01
... for each data set that is collected during the initial performance test. A single composite value of... Multiple Zone Concentrations Calculations Procedure based on inlet and outlet concentrations (Column A of... composite value of Ks discussed in section III.C of this appendix. This value of Ks is calculated during the...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Özdemir, Semra Bayat; Demiralp, Metin
The determination of energy states is a highly studied issue in quantum mechanics. Energy states can be observed based on the dynamics of expectation values, but the conditions and calculations vary depending on the system under study. In this work, a symmetric exponential anharmonic oscillator is considered, and a recursive approximation method is developed to find its ground energy state. The use of majorant values facilitates the approximate calculation of the expectation values.
Mathematical Model of Heat Transfer in the Catalyst Granule with Point Reaction Centers
NASA Astrophysics Data System (ADS)
Derevich, I. V.; Fokina, A. Yu.
2018-01-01
This paper considers a catalyst granule with a porous ceramic chemically inert base and active point centers at which an exothermic synthesis reaction takes place. The rate of the chemical reaction depends on temperature by the Arrhenius law, and heat is removed from the catalyst granule surface to the synthesis products by heat transfer. Based on the idea of a self-consistent field, a closed system of equations is constructed for calculating the temperatures of the active centers. As an example, a catalyst granule of the Fischer-Tropsch synthesis with active metallic cobalt particles is considered. The stationary temperatures of the active centers are calculated by the time-dependent technique, solving a system of ordinary differential equations. The temperature distribution inside the granule has been found for local centers located on one diameter of the granule and for centers distributed randomly in the granule's volume. The existence of a critical temperature inside the reactor has been established, above which local centers become substantially superheated. The temperature distribution with local reaction centers differs qualitatively from the granule temperature calculated in the homogeneous approximation. The results of the calculations are given.
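The time-dependent technique for finding stationary center temperatures can be sketched for a toy version of the balance: Arrhenius heat release at each center against linear cooling to the gas. The parameter values below are illustrative, not the paper's Fischer-Tropsch data:

```python
import numpy as np

# Toy balance per center: dT/dt = q*exp(-Ta/T) - h*(T - T_inf),
# integrated in time until the stationary temperature is reached
# (the "time-dependent technique" for the steady state).
Ta, q, h, T_inf = 2000.0, 500.0, 1.0, 500.0   # assumed, illustrative units
T = np.array([500.0, 500.0])                   # initial center temperatures
dt = 0.01
for _ in range(20000):
    T += dt * (q * np.exp(-Ta / T) - h * (T - T_inf))
```

At the stationary point the Arrhenius release exactly balances the cooling term; with coupled centers (as in the paper) the cooling term would also include inter-center heat exchange through the granule.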
Richings, Gareth W; Habershon, Scott
2017-09-12
We describe a method for performing nuclear quantum dynamics calculations using standard, grid-based algorithms, including the multiconfiguration time-dependent Hartree (MCTDH) method, where the potential energy surface (PES) is calculated "on-the-fly". The method of Gaussian process regression (GPR) is used to construct a global representation of the PES using values of the energy at points distributed in molecular configuration space during the course of the wavepacket propagation. We demonstrate this direct dynamics approach for both an analytical PES function describing 3-dimensional proton transfer dynamics in malonaldehyde and for 2- and 6-dimensional quantum dynamics simulations of proton transfer in salicylaldimine. In the case of salicylaldimine we also perform calculations in which the PES is constructed using Hartree-Fock calculations through an interface to an ab initio electronic structure code. In all cases, the results of the quantum dynamics simulations are in excellent agreement with previous simulations of both systems yet do not require prior fitting of a PES at any stage. Our approach (implemented in a development version of the Quantics package) opens a route to performing accurate quantum dynamics simulations via wave function propagation of many-dimensional molecular systems in a direct and efficient manner.
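The PES-interpolation idea can be sketched with a minimal numpy Gaussian-process regressor: store energies at visited geometries, then predict the energy at new query points from an RBF-kernel fit. This is a sketch of the concept only, not the Quantics implementation:

```python
import numpy as np

def gpr_fit_predict(X, y, Xq, length=1.0, noise=1e-6):
    """Minimal Gaussian-process regression with an RBF kernel:
    interpolate energies y at training geometries X (n x d) to
    query geometries Xq."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = k(X, X) + noise * np.eye(len(X))   # jitter for numerical stability
    alpha = np.linalg.solve(K, y)
    return k(Xq, X) @ alpha

# 1-D toy "PES": sample a harmonic potential at a few geometries
X = np.linspace(-2, 2, 9)[:, None]
y = 0.5 * X[:, 0] ** 2
E = gpr_fit_predict(X, y, np.array([[0.5]]))
```

In the on-the-fly scheme, new ab initio points accumulated during the wavepacket propagation would simply be appended to X and y before refitting.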
NASA Astrophysics Data System (ADS)
Polichtchouk, Yuri; Ryukhko, Viatcheslav; Tokareva, Olga; Alexeeva, Mary
2002-02-01
The structure of a geoinformation modeling system for assessing the environmental impact of atmospheric pollution on the forest-swamp ecosystems of West Siberia is considered. A complex approach to the assessment of man-caused impact, combining sanitary-hygienic and landscape-geochemical approaches, is reported. Methodological problems in analyzing the impact of atmospheric pollution on vegetation biosystems using geoinformation systems and remote sensing data are addressed. The landscape structure of oil production territories in the southern part of West Siberia is determined by processing space images from the Resource-O satellite. Particularities of modeling the atmospheric pollution zones caused by gas flaring in oil field territories are considered; for instance, pollution zones were revealed by modeling contaminant dispersal in the atmosphere with a standard model. Polluted landscape areas are calculated as a function of oil production volume, and it is shown that the calculated data are well approximated by polynomial models.
NASA Astrophysics Data System (ADS)
Shabliy, L. S.; Malov, D. V.; Bratchinin, D. S.
2018-01-01
This article describes a technique for simulating valves of the pneumatic-hydraulic system of a liquid-propellant rocket engine (LPRE). The technique is based on computational fluid dynamics (CFD). To demonstrate its abilities, a simulation is performed of a differential valve used in the closed circuits of LPRE fuel-component supply pipes. The schematic and operation algorithm of this valve type are described in detail, as are the assumptions made in constructing the geometric model of the valve's hydraulic path. The calculation procedure for determining the valve's hydraulic characteristics is given, and certain hydraulic characteristics of the valve obtained from these calculations are presented. Some ways of using the described simulation technique to study the static and dynamic characteristics of elements of the pneumatic-hydraulic system of an LPRE are proposed.
A real-time MTFC algorithm of space remote-sensing camera based on FPGA
NASA Astrophysics Data System (ADS)
Zhao, Liting; Huang, Gang; Lin, Zhe
2018-01-01
A real-time MTFC algorithm for a space remote-sensing camera, based on an FPGA, was designed. The algorithm provides real-time image processing to enhance image clarity while the remote-sensing camera is running on-orbit. The image restoration algorithm adopts a modular design. The on-orbit MTF measurement module calculates the edge spread function, the line spread function, the ESF difference operation, the normalized MTF, and the MTFC parameters. The MTFC filtering module performs the image filtering and effectively suppresses noise. System Generator was used to design the image processing algorithms, simplifying the system design structure and the redesign process. The image gray gradient, dot sharpness, edge contrast, and mid-to-high frequencies were enhanced, while the image SNR after recovery decreased by less than 1 dB compared with the original image. The image restoration system can be widely used in various fields.
Research on fully distributed optical fiber sensing security system localization algorithm
NASA Astrophysics Data System (ADS)
Wu, Xu; Hou, Jiacheng; Liu, Kun; Liu, Tiegen
2013-12-01
A new fully distributed optical fiber sensing and location technology based on Mach-Zehnder interferometers is studied. In this security system, a new climbing-point locating algorithm based on the short-time average zero-crossing rate is presented. By calculating the zero-crossing rates of multiple grouped data separately, it not only exploits the advantages of frequency-analysis methods to determine the most effective data group more accurately, but also meets the requirements of a real-time monitoring system. Supplemented with a short-term energy calculation on the grouped signals, the most effective data group can be quickly picked out. Finally, the accurate location of the climbing point is achieved through a cross-correlation localization algorithm. The experimental results show that the proposed algorithm can accurately locate the climbing point while effectively filtering out interference noise from non-climbing behavior.
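The short-time average zero-crossing rate is straightforward to compute per frame, and a high-frequency disturbance shows up as a jump in the rate. The signal below is synthetic, with a vibration burst standing in for a climbing event:

```python
import numpy as np

def st_zcr(signal, frame_len):
    """Short-time average zero-crossing rate: fraction of sign changes
    within each non-overlapping frame of the signal."""
    n_frames = len(signal) // frame_len
    rates = []
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        crossings = np.sum(np.abs(np.diff(np.sign(frame))) > 0)
        rates.append(crossings / (frame_len - 1))
    return np.array(rates)

# high-frequency burst (a stand-in "climbing event") on a slow background
t = np.arange(4000)
sig = np.sin(2 * np.pi * t / 400.0)                   # slow background
sig[2000:2400] += np.sin(2 * np.pi * t[:400] / 8.0)   # vibration burst
rates = st_zcr(sig, 200)                              # burst lands in frames 10-11
```

In the full system, the frame with the highest rate selects the most effective data group, after which cross-correlation between the two interferometer outputs yields the position along the fiber.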
Theoretical Studies of Spectroscopic Line Mixing in Remote Sensing Applications
NASA Astrophysics Data System (ADS)
Ma, Q.
2015-12-01
The phenomenon of collisional transfer of intensity due to line mixing has increasing importance for atmospheric monitoring. From a theoretical point of view, all relevant information about the collisional processes is contained in the relaxation matrix, where the diagonal elements give half-widths and shifts and the off-diagonal elements correspond to line interferences. For simple systems such as diatom-atom or diatom-diatom pairs, accurate fully quantum calculations based on interaction potentials are feasible; for more complex systems, however, fully quantum calculations become unrealistic. On the other hand, the semi-classical Robert-Bonamy (RB) formalism, which has been widely used to calculate half-widths and shifts for decades, fails in calculating the off-diagonal matrix elements. As a result, in order to simulate atmospheric spectra where the effects of line mixing are important, semi-empirical fitting or scaling laws such as the ECS and IOS models are commonly used. Recently, while scrutinizing the development of the RB formalism, we found that its authors applied the isolated line approximation in evaluating matrix elements of the Liouville scattering operator given in exponential form. The criterion for this assumption is so stringent that it is not valid for many systems of interest in atmospheric applications. Furthermore, it is this assumption that precludes calculating the whole relaxation matrix at all. By eliminating this unjustified application and accurately evaluating matrix elements of the exponential operators, we have developed a more capable formalism. With this new formalism, we are now able not only to reduce uncertainties in calculated half-widths and shifts, but also to remove a once insurmountable obstacle to calculating the whole relaxation matrix.
This implies that we can address the line mixing with the semi-classical theory based on interaction potentials between molecular absorber and molecular perturber. We have applied this formalism to address the line mixing for Raman and infrared spectra of molecules such as N2, C2H2, CO2, NH3, and H2O. By carrying out rigorous calculations, our calculated relaxation matrices are in good agreement with both experimental data and results derived from the ECS model.
Comparing Ultraviolet Spectra against Calculations: Year 2 Results
NASA Technical Reports Server (NTRS)
Peterson, Ruth C.
2004-01-01
The five-year goal of this effort is to calculate high-fidelity mid-UV spectra for individual stars and stellar systems over a wide range of ages, abundances, and abundance ratios. In this second year, the comparison of our calculations against observed high-resolution mid-UV spectra was extended to stars as metal-rich as the Sun, and to hotter and cooler stars, further improving the list of atomic line parameters used in the calculations. We also published the application of our calculations, based on the earlier list of line parameters, to the observed mid-UV and optical spectra of a mildly metal-poor globular cluster in the nearby Andromeda galaxy, Messier 31.
Xie, Ping; Zhao, Jiang Yan; Wu, Zi Yi; Sang, Yan Fang; Chen, Jie; Li, Bin Bin; Gu, Hai Ting
2018-04-01
The analysis of inconsistent hydrological series is one of the major problems that should be solved for engineering hydrological calculation in a changing environment. In this study, the differences between non-consistency and non-stationarity were analyzed from the perspective of the composition of hydrological series. The inconsistent hydrological phenomena were generalized into hydrological processes with inheritance, variability and evolution characteristics or regulations. Furthermore, the hydrological genes were identified following the theory of biological genes, while their inheritance bases and variability bases were determined based on the composition of hydrological series under different time scales. To identify and test the components of hydrological genes, we constructed a diagnosis system of hydrological genes. With the P-3 distribution as an example, we described the process of construction and expression of the moment genes to illustrate the inheritance, variability and evolution principles of hydrological genes. With the annual minimum 1-month runoff series of Yunjinghong station in the Lancangjiang River basin as an example, we verified the feasibility and practicability of hydrological gene theory for the calculation of inconsistent hydrological frequency. The results showed that the method could be used to reveal the evolution of inconsistent hydrological series. It therefore provides a new research pathway for engineering hydrological calculation in a changing environment and an essential reference for the assessment of water security.
Jia, Yun-Fang; Gao, Chun-Ying; He, Jia; Feng, Dao-Fu; Xing, Ke-Li; Wu, Ming; Liu, Yang; Cai, Wen-Sheng; Feng, Xi-Zeng
2012-08-21
Multi-biomarker assays are of great significance in clinical diagnosis. A label-free system for the parallel detection of multiple tumor markers was proposed based on a light-addressable potentiometric sensor (LAPS). Arrayed LAPS chips with the basic structure Si(3)N(4)-SiO(2)-Si were prepared on silicon wafers, and a label-free parallel detection system was developed with user-friendly control interfaces. An l-3,4-dihydroxyphenylalanine (L-Dopa) hydrochloric solution was used to initiate the LAPS surface, and the L-Dopa immobilization state was investigated by theoretical calculation. The L-Dopa-initiated LAPS chip was biofunctionalized with the antigens and antibodies of four tumor markers: α-fetoprotein (AFP), carcinoembryonic antigen (CEA), cancer antigen 19-9 (CA19-9), and ferritin. Unlabeled antibodies and antigens of these four biomarkers were then detected by the proposed system. The physical and measurement principles of the system are described, and a qualitative understanding of the experimental data is given. The measured response ranges were compared with the clinical cutoff values, and sensitivities were calculated with OriginLab. The results indicate that this bio-initiated, LAPS-based, label-free detection system may offer a new choice for realizing unlabeled multi-tumor-marker clinical assays.
Adsorption of methanol molecule on graphene: Experimental results and first-principles calculations
NASA Astrophysics Data System (ADS)
Zhao, X. W.; Tian, Y. L.; Yue, W. W.; Chen, M. N.; Hu, G. C.; Ren, J. F.; Yuan, X. B.
2018-04-01
Adsorption properties of a methanol molecule on a graphene surface are studied both theoretically and experimentally. The adsorption geometries, adsorption energies, band structures, densities of states, and effective masses are obtained by means of first-principles calculations. It is found that the electronic characteristics and conductivity of graphene are sensitive to methanol adsorption: after adsorption of a methanol molecule, a bandgap appears. As the adsorption distance increases, the bandgap, adsorption energy, and effective mass of the adsorption system decrease, and hence the resistivity of the system decreases gradually; these results are consistent with the experiments. All these calculations and experiments indicate that graphene-based sensors have a wide range of applications in detecting particular molecules.
Zhekova, Hristina R; Seth, Michael; Ziegler, Tom
2011-11-14
We have recently developed a methodology for the calculation of exchange coupling constants J in weakly interacting polynuclear metal clusters. The method is based on unrestricted and restricted second-order spin-flip constricted variational density functional theory (SF-CV(2)-DFT) and is here applied to eight binuclear copper systems. The SF-CV(2)-DFT results are compared with experiment and with results obtained from other DFT and wave-function-based methods. Restricted SF-CV(2)-DFT with the BH&HLYP functional consistently yields J values in excellent agreement with experiment. The results acquired from this scheme are comparable in quality to those obtained by accurate multi-reference wave function methodologies such as difference dedicated configuration interaction and the complete active space with second-order perturbation theory. © 2011 American Institute of Physics
Locating binding poses in protein-ligand systems using reconnaissance metadynamics
Söderhjelm, Pär; Tribello, Gareth A.; Parrinello, Michele
2012-01-01
A molecular dynamics-based protocol is proposed for finding and scoring protein-ligand binding poses. This protocol uses the recently developed reconnaissance metadynamics method, which employs a self-learning algorithm to construct a bias that pushes the system away from the kinetic traps where it would otherwise remain. The exploration of phase space with this algorithm is shown to be roughly six to eight times faster than unbiased molecular dynamics and is only limited by the time taken to diffuse about the surface of the protein. We apply this method to the well-studied trypsin–benzamidine system and show that we are able to refind all the poses obtained from a reference EADock blind docking calculation. These poses can be scored based on the length of time the system remains trapped in the pose. Alternatively, one can perform dimensionality reduction on the output trajectory and obtain a map of phase space that can be used in more expensive free-energy calculations. PMID:22440749
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spackman, Peter R.; Karton, Amir, E-mail: amir.karton@uwa.edu.au
Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol⁻¹. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol⁻¹.
NASA Astrophysics Data System (ADS)
Spackman, Peter R.; Karton, Amir
2015-05-01
Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol⁻¹. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol⁻¹.
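A minimal sketch of the two-point A + B/L^α extrapolation with DZ (L = 2) and TZ (L = 3) energies. The correlation energies below are hypothetical, and α = 3 is a common default, not necessarily the exponent used in the paper:

```python
def cbs_two_point(e_dz, e_tz, l_dz=2, l_tz=3, alpha=3.0):
    """Two-point extrapolation E(L) = E_CBS + B / L**alpha.

    Solving the pair of equations at L = l_dz and L = l_tz for E_CBS gives
    E_CBS = (E_TZ * l_tz**alpha - E_DZ * l_dz**alpha) / (l_tz**alpha - l_dz**alpha).
    """
    return (e_tz * l_tz**alpha - e_dz * l_dz**alpha) / (l_tz**alpha - l_dz**alpha)

# Hypothetical CCSD correlation energies (hartree) in DZ and TZ basis sets
e_dz, e_tz = -0.200, -0.250
e_cbs = cbs_two_point(e_dz, e_tz)   # lies below the TZ value, as expected
```

The system-dependent scheme described in the abstract would replace the fixed `alpha` with an exponent fitted from cheaper MP2 energies for the same molecule.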
Accuracy and coverage of the modernized Polish Maritime differential GPS system
NASA Astrophysics Data System (ADS)
Specht, Cezary
2011-01-01
The DGPS navigation service augments the NAVSTAR Global Positioning System by providing localized pseudorange corrections and ancillary information broadcast from selected marine reference stations. The position and integrity information of the DGPS service satisfies the requirements of coastal navigation and hydrographic surveys. The Polish Maritime DGPS system was established in 1994 and modernized in 2009 to meet the requirements set out in the IMO resolution for a future GNSS while preserving backward signal compatibility of user equipment. After installation of the new L1/L2 reference equipment was finalized, performance tests were carried out. The paper presents results of coverage modeling and of an accuracy measurement campaign based on long-term signal analyses of the DGPS reference station Rozewie, performed over 26 days in July 2009. The final results made it possible to verify the coverage area of the differential signal from the reference station and to calculate the repeatable and absolute accuracy of the system after the technical modernization. The obtained field-strength coverage and position statistics (215,000 fixes) were compared with past measurements performed in 2002 (coverage) and 2005 (accuracy), when the previous system infrastructure was in operation. So far, no such campaigns have been performed for differential Galileo. However, since its signals, signal processing, and receiver techniques are comparable to those known from DGPS, since all satellite differential GNSS systems use the same transmission standard (RTCM), and since maritime DGPS radiobeacons are standardized in all radio communication aspects (frequency, binary rate, modulation), the accuracy of differential Galileo can be expected to be similar to that of DGPS. The coverage of the reference station was calculated with unique software that computes the signal strength level from transmitter parameters or from a field signal-strength measurement campaign carried out at representative points. The software is based on a Baltic Sea vector map, ground electric parameters, and a model of the atmospheric noise level in the transmission band.
NORTH PORTAL-HOT WATER CIRCULATION PUMP CALCULATION-SHOP BUILDING #5006
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. Blackstone
1996-01-25
The purpose of this design analysis and calculation is to size a circulating pump for the service hot water system in the Shop Building 5006, in accordance with the Uniform Plumbing Code (Section 4.4.1) and U.S. Department of Energy Order 6430.1A-1540 (Section 4.4.2). The method used for the calculation is based on Reference 5.2. This consists of determining the total heat transfer from the service hot water system piping to the surrounding environment. The heat transfer is then used to define the total pumping capacity based on a given temperature change in the circulating hot water as it flows through the closed loop piping system. The total pumping capacity is used to select a pump model from manufacturer's literature. This establishes the head generation for that capacity and particular pump model. The total length of all hot water supply and return piping including fittings is then estimated from the plumbing drawings, which defines the pipe friction losses that must fit within the available pump head. Several iterations may be required before a pump can be selected that satisfies the head-capacity requirements.
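The capacity step of this calculation reduces to a water-side energy balance. A minimal sketch in US customary units, with hypothetical heat loss and allowable temperature drop (the actual values would come from the piping takeoff in the referenced calculation):

```python
def circulating_flow_gpm(heat_loss_btu_per_h, delta_t_f):
    """Required circulation rate (gal/min) so that the loop water cools by
    no more than delta_t_f (deg F) while losing heat_loss_btu_per_h.

    The constant 500 is approximately 8.33 lb/gal * 60 min/h * 1 Btu/(lb.F)
    for water, so that Q [Btu/h] = 500 * GPM * dT [F].
    """
    return heat_loss_btu_per_h / (500.0 * delta_t_f)

# Hypothetical: 20,000 Btu/h piping loss, 10 F allowable temperature drop
gpm = circulating_flow_gpm(20000, 10)   # -> 4.0 gal/min
```

The resulting flow rate, together with the estimated pipe friction losses, would then be checked against a manufacturer's pump curve, iterating as the abstract describes.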
Arbib, Zouhayr; de Godos Crespo, Ignacio; Corona, Enrique Lara; Rogalla, Frank
2017-06-01
Microalgae culture in high rate algae ponds (HRAP) is an environmentally friendly technology for wastewater treatment. However, for the implementation of these systems, a better understanding of the oxygenation potential and the influence of climate conditions is required. In this work, the rates of oxygen production, consumption, and exchange with the atmosphere were calculated under varying conditions of solar irradiance and dilution rate during six months of operation in a real-scale unit. This analysis made it possible to determine the biological response of these dynamic systems. The measured rates of oxygen consumption were considerably higher than the values calculated from the organic loading rate. The response to light intensity, in terms of oxygen production in the bioreactor, was described with one of the models proposed for microalgae culture at dense concentrations. This model is based on the availability of light inside the culture and the specific response of the microalgae to this parameter. The specific response to solar radiation intensity showed reasonable stability in spite of fluctuations due to meteorological conditions. The methodology developed is a useful tool for optimization and prediction of the performance of these systems.
The forward modelling and analysis of magnetic field on the East Asia area using tesseroids
NASA Astrophysics Data System (ADS)
Chen, Z.; Meng, X.; Xu, G.
2017-12-01
With the progress of airborne and satellite magnetic surveys, high-resolution magnetic data can be measured at different scales. In order to test and improve the accuracy of existing crustal models, forward modeling is usually used to simulate the magnetic field of the lithosphere. Traditional forward models of the magnetic field are based on the Cartesian coordinate system and are typically used to calculate the field over local, small areas. However, the Cartesian coordinate system is not an ideal choice for calculating the magnetic field of a global or continental area at satellite altitude, where the Earth's curvature cannot be ignored. Spherical prism elements (tesseroids) can be used as model elements in the spherical coordinate system to solve this problem. Building on the principles of this forward method, we focus on the selection of the data source and the mechanism of adaptive integration. We then calculate magnetic anomaly data for the East Asia area based on the CRUST1.0 model. The results present the crustal susceptibility distribution, which is consistent with the basic tectonic features of the study area.
NASA Technical Reports Server (NTRS)
Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.
1994-01-01
Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. These data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water which simulates zero gravity via neutral buoyancy. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using a video vault underwater system. Recorded images were played back on a VCR, and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.
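The error metric described, distance from known to digitized coordinates, can be sketched as follows. The grid points and digitized coordinates below are hypothetical, and the percentage is taken relative to an assumed 10 cm grid spacing:

```python
import math

def point_errors(known, measured):
    """Euclidean distance between each known grid point and its digitized
    (calculated) coordinates."""
    return [math.dist(k, m) for k, m in zip(known, measured)]

# Hypothetical 3-point calibration grid (units: cm)
known    = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)]
measured = [(0.1, 0.0), (10.3, 0.1), (10.0, 10.4)]

errs = point_errors(known, measured)
worst_pct = 100.0 * max(errs) / 10.0   # worst error relative to grid spacing
```

Applied to the full grid, the largest such percentage corresponds to the "as high as 8 percent" figure reported in the abstract.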
Isospin Conservation in Neutron Rich Systems of Heavy Nuclei
NASA Astrophysics Data System (ADS)
Jain, Ashok Kumar; Garg, Swati
2018-05-01
It is generally believed that isospin diminishes in importance toward the heavy-mass region due to isospin mixing caused by the growing Coulomb forces. However, it was realized quite early that isospin could become an important and useful quantum number for all nuclei, including heavy nuclei, due to the neutron richness of these systems [1]. Lane and Soper [2] also showed in a theoretical calculation that isospin indeed remains quite good in heavy, neutron-rich systems. In this paper, we present isospin-based calculations [3, 4] for the fission fragment distributions obtained from heavy-ion fusion-fission reactions. We discuss in detail the procedure adopted to assign the isospin values and the role of neutron multiplicity data in obtaining the total fission fragment distributions. We show that the observed fragment distributions can be reproduced reasonably well by calculations based on the conservation of isospin. This is direct experimental evidence of the validity of isospin in heavy nuclei, which arises largely from the neutron-rich nature of heavy nuclei and their fragments. This result may eventually become useful for theories of nuclear fission and for other practical applications.
First-principles calculations of the interaction between hydrogen and 3d alloying atom in nickel
NASA Astrophysics Data System (ADS)
Liu, Wenguan; Qian, Yuan; Zhang, Dongxun; Liu, Wei; Han, Han
2015-10-01
Knowledge of the behavior of hydrogen (H) in Ni-based alloys is essential for predicting tritium behavior in the Molten Salt Reactor. First-principles calculations were performed to investigate the interaction between H and 3d transition metal (TM) alloying atoms in a Ni-based alloy. H energetically prefers the octahedral interstitial site to the tetrahedral interstitial site. Most of the 3d TM elements (except Zn) attract H. The attraction to H in the Ni-TM-H system can be attributed mainly to differences in electronegativity: with their larger electronegativity, H and Ni gain electrons from the other TM elements, resulting in enhanced Ni-H bonds, which are the source of the attraction to H in the Ni-TM-H system. The distinctly covalent Cr-H and Co-H bonds also contribute to the attraction to H. On the other hand, the repulsion of H in the Ni-Zn-H system is due to the stable electronic configuration of Zn. We mainly utilize results calculated in a 32-atom supercell, which corresponds to a relatively high concentration of hydrogen. Our results are in good agreement with the experimental ones.
Lin, Hai; Zhao, Yan; Tishchenko, Oksana; Truhlar, Donald G
2006-09-01
The multiconfiguration molecular mechanics (MCMM) method is a general algorithm for generating potential energy surfaces for chemical reactions by fitting high-level electronic structure data with the help of molecular mechanical (MM) potentials. It was previously developed as an extension of standard MM to reactive systems by inclusion of multidimensional resonance interactions between MM configurations corresponding to specific valence bonding patterns, with the resonance matrix element obtained from quantum mechanical (QM) electronic structure calculations. In particular, the resonance matrix element is obtained by multidimensional interpolation employing a finite number of geometries at which electronic-structure calculations of the energy, gradient, and Hessian are carried out. In this paper, we present a strategy for combining MCMM with hybrid quantum mechanical molecular mechanical (QM/MM) methods. In the new scheme, electronic-structure information for obtaining the resonance integral is obtained by means of hybrid QM/MM calculations instead of fully QM calculations. As such, the new strategy can be applied to the studies of very large reactive systems. The new MCMM scheme is tested for two hydrogen-transfer reactions. Very encouraging convergence is obtained for rate constants including tunneling, suggesting that the new MCMM method, called QM/MM-MCMM, is a very general, stable, and efficient procedure for generating potential energy surfaces for large reactive systems. The results are found to converge well with respect to the number of Hessians. The results are also compared to calculations in which the resonance integral data are obtained by pure QM, and this illustrates the sensitivity of reaction rate calculations to the treatment of the QM-MM border. For the smaller of the two systems, comparison is also made to direct dynamics calculations in which the potential energies are computed quantum mechanically on the fly.
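The resonance construction at the heart of MCMM can be sketched, for a single geometry, as the lower eigenvalue of a 2x2 matrix of MM configuration energies coupled by the resonance integral. The diabatic energies and coupling below are hypothetical illustration values, not data from the paper:

```python
import math

def mcmm_ground_energy(v11, v22, v12):
    """Lower eigenvalue of the 2x2 MCMM Hamiltonian [[v11, v12], [v12, v22]]:
    the adiabatic surface obtained by mixing two MM valence-bond
    configurations (energies v11, v22) through the resonance integral v12.
    """
    mean = 0.5 * (v11 + v22)
    gap = 0.5 * (v11 - v22)
    return mean - math.sqrt(gap * gap + v12 * v12)

# Hypothetical diabatic energies (kcal/mol) near a transition state, where
# both valence bonding patterns are degenerate and the coupling is largest
e = mcmm_ground_energy(10.0, 10.0, 4.0)   # -> 6.0: resonance lowers the barrier
```

In the full method, `v12` is not a constant but is interpolated over geometry from a modest number of electronic-structure (here, QM/MM) energy, gradient, and Hessian evaluations.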
Kimura, Koji; Sawa, Akihiro; Akagi, Shinji; Kihira, Kenji
2007-06-01
We have developed an original system to conduct surgical site infection (SSI) surveillance. This system accumulates SSI surveillance information based on the National Nosocomial Infections Surveillance (NNIS) System and the Japanese Nosocomial Infections Surveillance (JNIS) System. The features of this system are as follows: easy data input, high generality, data accuracy, prompt calculation of the SSI rate by operative procedure and risk index category (RIC) with comparison against the current NNIS SSI rate, and electronic export of the SSI rates and accumulated data. Using this system, we monitored 798 patients in 24 operative procedure categories in the Digestive Organs Surgery Department of Mazda Hospital, Mazda Motor Corporation, from January 2004 through December 2005. The total number and rate of SSI were 47 and 5.89%, respectively. The SSI rates of 777 patients were calculated based on 15 operative procedure categories and RICs. The highest SSI rate was observed in rectum surgery of RIC 1 (30%), followed by colon surgery of RIC 3 (28.57%). About 30% of the isolated infecting bacteria were Enterococcus faecalis, Staphylococcus aureus, Klebsiella pneumoniae, Pseudomonas aeruginosa, and Escherichia coli. Using quantification theory type 2, the American Society of Anesthesiologists score (4.531), volume of hemorrhage during the operation (3.075), wound classification (1.76), operation time (1.352), and history of diabetes (0.989) ranked highest as factors for SSI. We therefore consider this system a useful tool in safety control for operative procedures.
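The stratified rate itself is a simple proportion per procedure/RIC stratum. A minimal sketch that reproduces the overall figure quoted in the abstract:

```python
def ssi_rate(infections, operations):
    """SSI rate (%) for one operative-procedure / risk-index stratum."""
    return 100.0 * infections / operations

# Overall figure quoted in the abstract: 47 SSIs among 798 monitored patients
overall = ssi_rate(47, 798)   # close to the reported 5.89 %
```

A surveillance system such as the one described would compute this per stratum and compare each value against the corresponding NNIS benchmark rate.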
Mermelstein, Daniel J; Lin, Charles; Nelson, Gard; Kretsch, Rachael; McCammon, J Andrew; Walker, Ross C
2018-07-15
Alchemical free energy (AFE) calculations based on molecular dynamics (MD) simulations are key tools in both improving our understanding of a wide variety of biological processes and accelerating the design and optimization of therapeutics for numerous diseases. Computing power and theory have, however, long been insufficient to enable AFE calculations to be routinely applied in early stage drug discovery. One of the major difficulties in performing AFE calculations is the length of time required for calculations to converge to an ensemble average. CPU implementations of MD-based free energy algorithms can effectively only reach tens of nanoseconds per day for systems on the order of 50,000 atoms, even running on massively parallel supercomputers. Therefore, converged free energy calculations on large numbers of potential lead compounds are often untenable, preventing researchers from gaining crucial insight into molecular recognition, potential druggability and other crucial areas of interest. Graphics Processing Units (GPUs) can help address this. We present here a seamless GPU implementation, within the PMEMD module of the AMBER molecular dynamics package, of thermodynamic integration (TI) capable of reaching speeds of >140 ns/day for a 44,907-atom system, with accuracy equivalent to the existing CPU implementation in AMBER. The implementation described here is currently part of the AMBER 18 beta code and will be an integral part of the upcoming version 18 release of AMBER. © 2018 Wiley Periodicals, Inc.
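Once the MD sampling is done, the TI estimate itself reduces to a quadrature of ensemble-averaged dV/dλ values over the alchemical path. A minimal sketch with hypothetical per-window averages (AMBER's actual post-processing also handles softcore terms and error estimation):

```python
def ti_free_energy(lambdas, dvdl_means):
    """Thermodynamic integration: trapezoidal quadrature of the
    ensemble-averaged dV/dlambda over the alchemical coupling parameter
    lambda in [0, 1], giving the free-energy difference between end states.
    """
    dg = 0.0
    for i in range(len(lambdas) - 1):
        dg += 0.5 * (dvdl_means[i] + dvdl_means[i + 1]) * (lambdas[i + 1] - lambdas[i])
    return dg

# Hypothetical <dV/dlambda> window averages (kcal/mol) from independent runs
lam  = [0.0, 0.25, 0.5, 0.75, 1.0]
dvdl = [12.0, 8.0, 5.0, 3.0, 2.0]
delta_g = ti_free_energy(lam, dvdl)
```

The convergence problem the abstract describes lives entirely in the `dvdl` averages: each must be sampled long enough to be stationary, which is what the GPU speedup makes practical.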
NASA Astrophysics Data System (ADS)
Marjani, Azam
2016-07-01
For the purification and separation of biomolecules and cell particles in biological engineering, aqueous two-phase systems (ATPS) are, besides chromatography (the most widely applied process), among the most favorable separation processes and are worth investigating thermodynamically. In recent years, thermodynamic calculation of ATPS properties has attracted much attention due to their great applications in chemical industries such as separation processes. These phase calculations of ATPS are inherently complex due to the presence of ions and polymers in aqueous solution. In this work, for target ternary systems of polyethylene glycol (PEG4000)-salt-water, a thermodynamic investigation of the constituent systems with three salts (NaCl, KCl, and LiCl) has been carried out, as PEG is the most favorable polymer in ATPS. The modified perturbed hard-sphere-chain (PHSC) equation of state (EOS), the extended Debye-Hückel model, and the Pitzer model were employed to calculate activity coefficients for the considered systems. Four additional statistical parameters were considered to ensure the consistency of the correlations and were introduced as objective functions in a particle swarm optimization algorithm. The results showed good agreement with the available experimental data, and the order of recommendation of the studied models is PHSC EOS > extended Debye-Hückel > Pitzer. The concluding remark is that all the employed models are reliable in such calculations and can be used for thermodynamic correlations/predictions; however, by using an ion-based parameter calculation method, the PHSC EOS offers both reliability and universality of application.
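As a minimal sketch of one of the models compared above, the extended Debye-Hückel expression for a single-ion activity coefficient can be evaluated directly. The 25 °C aqueous constants and the ion-size parameter below are standard textbook values used for illustration, not parameters fitted in the paper:

```python
import math

def log10_gamma_edh(z, ionic_strength, a_angstrom, A=0.509, B=0.328):
    """Extended Debye-Hueckel model:
        log10(gamma) = -A * z**2 * sqrt(I) / (1 + B * a * sqrt(I))
    with A, B the 25 C aqueous constants and a the ion-size parameter (A)."""
    s = math.sqrt(ionic_strength)
    return -A * z * z * s / (1.0 + B * a_angstrom * s)

# Hypothetical case: a singly charged ion with a ~ 4 A at I = 0.1 mol/kg
g = 10 ** log10_gamma_edh(1, 0.1, 4.0)   # activity coefficient somewhat below 1
```

The PHSC EOS and Pitzer models the paper prefers add composition-dependent terms that matter at the much higher ionic strengths and polymer concentrations found in real ATPS.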
Roberts, L.N.; Biewick, L.R.
1999-01-01
This report documents a comparison of two methods of resource calculation that are being used in the National Coal Resource Assessment project of the U.S. Geological Survey (USGS). Tewalt (1998) discusses the history of using computer software packages such as GARNET (Graphic Analysis of Resources using Numerical Evaluation Techniques), GRASS (Geographic Resource Analysis Support System), and the vector-based geographic information system (GIS) ARC/INFO (ESRI, 1998) to calculate coal resources within the USGS. The study discussed here compares resource calculations using ARC/INFO (ESRI, 1998) and EarthVision (EV) (Dynamic Graphics, Inc., 1997) for the coal-bearing John Henry Member of the Straight Cliffs Formation of Late Cretaceous age in the Kaiparowits Plateau of southern Utah. Coal resource estimates in the Kaiparowits Plateau using ARC/INFO are reported in Hettinger and others (1996).
NASA Astrophysics Data System (ADS)
Masrour, R.; Hlil, E. K.
2016-08-01
Self-consistent ab initio calculations based on density-functional theory, using both the full-potential linearized augmented plane wave and the Korringa-Kohn-Rostoker coherent-potential-approximation methods, are performed to investigate the electronic and magnetic properties of the Ga1-xMnxN system. Magnetic moments, considered to lie along the (001) axes, are computed. The data obtained from the ab initio calculations are used as input for high-temperature series expansion (HTSE) calculations to compute other magnetic parameters, such as the magnetic phase diagram and the critical exponent. Increasing the dilution x in this system has made it possible to verify a series of HTSE predictions on the possibility of ferromagnetism in dilute magnetic insulators and to demonstrate that the interaction changes from antiferromagnetic to ferromagnetic, passing through the spin-glass phase.
An On-Line Nutrition Information System for the Clinical Dietitian
Petot, Grace J.; Houser, Harold B.; Uhrich, Roberta V.
1980-01-01
A university-based computerized nutrient database has been integrated into an on-line nutrition information system in a large acute-care hospital. Key elements described in the design and installation of the system are the addition of hospital menu items to the existing nutrient database, the creation of a unique recipe file in the computer, production of a customized menu/nutrient handbook, preparation of forms, and establishment of output formats. Standardization of nutrient calculations in the clinical and food production areas, the variety and purposes of the format options, the advantages of timesharing, and plans for expansion of the system are discussed.
Study of fuel cell on-site, integrated energy systems in residential/commercial applications
NASA Technical Reports Server (NTRS)
Wakefield, R. A.; Karamchetty, S.; Rand, R. H.; Ku, W. S.; Tekumalla, V.
1980-01-01
Three building applications were selected for a detailed study: a low-rise apartment building, a retail store, and a hospital. Building design data were then specified for each application, based on the design and construction of typical, actual buildings. Finally, a computerized building loads analysis program was used to estimate hourly end-use load profiles for each building. Conventional and fuel cell based energy systems were designed and simulated for each building in each location. Based on the results of a computer simulation of each energy system, levelized annual costs and annual energy consumptions were calculated for all systems.
Electronic properties of a molecular system with Platinum
NASA Astrophysics Data System (ADS)
Ojeda, J. H.; Medina, F. G.; Becerra-Alonso, David
2017-10-01
The electronic properties of a finite homogeneous molecule, trans-platinum-linked oligo(tetraethenylethenes), are studied. This system is composed of units such as benzene rings, platinum, phosphorus, and sulfur. The mechanism for studying electron transport through this system is based on placing the molecule between metal contacts to control the current through the molecular system. We study this molecule within the tight-binding approach, calculating the transport properties with the Landauer-Büttiker formalism and the Fisher-Lee relation, based on a semi-analytic Green's function method within a real-space renormalization approach. Our results show good agreement with experimental measurements.
Expert system for the design of heating, ventilating, and air-conditioning systems. Master's thesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Camejo, P.J.
1989-12-01
Expert systems are computer programs that seek to mimic human reasoning. An expert system shell, a software program commonly used for developing expert systems in a relatively short time, was used to develop a prototypical expert system for the design of heating, ventilating, and air-conditioning (HVAC) systems in buildings. Because HVAC design involves several related knowledge domains, developing an expert system for HVAC design requires the integration of several smaller expert systems known as knowledge bases. A menu program and several auxiliary programs for gathering data, completing calculations, printing project reports, and passing data between the knowledge bases are needed; these have been developed to join the separate knowledge bases into one simple-to-use program unit.
Squeezed light from multi-level closed-cycling atomic systems
NASA Technical Reports Server (NTRS)
Xiao, Min; Zhu, Yi-Fu
1994-01-01
Amplitude squeezing is calculated for multi-level closed-cycling atomic systems. These systems can lase without atomic population inversion in any atomic basis. Maximum squeezing is obtained for parameters in the region of lasing without inversion. A practical four-level system and an ideal three-level system are presented. The latter system is analyzed in some detail and the mechanism generating amplitude squeezing is discussed.
A system for 3D representation of burns and calculation of burnt skin area.
Prieto, María Felicidad; Acha, Begoña; Gómez-Cía, Tomás; Fondón, Irene; Serrano, Carmen
2011-11-01
In this paper, a computer-based system for burnt surface area estimation (BAI) is presented. First, a 3D model of a patient, adapted to age, weight, gender, and constitution, is created. On this 3D model, physicians delineate both the burns and the burn depth, allowing the burnt surface area to be calculated automatically by the system. Each patient's model, as well as photographs and burn area estimates, can be stored, so these data can be included in the patient's clinical records for further review. Validation of this system was performed. In a first experiment, artificial paper patches of known size were attached to different parts of the body in 37 volunteers. A panel of 5 experts estimated the extent of the patches using the Rule of Nines, while our system estimated the area of the "artificial burn". Student's t-test was applied to the collected data to test the null hypothesis of no difference. In addition, the intraclass correlation coefficient (ICC) was calculated; a value of 0.9918 was obtained, demonstrating that the reliability of the program in calculating the area is 99%. In a second experiment, the burnt skin areas of 80 patients were calculated using the BAI system and the Rule of Nines. The two measuring methods were compared via Student's t-test and the ICC. The hypothesis of no difference between the two measures holds only for deep dermal burns, and the ICC is significantly different, indicating that area estimates obtained with the classical technique can result in a wrong diagnosis of the burnt surface. Copyright © 2011 Elsevier Ltd and ISBI. All rights reserved.
Fast modeling of flux trapping cascaded explosively driven magnetic flux compression generators.
Wang, Yuwei; Zhang, Jiande; Chen, Dongqun; Cao, Shengguang; Li, Da; Liu, Chebo
2013-01-01
To predict the performance of flux-trapping cascaded flux compression generators, a calculation model based on an equivalent circuit is investigated. The system circuit is analyzed according to its operating characteristics in different steps. Flux conservation coefficients are added to the driving terms of the circuit differential equations to account for intrinsic flux losses. To calculate the currents in the circuit by solving the circuit equations, a simple zero-dimensional model is used to calculate the time-varying inductance and dc resistance of the generator. A fast computer code was then programmed based on this calculation model. As an example, a two-staged flux-trapping generator is simulated using this computer code. Good agreement is achieved when comparing the simulation results with the measurements. Furthermore, this fast calculation model can easily be applied to predict the performance of other flux-trapping cascaded flux compression generators with complex structures, such as conical stator or conical armature sections, for design purposes.
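A minimal sketch of the circuit integration step for a single stage, assuming a linearly compressed inductance and a constant dc resistance. All parameter values are hypothetical, and the paper's model additionally derives L(t) and R(t) from a zero-dimensional description of the generator geometry:

```python
def fcg_current(l_of_t, r_dc, i0, t_end, dt=1e-8):
    """Forward-Euler integration of the single-loop circuit equation
        d(L I)/dt = -R I   =>   dI/dt = -(R + dL/dt) I / L
    for a generator whose inductance l_of_t(t) is compressed in time;
    the resistive term models the intrinsic flux loss."""
    t, i = 0.0, i0
    while t < t_end:
        L = l_of_t(t)
        dldt = (l_of_t(t + dt) - L) / dt   # numerical dL/dt
        i += dt * (-(r_dc + dldt) * i / L)
        t += dt
    return i

# Hypothetical stage: 10 uH compressed linearly to 1 uH in 10 us,
# 1 mOhm loop resistance, 10 kA seed current
l = lambda t: 10e-6 - 0.9 * t          # inductance in H, t in s
i_final = fcg_current(l, 1e-3, 1e4, 10e-6)
```

In the lossless limit the current would be multiplied by L0/Lf = 10; the resistive term (and, in the paper's model, the flux conservation coefficients) reduces this ideal gain.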
PyGlobal: A toolkit for automated compilation of DFT-based descriptors.
Nath, Shilpa R; Kurup, Sudheer S; Joshi, Kaustubh A
2016-06-15
Density Functional Theory (DFT)-based global reactivity descriptor calculations have emerged as powerful tools for studying the reactivity, selectivity, and stability of chemical and biological systems. A Python-based module, PyGlobal, has been developed for systematically parsing a typical Gaussian output file and extracting the relevant energies of the HOMO and LUMO. The corresponding global reactivity descriptors are then calculated and the data are saved into a spreadsheet compatible with applications such as Microsoft Excel and LibreOffice. The efficiency of the module was assessed by measuring the processing time for randomly selected Gaussian output files for 1000 molecules. © 2016 Wiley Periodicals, Inc.
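The descriptor step can be sketched as follows. These are the standard conceptual-DFT definitions in the Koopmans-type frontier-orbital approximation (note that conventions for softness differ by a factor of 2); the orbital energies and the output format are illustrative assumptions, since PyGlobal's exact interface is not specified here:

```python
def global_descriptors(e_homo, e_lumo):
    """Koopmans-style global reactivity descriptors from frontier orbital
    energies (both in eV): chemical potential mu = (eH + eL)/2, hardness
    eta = (eL - eH)/2, softness 1/(2 eta), electrophilicity mu**2/(2 eta)."""
    mu = 0.5 * (e_homo + e_lumo)
    eta = 0.5 * (e_lumo - e_homo)
    return {
        "chemical_potential": mu,
        "hardness": eta,
        "softness": 1.0 / (2.0 * eta),
        "electrophilicity": mu * mu / (2.0 * eta),
    }

# Hypothetical frontier orbital energies extracted from a Gaussian output file
d = global_descriptors(e_homo=-9.0, e_lumo=-1.0)
```

A batch tool like the one described would apply this function to each parsed output file and write one spreadsheet row per molecule.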
Rapid automatic keyword extraction for information retrieval and analysis
Rose, Stuart J [Richland, WA; Cowley,; E, Wendy [Richland, WA; Crow, Vernon L [Richland, WA; Cramer, Nicholas O [Richland, WA
2012-03-06
Methods and systems for rapid automatic keyword extraction for information retrieval and analysis. Embodiments can include parsing words in an individual document by delimiters, stop words, or both in order to identify candidate keywords. Word scores for each word within the candidate keywords are then calculated based on a function of co-occurrence degree, co-occurrence frequency, or both. Based on a function of the word scores for words within the candidate keyword, a keyword score is calculated for each of the candidate keywords. A portion of the candidate keywords are then extracted as keywords based, at least in part, on the candidate keywords having the highest keyword scores.
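The scoring pipeline described above can be sketched as follows. The stop-word list and sample text are illustrative, and the patented embodiment covers more variants (e.g., alternative word-score functions) than this degree/frequency choice:

```python
import re

def rake_keywords(text, stop_words):
    """Minimal sketch of rapid automatic keyword extraction: split on stop
    words and punctuation to get candidate keywords (phrases), score each
    word by its co-occurrence degree over frequency, then score each
    candidate as the sum of its word scores."""
    words = re.split(r"[^a-zA-Z]+", text.lower())
    candidates, phrase = [], []
    for w in words:
        if not w or w in stop_words:        # delimiters end a candidate
            if phrase:
                candidates.append(tuple(phrase))
                phrase = []
        else:
            phrase.append(w)
    if phrase:
        candidates.append(tuple(phrase))
    freq, degree = {}, {}
    for c in candidates:
        for w in c:
            freq[w] = freq.get(w, 0) + 1
            degree[w] = degree.get(w, 0) + len(c)   # co-occurrence degree
    scores = {c: sum(degree[w] / freq[w] for w in c) for c in candidates}
    return sorted(scores.items(), key=lambda kv: -kv[1])

stops = {"for", "and", "of", "the", "a", "in"}
ranked = rake_keywords(
    "rapid automatic keyword extraction for information retrieval "
    "and analysis of information", stops)
```

Because a candidate's score is the sum of its word scores, longer multi-word phrases naturally rank high, which is the behavior the claims describe.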
Discrete Fourier Transform in a Complex Vector Space
NASA Technical Reports Server (NTRS)
Dean, Bruce H. (Inventor)
2015-01-01
An image-based phase retrieval technique has been developed that can be used on board a space-based iterative-transform system. Image-based wavefront sensing is computationally demanding due to the floating-point nature of the process. The discrete Fourier transform (DFT) calculation is presented in "diagonal" form: a change of basis is introduced by applying the similarity transform of linear algebra. The method exploits this diagonal structure of the DFT in a special way, so that parts of the calculation do not have to be repeated at each iteration while converging to an acceptable solution for focusing an image.
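The patented diagonal formulation is not reproduced here; as a reference point, the plain matrix form of the DFT that it reorganizes can be sketched as follows:

```python
# The DFT as an explicit matrix-vector product: X[j] = sum_k x[k] * W^(jk)
# with W = exp(-2*pi*i/N). The "diagonal" form in the text factors this
# matrix via a similarity transform; here we only show the direct form.
import cmath

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n))
            for j in range(n)]

X = dft([1, 0, 0, 0])   # a unit impulse has a flat spectrum
```

Each output bin costs N multiply-adds, so the direct form is O(N^2); the point of restructured formulations is to avoid repeating parts of this work across iterations.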
NASA Technical Reports Server (NTRS)
Goel, Narendra S.; Rozehnal, Ivan; Thompson, Richard L.
1991-01-01
A computer-graphics-based model, named DIANA, is presented for generation of objects of arbitrary shape and for calculating bidirectional reflectances and scattering from them, in the visible and infrared region. The computer generation is based on a modified Lindenmayer system approach which makes it possible to generate objects of arbitrary shapes and to simulate their growth, dynamics, and movement. Rendering techniques are used to display an object on a computer screen with appropriate shading and shadowing and to calculate the scattering and reflectance from the object. The technique is illustrated with scattering from canopies of simulated corn plants.
Cai, Guangyu; Sun, Jianfeng; Li, Guangyuan; Zhang, Guo; Xu, Mengmeng; Zhang, Bo; Yue, Chaolei; Liu, Liren
2016-06-10
A self-homodyne laser communication system based on orthogonally polarized binary phase shift keying is demonstrated. The working principles of this method and the structure of a transceiver are described using theoretical calculations. Moreover, the signal-to-noise ratio, sensitivity, and bit error rate are analyzed for the amplifier-noise-limited case. The reported experiment validates the feasibility of the proposed method and demonstrates its advantageous sensitivity as a self-homodyne communication system.
A rapid calculation system for tsunami propagation in Japan by using the AQUA-MT/CMT solutions
NASA Astrophysics Data System (ADS)
Nakamura, T.; Suzuki, W.; Yamamoto, N.; Kimura, H.; Takahashi, N.
2017-12-01
We developed a rapid calculation system for geodetic deformation and tsunami propagation in and around Japan. The system automatically performs forward calculations using the point-source parameters estimated by the AQUA system (Matsumura et al., 2006), which determines the magnitude, hypocenter, and moment tensor of an event occurring in Japan within 3 minutes of the origin time at the earliest. An optimized calculation code developed by Nakamura and Baba (2016) is employed for the calculations on our computer server with 12-core Intel Xeon 2.60 GHz processors. Assuming homogeneous slip on a single fault plane as the source fault, the system calculates the geodetic deformation and tsunami propagation by numerically solving the 2D linear long-wave equations at a grid interval of 1 arc-min for two fault orientations simultaneously, i.e., one fault plane and its conjugate. Because the fault models are based on moment tensor analyses of event data, the system appropriately evaluates tsunami propagation even for unexpected events such as normal faulting in the subduction zone; this differs from evaluating tsunami arrivals and heights from a pre-calculated database built with fault models that assume typical types of faulting in anticipated source areas (e.g., Tatehata, 1998; Titov et al., 2005; Yamamoto et al., 2016). Through complete automation from event detection to output of graphical figures, the calculation results can be made available via e-mail and a web site within 4 minutes of the origin time at the earliest. For moderate-sized events such as M5 to M6 events, the system helps us to rapidly investigate whether the tsunami amplitudes at nearshore and offshore stations exceed the noise level, and to easily identify actual tsunamis at the stations by comparison with the synthetic waveforms.
With source models derived from GNSS data, such evaluations may be difficult because of the low resolution of the sources due to the low signal-to-noise ratio at land stations. For large to huge events in offshore areas, the developed system may be useful for deciding whether to start or stop preparations and precautions against tsunami arrivals, because the calculation results, including the arrival times and heights of the initial and maximum waves, can be made available rapidly before the waves reach coastal areas.
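The 2D linear long-wave solver itself is not shown in the abstract; a hedged 1D analogue of those equations, stepped with a simple staggered finite-difference scheme (grid size, depth, and time step here are illustrative, not the system's actual configuration), looks like this:

```python
# 1D analogue of the linear long-wave (shallow-water) equations:
#   d(eta)/dt = -h du/dx,   du/dt = -g d(eta)/dx
# eta: free-surface elevation, u: depth-averaged velocity (staggered).
import math

def step(eta, u, h, g, dt, dx):
    n = len(eta)
    # update velocity from the free-surface gradient (periodic domain)
    u = [u[i] - g * dt / dx * (eta[(i + 1) % n] - eta[i]) for i in range(n)]
    # update free surface from the flux divergence of the new velocity
    eta = [eta[i] - h * dt / dx * (u[i] - u[i - 1]) for i in range(n)]
    return eta, u

# Gaussian hump over a 4 km deep ocean; CFL requires dt < dx / sqrt(g*h)
n, dx, h, g = 100, 1000.0, 4000.0, 9.81
dt = 0.5 * dx / math.sqrt(g * h)
eta = [math.exp(-((i - n // 2) * dx / 5000.0) ** 2) for i in range(n)]
u = [0.0] * n
for _ in range(200):
    eta, u = step(eta, u, h, g, dt, dx)
```

On a periodic domain the flux-divergence update telescopes, so total water volume is conserved to rounding error, a useful sanity check on any such solver.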
NASA Astrophysics Data System (ADS)
Lv, Z. H.; Li, Q.; Huang, R. W.; Liu, H. M.; Liu, D.
2016-08-01
Based on a discussion of the topology of integrated distributed photovoltaic (PV) power generation and energy storage (ES) systems, in single or mixed configurations, this paper analyzes the grid-connected performance of integrated distributed photovoltaic and energy storage (PV-ES) systems and proposes a comprehensive evaluation index system. A multi-level fuzzy comprehensive evaluation method based on grey correlation degree is then proposed, and the calculations of the weight matrix and fuzzy matrix are presented step by step. Finally, a distributed integrated PV-ES power generation system connected to a 380 V low-voltage distribution network is taken as an example, and some suggestions are made based on the evaluation results.
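The paper's grey-correlation weighting is not detailed in the abstract; the final aggregation step of a fuzzy comprehensive evaluation, B = W · R, can nevertheless be sketched with illustrative weights and memberships:

```python
# Sketch of the aggregation step in a fuzzy comprehensive evaluation:
# B_j = sum_i w_i * r_ij, where W is the weight vector (here assumed to
# come from a grey-correlation analysis) and R the fuzzy membership
# matrix of each evaluation index against the comment grades.

def fuzzy_evaluate(weights, membership):
    """Weighted-average fuzzy operator over the grade columns."""
    grades = len(membership[0])
    return [sum(w * row[j] for w, row in zip(weights, membership))
            for j in range(grades)]

W = [0.5, 0.3, 0.2]            # index weights (normalized, illustrative)
R = [[0.6, 0.3, 0.1],          # membership of index i in the grades
     [0.2, 0.5, 0.3],          # (e.g. good / fair / poor)
     [0.1, 0.4, 0.5]]
B = fuzzy_evaluate(W, R)       # overall grade membership of the system
```

With normalized weights and rows, the result B is itself a membership vector over the grades; the grade with the largest entry is the overall rating.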
New approach to analyzing soil-building systems
Safak, E.
1998-01-01
A new method of analyzing seismic response of soil-building systems is introduced. The method is based on the discrete-time formulation of wave propagation in layered media for vertically propagating plane shear waves. Buildings are modeled as an extension of the layered soil media by assuming that each story in the building is another layer. The seismic response is expressed in terms of wave travel times between the layers, and the wave reflection and transmission coefficients at layer interfaces. The calculation of the response is reduced to a pair of simple finite-difference equations for each layer, which are solved recursively starting from the bedrock. Compared with commonly used vibration formulation, the wave propagation formulation provides several advantages, including the ability to incorporate soil layers, simplicity of the calculations, improved accuracy in modeling the mass and damping, and better tools for system identification and damage detection.
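The interface coefficients this formulation relies on follow from impedance matching; a hedged sketch with illustrative soil properties (the paper's recursive layer solver itself is not reproduced):

```python
# For a vertically propagating plane shear wave hitting a layer
# interface, with impedance Z = rho * beta (density x shear velocity),
# the displacement reflection and transmission coefficients are
#   r = (Z1 - Z2) / (Z1 + Z2),   t = 2*Z1 / (Z1 + Z2) = 1 + r.

def interface_coefficients(rho1, beta1, rho2, beta2):
    z1, z2 = rho1 * beta1, rho2 * beta2
    r = (z1 - z2) / (z1 + z2)   # reflection (incidence from layer 1)
    t = 2.0 * z1 / (z1 + z2)    # transmission into layer 2
    return r, t

# Soft soil layer (1) over a stiffer layer (2); values are illustrative
r, t = interface_coefficients(rho1=1800.0, beta1=200.0,
                              rho2=2200.0, beta2=600.0)
```

The identity t = 1 + r (displacement continuity across the interface) is a quick check that the coefficients are consistent.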
Adaptations in Electronic Structure Calculations in Heterogeneous Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talamudupula, Sai
Modern quantum chemistry deals with electronic structure calculations of unprecedented complexity and accuracy. They demand the full power of high-performance computing and must be in tune with the given architecture for superior efficiency. To make such applications resource-aware, it is desirable to enable their static and dynamic adaptations using some external software (middleware), which may monitor both system availability and application needs, rather than mix science with system-related calls inside the application. The present work investigates scientific application interlinking with middleware, based on the example of the computational chemistry package GAMESS and the middleware NICAN. The existing synchronous model is limited by possible delays due to middleware processing time under sustainable runtime system conditions. The proposed asynchronous and hybrid models aim at overcoming this limitation. When linked with NICAN, the fragment molecular orbital (FMO) method is capable of statically and dynamically adapting its fragment scheduling policy based on computing platform conditions. Significant execution time and throughput gains have been obtained from such static adaptations when the compute nodes have very different core counts. Dynamic adaptations are based on main memory availability at run time: NICAN prompts FMO to postpone scheduling certain fragments if there is not enough memory for their immediate execution. Hence, FMO may be able to complete calculations that it would otherwise abort.
Development of a new multi-modal Monte-Carlo radiotherapy planning system.
Kumada, H; Nakamura, T; Komeda, M; Matsumura, A
2009-07-01
A new multi-modal Monte-Carlo radiotherapy planning system (development code: JCDS-FX) is under development at the Japan Atomic Energy Agency. This system builds on fundamental technologies of JCDS applied to actual boron neutron capture therapy (BNCT) trials in JRR-4. One of the features of JCDS-FX is that PHITS, a multi-purpose Monte-Carlo particle transport code, has been applied to the particle transport calculation. The application of PHITS thus enables evaluation of the total dose given to a patient by a combined-modality therapy. Moreover, JCDS-FX with PHITS can be used for the study of accelerator-based BNCT. To verify the calculation accuracy of JCDS-FX, dose evaluations for neutron irradiation of a cylindrical water phantom and for an actual clinical trial were performed, and the results were compared with calculations by JCDS with MCNP. The verification results demonstrated that JCDS-FX is applicable to BNCT treatment planning in practical use.
NASA Astrophysics Data System (ADS)
Heine, A.; Berger, M.
The classical meaning of motion design is the use of laws of motion with suitable characteristic values, whereas the software MOCAD supports a graphical and interactive mode of operation, among other things through automatic polynomial interpolation. Besides direct coupling to motion control systems, different file formats for data export are offered. The calculation of planar and spatial cam mechanisms is also based on the data generated in the motion design module. Using the example of an intermittent cam mechanism with an inside cam profile, employed as a new drive concept for indexing tables, the influence of motion design on the transmission properties is shown. Another example gives an insight into the calculation and export of envelope curves for cylindrical cam mechanisms. The resulting geometry data can be used to generate realistic 3D models in the CAD system Pro/ENGINEER, using a special data exchange format.
Finding trap stiffness of optical tweezers using digital filters.
Almendarez-Rangel, Pedro; Morales-Cruzado, Beatriz; Sarmiento-Gómez, Erick; Pérez-Gutiérrez, Francisco G
2018-02-01
Obtaining trap stiffness and calibration of the position detection system is the basis of a force measurement using optical tweezers. Both calibration quantities can be calculated using several experimental methods available in the literature. In most cases, stiffness determination and detection system calibration are performed separately, often requiring procedures in very different conditions, and thus confidence of calibration methods is not assured due to possible changes in the environment. In this work, a new method to simultaneously obtain both the detection system calibration and trap stiffness is presented. The method is based on the calculation of the power spectral density of positions through digital filters to obtain the harmonic contributions of the position signal. This method has the advantage of calculating both trap stiffness and photodetector calibration factor from the same dataset in situ. It also provides a direct method to avoid unwanted frequencies that could greatly affect calibration procedure, such as electric noise, for example.
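The authors' filter-based calibration is not reproduced in the abstract; the textbook power-spectrum relations behind this kind of calibration can be sketched as follows (fluid viscosity, bead radius, and corner frequency are illustrative values):

```python
# An overdamped trapped bead has a Lorentzian position PSD with corner
# frequency f_c = kappa / (2*pi*gamma), so once f_c is read off the
# spectrum the trap stiffness is kappa = 2*pi*gamma*f_c, with the drag
# coefficient gamma = 6*pi*eta*r from Stokes' law (SI units throughout).
import math

def drag_coefficient(eta, radius):
    """Stokes drag on a sphere of the given radius in a fluid of viscosity eta."""
    return 6.0 * math.pi * eta * radius

def trap_stiffness(corner_freq, gamma):
    """Trap stiffness (N/m) from the Lorentzian corner frequency (Hz)."""
    return 2.0 * math.pi * gamma * corner_freq

gamma = drag_coefficient(eta=1.0e-3, radius=0.5e-6)   # water, 1 um bead
kappa = trap_stiffness(corner_freq=500.0, gamma=gamma)  # ~3e-5 N/m
```

Determining f_c from the measured spectrum (by Lorentzian fitting or, as in this paper, by digital filters that isolate harmonic contributions) is the experimentally hard part; the conversion to stiffness is then direct.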
NASA Astrophysics Data System (ADS)
Song, Yang; Liu, Zhigang; Wang, Hongrui; Lu, Xiaobing; Zhang, Jing
2015-10-01
Due to the intrinsic nonlinear characteristics and complex structure of the high-speed catenary system, a modelling method is proposed based on the analytical expressions of nonlinear cable and truss elements. A calculation procedure for solving the initial equilibrium state is proposed based on the Newton-Raphson iteration method; the deformed configuration of the catenary system as well as the initial length of each wire can be calculated. The accuracy and validity of the computed initial equilibrium state are verified by comparison with the separate model method, the absolute nodal coordinate formulation, and other methods in the previous literature. Then, the proposed model is combined with a lumped pantograph model and a dynamic simulation procedure is proposed, with accuracy guaranteed by multiple iterative calculations in each time step. The dynamic performance of the proposed model is validated by comparison with EN 50318, the results of finite element method software, and the SIEMENS simulation report, respectively. Finally, the influence of the catenary design parameters (such as the reserved sag and pre-tension) on the dynamic performance is preliminarily analysed using the proposed model.
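The paper's multi-dimensional equilibrium solver is not shown; the Newton-Raphson iteration it builds on can be sketched in its scalar form (the example equation is generic, not the catenary equilibrium system):

```python
# Generic Newton-Raphson iteration for f(x) = 0: repeatedly linearize
# and step x_{k+1} = x_k - f(x_k) / f'(x_k). Equilibrium solvers apply
# the same idea with a residual vector and a tangent stiffness matrix.

def newton_raphson(f, dfdx, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:          # residual small enough: converged
            return x
        x -= fx / dfdx(x)          # Newton step
    raise RuntimeError("Newton-Raphson did not converge")

# Example: solve x^2 - 2 = 0 starting from x0 = 1
root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Convergence is quadratic near the solution, which is why a handful of iterations per load step usually suffices in structural equilibrium calculations.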
Preliminary Monte Carlo calculations for the UNCOSS neutron-based explosive detector
NASA Astrophysics Data System (ADS)
Eleon, C.; Perot, B.; Carasco, C.
2010-07-01
The goal of the FP7 UNCOSS project (Underwater Coastal Sea Surveyor) is to develop a non-destructive explosive detection system based on the associated particle technique, with the aim of improving the security of coastal areas and naval infrastructures where violent conflicts took place. The end product of the project will be a prototype of a complete coastal survey system, including a neutron-based sensor capable of confirming the presence of explosives on the sea bottom. A 3D analysis of prompt gamma rays induced by 14 MeV neutrons will be performed to identify the elements constituting common military explosives, such as C, N, and O. This paper presents calculations performed with the MCNPX computer code to support the ongoing design studies of the UNCOSS collaboration. Detection efficiencies and the time and energy resolutions of the candidate gamma-ray detectors are compared, showing that NaI(Tl) or LaBr3(Ce) scintillators will be suitable for this application. The effect of neutron attenuation and scattering in the seawater, which influences the counting statistics and signal-to-noise ratio, is also studied with calculated neutron time-of-flight and gamma-ray spectra for an underwater TNT target.
Code of Federal Regulations, 2011 CFR
2011-10-01
....171 of this part, into a single per treatment base rate developed from 2007 claims data. The steps to..., or 2009. CMS removes the effects of enrollment and price growth from total expenditures for 2007...
Ernstbrunner, L; Werthel, J-D; Hatta, T; Thoreson, A R; Resch, H; An, K-N; Moroder, P
2016-10-01
The bony shoulder stability ratio (BSSR) allows for quantification of the bony stabilisers in vivo. We aimed to biomechanically validate the BSSR, determine whether joint incongruence affects the stability ratio (SR) of a shoulder model, and determine the correct parameter (glenoid concavity versus humeral head radius) for calculation of the BSSR in vivo. Four polyethylene balls (radii: 19.1 mm to 38.1 mm) were used to mould four fitting sockets of four different depths (3.2 mm to 19.1 mm). The SR was measured in congruent and incongruent biomechanical experimental series. The experimental SR of a congruent system was compared with the SR calculated using the BSSR approach, and the differences in SR between congruent and incongruent experimental conditions were quantified. Finally, the experimental SR was compared with the SR calculated from either the socket concavity radius or the plastic ball radius. The experimental SR is comparable with the calculated SR (mean difference 10%, sd 8%; relative values). The incongruence experiments showed almost no differences (2%, sd 2%). The SR calculated on the basis of the socket concavity radius is superior in predicting the experimental SR (mean difference 10%, sd 9%) compared with the SR calculated from the plastic ball radius (mean difference 42%, sd 55%). The present biomechanical investigation confirmed the validity of the BSSR. Incongruence has no significant effect on the SR of a shoulder model. In the event of an incongruent system, calculation of the BSSR on the basis of the glenoid concavity radius is recommended. Cite this article: L. Ernstbrunner, J-D. Werthel, T. Hatta, A. R. Thoreson, H. Resch, K-N. An, P. Moroder. Biomechanical analysis of the effect of congruence, depth and radius on the stability ratio of a simplistic 'ball-and-socket' joint model. Bone Joint Res 2016;5:453-460. DOI: 10.1302/2046-3758.510.BJR-2016-0078.R1. © 2016 Ernstbrunner et al.
Optimized Vertex Method and Hybrid Reliability
NASA Technical Reports Server (NTRS)
Smith, Steven A.; Krishnamurthy, T.; Mason, B. H.
2002-01-01
A method of calculating the fuzzy response of a system is presented. This method, called the Optimized Vertex Method (OVM), is based upon the vertex method but requires considerably fewer function evaluations. The method is demonstrated by calculating the response membership function of strain-energy release rate for a bonded joint with a crack. The possibility of failure of the bonded joint was determined over a range of loads. After completing the possibilistic analysis, the possibilistic (fuzzy) membership functions were transformed to probability density functions and the probability of failure of the bonded joint was calculated. This approach is called a possibility-based hybrid reliability assessment. The possibility and probability of failure are presented and compared to a Monte Carlo Simulation (MCS) of the bonded joint.
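The OVM itself is not detailed in the abstract; the plain vertex method it optimizes can be sketched as follows (the response function and intervals are illustrative):

```python
# Sketch of the plain vertex method: for fuzzy inputs cut at a given
# alpha level, evaluate the response at every vertex of the resulting
# interval box and take the min/max as the response interval at that
# alpha level. This costs 2^n evaluations for n fuzzy inputs, which is
# the expense the Optimized Vertex Method reduces.
from itertools import product

def vertex_method(func, intervals):
    """intervals: list of (lo, hi) alpha-cuts, one per fuzzy input."""
    values = [func(*v) for v in product(*intervals)]
    return min(values), max(values)

# Illustrative monotone response with two interval-valued inputs
lo, hi = vertex_method(lambda a, b: a * b + a, [(1.0, 2.0), (3.0, 4.0)])
```

Repeating this over a sweep of alpha levels reconstructs the fuzzy membership function of the response, which is what the abstract then transforms into a probability density for the hybrid reliability assessment.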
First-Principles Study of Antimony Doping Effects on the Iron-Based Superconductor CaFe(SbxAs1-x)2
NASA Astrophysics Data System (ADS)
Nagai, Yuki; Nakamura, Hiroki; Machida, Masahiko; Kuroki, Kazuhiko
2015-09-01
We study antimony doping effects on the iron-based superconductor CaFe(SbxAs1-x)2 using first-principles calculations. The calculations reveal that substitution of a doped antimony atom into the chainlike As layers is more stable than substitution into the FeAs layers. This prediction can be checked experimentally. Our results suggest that doping homologous elements into the chainlike As layers, which exist only in the novel 112 system, is responsible for raising the critical temperature. We discuss antimony doping effects on the electronic structure; the calculated band structures with and without antimony doping are found to be similar to each other within our framework.
Clinical applications of advanced rotational radiation therapy
NASA Astrophysics Data System (ADS)
Nalichowski, Adrian
Purpose: With the fast adoption of emerging technologies, it is critical to fully test and understand their limits and capabilities. In this work we investigate a new graphics processing unit (GPU) based treatment planning algorithm and its applications in helical tomotherapy dose delivery. We explore the limits of the system by applying it to challenging clinical cases of total marrow irradiation (TMI) and stereotactic radiosurgery (SRS). We also analyze the feasibility of alternative fractionation schemes for total body irradiation (TBI) and TMI based on reported historical data on lung dose and interstitial pneumonitis (IP) incidence rates. Methods and Materials: An anthropomorphic phantom was used to create TMI plans using the new GPU-based treatment planning system and the existing CPU cluster-based system. Optimization parameters were selected based on clinically used values for field width, modulation factor, and pitch. Treatment plans were also created in the Eclipse treatment planning system (Varian Medical Systems Inc, Palo Alto, CA) using volumetric modulated arc therapy (VMAT) for dose delivery on an iX treatment unit. A retrospective review was performed of 42 publications that reported IP rates along with lung dose, fractionation regimen, dose rate, and chemotherapy. The analysis comprised nearly 3200 patients and 34 unique radiation regimens. Multivariate logistic regression was performed to determine parameters associated with IP and to establish a dose-response function. Results: The results showed very good dosimetric agreement between the GPU- and CPU-calculated plans. The results of the SBRT study show that the GPU planning system can maintain 90% target coverage while meeting all the constraints of the RTOG 0631 protocol. Beam-on time for tomotherapy and flattening-filter-free RapidArc was much shorter than for Vero or CyberKnife. The retrospective data analysis showed that lung dose and cyclophosphamide (Cy) are both predictors of IP in TBI/TMI treatments.
The dose rate was not found to be an independent risk factor for IP. The model failed to establish an accurate dose-response function, but the discrete data indicated a radiation dose threshold of 7.6 Gy (EQD2_repair) and 120 mg/kg of Cy below which no IP cases were reported. Conclusion: The TomoTherapy GPU-based dose engine is capable of calculating TMI treatment plans with plan quality nearly identical to plans calculated using the traditional CPU/cluster-based system, while significantly reducing the time required for optimization and dose calculation. The new system was able to achieve a more uniform dose distribution throughout the target volume and a steeper dose fall-off, resulting in superior OAR sparing compared with the Eclipse treatment planning system for VMAT delivery. The machine optimization parameters tested for TMI cases provide a comprehensive overview of the capabilities of the treatment planning station and the associated helical delivery system. The new system also proved to be dosimetrically compatible with other leading modalities for treatment of small and complicated target volumes, and was even superior when treatment delivery times were compared. These findings demonstrate that the advanced treatment planning and delivery system from TomoTherapy is well suited for treatment of complicated cases such as TMI and SRS, and is often dosimetrically and/or logistically superior to other modalities. The new planning system can easily meet the threshold lung dose constraint established in this study. The results presented here on the capabilities of tomotherapy and on the identified lung dose threshold provide an opportunity to explore alternative fractionation schemes without sacrificing target coverage or lung toxicity. (Abstract shortened by ProQuest.)
Thermodynamic Modeling of Ag-Ni System Combining Experiments and Molecular Dynamic Simulation
NASA Astrophysics Data System (ADS)
Rajkumar, V. B.; Chen, Sinn-wen
2017-04-01
Ag-Ni is a simple and important system with immiscible liquids and (Ag,Ni) phases. Previously, this system has been thermodynamically modeled utilizing certain thermochemical and phase equilibria information based on conjecture. An attempt is made in this study to determine the missing information that is difficult to measure experimentally. The boundaries of the liquid miscibility gap at high temperatures are determined using a pyrometer. The temperature of the liquid ⇌ (Ag) + (Ni) eutectic reaction is measured using differential thermal analysis. Tie-lines of the Ag-Ni system at 1023 K and 1473 K are measured using a conventional metallurgical method. The enthalpy of mixing of the liquid at 1773 K and of the (Ag,Ni) phase at 973 K is calculated by molecular dynamics simulation using a large-scale atomic/molecular massively parallel simulator. These results, along with literature information, are used to model the Gibbs energy of the liquid and (Ag,Ni) phases by a calculation of phase diagrams approach, and the Ag-Ni phase diagram is then calculated.
Research of Litchi Diseases Diagnosis Expertsystem Based on Rbr and Cbr
NASA Astrophysics Data System (ADS)
Xu, Bing; Liu, Liqun
To overcome the bottleneck problems of traditional rule-based reasoning disease diagnosis systems, such as low reasoning efficiency and lack of flexibility, we researched integrated case-based reasoning (CBR) and rule-based reasoning (RBR) technology and put forward a litchi disease diagnosis expert system (LDDES) with an integrated reasoning method. The method uses data mining and knowledge acquisition technology to establish the knowledge base and case library. It adopts rules to guide the retrieval and matching for CBR, and uses association rules and decision tree algorithms to calculate case similarity. The experiment shows that the method can increase the system's flexibility and reasoning ability, and improve the accuracy of litchi disease diagnosis.
Performance Analysis of Stirling Engine-Driven Vapor Compression Heat Pump System
NASA Astrophysics Data System (ADS)
Kagawa, Noboru
Stirling engine-driven vapor compression systems have many unique advantages, including higher thermal efficiency, preferable exhaust gas characteristics, multi-fuel usage, and low noise and vibration, which can play an important role in alleviating environmental and energy problems. This paper introduces a design method for such systems based on reliable mathematical models of the Stirling and Rankine cycles using reliable thermophysical information for refrigerants. The model deals with a combination of a kinematic Stirling engine and a scroll compressor, and some experimental coefficients are used in its formulation. The obtained results show the performance behavior in detail, and the measured performance of the actual system coincides with the calculated results. Furthermore, the calculated results clarify the performance when using alternative refrigerants for R-22.
Three-dimensional polarization algebra for all polarization sensitive optical systems.
Li, Yahong; Fu, Yuegang; Liu, Zhiying; Zhou, Jianhong; Bryanston-Cross, P J; Li, Yan; He, Wenjun
2018-05-28
Using a three-dimensional (3D) coherency vector (9 × 1), we develop a new 3D polarization algebra to calculate the polarization properties of all polarization-sensitive optical systems, especially when the incident optical field is partially polarized or unpolarized. The polarization properties of a high numerical aperture (NA) microscope objective (NA = 1.25, immersed in oil) are analyzed based on the proposed 3D polarization algebra. Correspondingly, the polarization simulation of this high-NA optical system is performed with the commercial software VirtualLAB Fusion. Comparing the theoretical calculations with the polarization simulations yields a perfect match, which demonstrates that this 3D polarization algebra is valid for quantifying the 3D polarization properties of all polarization-sensitive optical systems.
Information pricing based on trusted system
NASA Astrophysics Data System (ADS)
Liu, Zehua; Zhang, Nan; Han, Hongfeng
2018-05-01
Personal information has become a valuable commodity in today's society, so our goal is to develop realistic price points and a pricing system. First, we improve the existing BLP system to prevent cascading incidents and design a 7-layer model. From the cost of encryption in each layer, we develop PI price points. In addition, we use association-rule mining algorithms to calculate the importance of information, in order to optimize the informational hierarchies of different attribute types within a multi-level trusted system. Finally, we use a normal distribution model to predict the encryption-level distribution for users in different classes, and then calculate information prices through a linear programming model with the help of this distribution.
Customer loads of two-wheeled vehicles
NASA Astrophysics Data System (ADS)
Gorges, C.; Öztürk, K.; Liebich, R.
2017-12-01
Customer usage profiles are the most unknown influences in vehicle design targets and they play an important role in durability analysis. This publication presents a customer load acquisition system for two-wheeled vehicles that utilises the vehicle's onboard signals. A road slope estimator was developed to reveal the unknown slope resistance force with the help of a linear Kalman filter. Furthermore, an automated mass estimator was developed to consider the correct vehicle loading. The mass estimation is performed by an extended Kalman filter. Finally, a model-based wheel force calculation was derived, which is based on the superposition of forces calculated from measured onboard signals. The calculated wheel forces were validated by measurements with wheel-load transducers through the comparison of rainflow matrices. The calculated wheel forces correspond with the measured wheel forces in terms of both quality and quantity. The proposed methods can be used to gather field data for improved vehicle design loads.
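The vehicle-specific state and measurement models behind the slope and mass estimators are not given in the abstract; a hedged scalar sketch of the linear Kalman filter machinery they rely on (the process/measurement variances and "measurements" here are illustrative):

```python
# Minimal scalar linear Kalman filter: slowly varying state
#   x_k = x_{k-1} + w  (process noise variance q)
# observed through a noisy measurement
#   z_k = x_k + v      (measurement noise variance r).
# A road-slope estimator uses the same predict/update cycle with a
# richer vehicle dynamics model.

def kalman_update(x, p, z, q, r):
    """One predict/update step for state estimate x with variance p."""
    p = p + q                 # predict: state unchanged, uncertainty grows
    k = p / (p + r)           # Kalman gain
    x = x + k * (z - x)       # correct with the measurement innovation
    p = (1.0 - k) * p         # updated estimate uncertainty
    return x, p

x, p = 0.0, 1.0               # initial estimate and variance
for z in [0.9, 1.1, 1.0, 0.95, 1.05]:   # noisy "slope" measurements
    x, p = kalman_update(x, p, z, q=1e-4, r=0.1)
```

After a few measurements the estimate settles near the measurement mean while the variance p shrinks, which is the behavior the onboard estimators exploit.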
A Novel Continuation Power Flow Method Based on Line Voltage Stability Index
NASA Astrophysics Data System (ADS)
Zhou, Jianfang; He, Yuqing; He, Hongbin; Jiang, Zhuohan
2018-01-01
A novel continuation power flow method based on a line voltage stability index is proposed in this paper. The line voltage stability index is used to select the parameterized lines, which are constantly updated as the load changes. The calculation stages of the continuation power flow, determined by the angle changes of the predicted direction vector of the development trend equation, are also proposed. An adaptive step-length control strategy is then used to calculate the next prediction direction and value according to the calculation stage. The proposed method has a clear physical concept and high computing speed, and it accounts for the local characteristics of voltage instability, reflecting the weak nodes and weak areas in a power system. Because the PV curves are calculated more completely, the proposed method has clear advantages in analysing the voltage stability margin of a large-scale power grid.
Bioregenerative life support system for a lunar base
NASA Astrophysics Data System (ADS)
Liu, H.; Wang, J.; Manukovsky, N. S.; Kovalev, V. S.; Gurevich, Yu. L.
We have studied a modular approach to the construction of a bioregenerative life support system (BLSS) for a lunar base using a soil-like substrate (SLS) for plant cultivation. Calculations of mass flow rates in the BLSS were based mostly on a vegetarian diet and biological conversion of plant residues in the SLS. The plant candidate list for the lunar BLSS includes the following basic species: rice (Oryza sativa), soy (Glycine max), sweet potato (Ipomoea batatas), and wheat (Triticum aestivum). To reduce the time necessary for the transition of the system to steady state, we suggest that the first seeding and sprouting could be done on Earth.
Recruitment recommendation system based on fuzzy measure and indeterminate integral
NASA Astrophysics Data System (ADS)
Yin, Xin; Song, Jinjie
2017-08-01
In this study, we propose a comprehensive evaluation approach based on the indeterminate integral. By introducing the related concepts of the indeterminate integral and their formulas into the recruitment recommendation system, we can calculate the suitability of each job for different applicants, taking as prerequisites the defined importance of each criterion listed in the job advertisements, the associations between different criteria, and subjective assessments. We can then make recommendations to applicants by ranking jobs by suitability score from high to low. Finally, we demonstrate the usefulness and practicality of this system with sample data.
The method of a joint intraday security check system based on cloud computing
NASA Astrophysics Data System (ADS)
Dong, Wei; Feng, Changyou; Zhou, Caiqi; Cai, Zhi; Dan, Xu; Dai, Sai; Zhang, Chuancheng
2017-01-01
The intraday security check is a core application in the dispatching control system. The existing security check calculation uses only the dispatch center's local model and data, which limits its functional margin. This paper presents the design and implementation of an all-grid intraday joint security check system based on cloud computing. To reduce the effect of subarea bad data on the all-grid security check, a new power flow algorithm based on comparison and adjustment against the inter-provincial tie-line plan is presented. A numerical example illustrates the effectiveness and feasibility of the proposed method.
The East London glaucoma prediction score: web-based validation of glaucoma risk screening tool
Stephen, Cook; Benjamin, Longo-Mbenza
2013-01-01
AIM: It is difficult for optometrists and general practitioners to know which patients are at risk of glaucoma. The East London glaucoma prediction score (ELGPS) is a Web-based risk calculator developed to determine glaucoma risk at the time of screening. Multiple risk factors that can be assessed in a low-tech environment are combined to provide a risk assessment, which is extremely useful in settings where access to specialist care is difficult. Use of the calculator is educational; it is a free Web-based service, and data capture is user specific. METHOD: The scoring system is a Web-based questionnaire that captures and then calculates the relative risk of the presence of glaucoma at the time of screening. Three categories of patient are described: unlikely to have glaucoma, glaucoma suspect, and glaucoma. A case-review methodology of patients with a known diagnosis is employed to validate the calculator's risk assessment. RESULTS: Data from the records of 400 patients with an established diagnosis were captured and used to validate the screening tool. The calculated diagnosis agrees with the actual diagnosis 82% of the time. Biostatistical analysis showed: sensitivity = 88%; positive predictive value = 97%; specificity = 75%. CONCLUSION: Analysis of the first 400 patients validates the Web-based screening tool as a good method of screening the at-risk population. Validation is ongoing; the Web-based format will allow wider recruitment across different geographic, population, and personnel variables. PMID:23550097
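As a rough illustration of the reported validation statistics, the sketch below computes sensitivity, specificity, PPV, and overall agreement from a 2x2 confusion table. The counts are invented so that sensitivity and specificity match the reported 88% and 75%; the abstract does not give the raw table, and since PPV depends on the case mix, the PPV here differs from the reported 97%:

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and overall agreement from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    agreement = (tp + tn) / (tp + fp + fn + tn)
    return sens, spec, ppv, agreement

# Invented counts for 200 screened patients (illustrative only):
sens, spec, ppv, agreement = screening_metrics(tp=88, fp=25, fn=12, tn=75)
```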
Funding California Schools: The Revenue Limit System. Technical Appendices
ERIC Educational Resources Information Center
Weston, Margaret
2010-01-01
This document presents the technical appendices accompanying the report, "Funding California Schools: The Revenue Limit System." Included are: (1) Revenue Limit Calculation and Decomposition; (2) Data and Methods; and (3) Base Funding Alternative Simulation Results. (Contains 5 tables and 26 footnotes.) [For the main report,…
Accounting for Teamwork: A Critical Study of Group-Based Systems of Organizational Control.
ERIC Educational Resources Information Center
Ezzamel, Mahmoud; Willmott, Hugh
1998-01-01
Examines the role of accounting calculations in reorganizing manufacturing capabilities of a vertically integrated global retailing company. Introducing teamwork to replace line work extended traditional, hierarchical management control systems. Teamwork's self-managing demands contravened workers' established sense of self-identity as…
NASA Astrophysics Data System (ADS)
Verduzco, Laura E.
The use of hydrogen as an energy carrier has the potential to decrease the amount of pollutants emitted to the atmosphere, significantly reduce our dependence on imported oil, and resolve geopolitical issues related to energy consumption. The current status of hydrogen technology makes it prohibitively expensive and financially risky for most investors to commit the money required for large-scale hydrogen production. Therefore, alternative strategies such as small- and medium-scale hydrogen applications should be implemented during the early stages of the transition to the hydrogen economy in order to test potential markets and technology readiness. While many analysis tools have been built to estimate the requirements of the transition to a hydrogen economy, few have focused on small- and medium-scale hydrogen production, and none has paired financial with socioeconomic costs at the residential level. The computer-based tool (H2POWER) presented in this study calculates the capacity, cost, and socioeconomic impact of the systems needed to meet the energy demands of a home or a community using home and neighborhood refueling units, which are systems that can provide electricity and heat to meet the energy demands of either (1) a home and automobile or (2) a cluster of homes and a number of automobiles. The financial costs of the production, processing, and delivery sub-systems that comprise the refueling units are calculated using cost data for existing technology, normalized to yield capital and net present cost. The monetary value of the externalities (socioeconomic analysis) caused by each system is calculated by H2POWER through a statistical analysis of the costs associated with various externalities. Additionally, H2POWER calculates the financial impact of different penalties and incentives (such as net metering, low interest loans, fuel taxes, and emission penalties) on the cost of the system from the point of view of a developer and a homeowner.
In order to assess the benefits and costs of hydrogen-based alternatives, H2POWER compares the financial and socioeconomic costs of home and neighborhood refueling units to a baseline of "conventional" sources of residential electricity, space heat, water heat, and vehicle fuel. The model can also calculate the "gap" between the financial cost of the technology and the environmental cost of the externalities that are generated using conventional energy sources. H2POWER is a flexible, user-friendly tool that allows the user to specify different production pathways, supplemental power sources (renewable and non-renewable), component characteristics, electricity mixes, and other analysis parameters in order to customize the results to specific projects. The model also has built-in default values for each of the input fields based on national averages, standard technology specifications, and input from experts.
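The net-present-cost normalization described above can be sketched as follows; the discount rate, horizon, and cost figures are hypothetical placeholders, not values from H2POWER:

```python
def net_present_cost(capital, annual_costs, discount_rate):
    """Discount a stream of annual operating costs and add the capital cost.
    annual_costs[t] is the cost incurred at the end of year t+1."""
    return capital + sum(
        c / (1.0 + discount_rate) ** (t + 1)
        for t, c in enumerate(annual_costs)
    )

# Hypothetical refueling-unit figures: $50k capital, $4k/yr O&M over 20 years, 5% rate
npc = net_present_cost(50_000, [4_000] * 20, 0.05)
```

The same discounting applies to the externality costs, so financial and socioeconomic streams can be compared on a common present-value basis.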
NASA Astrophysics Data System (ADS)
Kennedy, A. M.; Lane, J.; Ebert, M. A.
2014-03-01
Plan review systems often allow dose volume histogram (DVH) recalculation as part of a quality assurance process for trials. A review of the algorithms provided by a number of systems indicated that they are often very similar; one notable point of variation between implementations is the location and frequency of dose sampling. This study explored the impact such variations can have on DVH-based plan evaluation metrics (normal tissue complication probability (NTCP) and minimum, mean, and maximum dose) for a plan with small structures placed over regions of high dose gradient. The dose grids considered were exported from the original planning system at a range of resolutions. We found that for the CT-based resolutions used in all but one of the plan review systems (CT, and CT with a guaranteed minimum number of sampling voxels in the x and y directions), results were very similar and changed in a similar manner with changes in dose grid resolution despite the extreme conditions. Differences became noticeable, however, when resolution was increased in the axial (z) direction. Evaluation metrics also varied differently with changing dose grid for CT-based resolutions compared with dose-grid-based resolutions. This suggests that if DVHs are compared between systems that use a different basis for selecting the sampling resolution, it may be important to confirm that a similar resolution was used during calculation.
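A minimal sketch of how a cumulative DVH is built from dose samples, assuming uniform sampling points inside a structure; real plan review systems differ chiefly in where and how densely these points are placed, which is the variation the study examines:

```python
import numpy as np

def cumulative_dvh(dose_samples, bin_width=0.1):
    """Cumulative DVH: fraction of sampled volume receiving >= each dose level.
    dose_samples are the doses at the sampling points inside one structure."""
    doses = np.asarray(dose_samples, dtype=float)
    levels = np.arange(0.0, doses.max() + bin_width, bin_width)
    volume_fraction = np.array([(doses >= d).mean() for d in levels])
    return levels, volume_fraction

# Four sampling points; with sparse sampling of a small structure,
# min/mean/max and the DVH shape are sensitive to where these points fall.
doses = np.array([1.0, 2.0, 2.0, 3.0])
levels, vol = cumulative_dvh(doses, bin_width=1.0)
```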
NASA Astrophysics Data System (ADS)
Zhou, Shiqi
2004-07-01
A universal formalism is proposed for calculating the solvent-mediated potential (SMP) between two equal or unequal solute particles of any shape immersed in a solvent reservoir consisting of atomic particles and/or polymer chains or their mixture, obtained by importing a density functional theory externally into the Ornstein-Zernike (OZ) equation system. Provided that the size asymmetry of the solvent bath components is moderate, the present formalism can calculate the SMP in any complex fluid at the present stage of development of statistical mechanics, and it therefore avoids the limitations of previous approaches to the SMP. Preliminary calculations indicate the reliability of the present formalism.
NASA Astrophysics Data System (ADS)
Zolotorevskii, V. S.; Pozdnyakov, A. V.; Churyumov, A. Yu.
2012-11-01
A combined calculation-experimental study is carried out to improve the strategy of searching for new alloying systems for the development of new casting alloys, using mathematical simulation methods in combination with thermodynamic calculations. The results show the high effectiveness of the applied methods. The real possibility of selecting promising compositions with the required set of casting and mechanical properties is exemplified by alloys with thermally hardened Al-Cu and Al-Cu-Mg matrices, together with poorly soluble additives that form eutectic components, using mainly calculation methods and a minimum number of experiments.
NASA Astrophysics Data System (ADS)
Zhou, Ping; Lin, Hui; Zhang, Qi
2018-01-01
The reference source system is a key factor in the successful location of satellite interference sources. Traditional systems use a mechanically rotated antenna, which leads to slow rotation and a high failure rate, seriously restricting the system's positioning timeliness and constituting an obvious weakness. In this paper, a multi-beam antenna scheme based on a horn array is proposed as the reference source for satellite interference location, as an alternative to the traditional reference source antenna. The new scheme designs a small circularly polarized horn antenna as the array element and proposes a multi-beamforming algorithm based on a planar array. Simulation analyses of the horn antenna pattern, the multi-beamforming algorithm, and the simulated satellite-link cross-ambiguity calculation were carried out. Finally, the cross-ambiguity calculation of the traditional reference source system was also tested. Comparison between the computer simulation results and the actual test results shows that the scheme is scientifically sound and feasible, and clearly superior to the traditional reference source system.
Generalized trajectory surface hopping method based on the Zhu-Nakamura theory
NASA Astrophysics Data System (ADS)
Oloyede, Ponmile; Mil'nikov, Gennady; Nakamura, Hiroki
2006-04-01
We present a generalized formulation of the trajectory surface hopping method applicable to a general multidimensional system. The method is based on the Zhu-Nakamura theory of a nonadiabatic transition and therefore includes the treatment of classically forbidden hops. The method uses a generalized recipe for the conservation of angular momentum after forbidden hops and an approximation for determining a nonadiabatic transition direction which is crucial when the coupling vector is unavailable. This method also eliminates the need for a rigorous location of the seam surface, thereby ensuring its applicability to a wide class of chemical systems. In a test calculation, we implement the method for the DH2+ system, and it shows a remarkable agreement with the previous results of C. Zhu, H. Kamisaka, and H. Nakamura, [J. Chem. Phys. 116, 3234 (2002)]. We then apply it to a diatomic-in-molecule model system with a conical intersection, and the results compare well with exact quantum calculations. The successful application to the conical intersection system confirms the possibility of directly extending the present method to an arbitrary potential of general topology.
CAE "FOCUS" for modelling and simulating electron optics systems: development and application
NASA Astrophysics Data System (ADS)
Trubitsyn, Andrey; Grachev, Evgeny; Gurov, Victor; Bochkov, Ilya; Bochkov, Victor
2017-02-01
Electron optics is a theoretical basis of scientific instrument engineering, and mathematical simulation of the underlying processes is the basis for the contemporary design of complicated electron-optical devices. Problems of numerical mathematical simulation are effectively solved by CAE systems. CAE "FOCUS", developed by the authors, includes fast and accurate methods: the boundary element method (BEM) for electric field calculation, the Runge-Kutta-Fehlberg method for charged-particle trajectory computation with accuracy control, and original methods for finding the conditions for angular and time-of-flight focusing. CAE "FOCUS" is organized as a collection of modules, each of which solves an independent (sub)task. A range of physical and analytical devices, in particular a high-power microfocus X-ray tube, has been developed using this software.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S.; Kim, H.
1995-03-01
Sulfolane is widely used as a solvent for the extraction of aromatic hydrocarbons. Ternary phase equilibrium data are essential for a proper understanding of the solvent extraction process. Liquid-liquid equilibrium data for the systems sulfolane + octane + benzene, sulfolane + octane + toluene, and sulfolane + octane + p-xylene were determined at 298.15, 308.15, and 318.15 K. Tie-line data were satisfactorily correlated by the Othmer and Tobias method. The experimental data were compared with values calculated by the UNIQUAC and NRTL models, and good quantitative agreement was obtained with both. However, the values calculated with the NRTL model were found to be better than those based on the UNIQUAC model.
Alternative power supply systems for remote industrial customers
NASA Astrophysics Data System (ADS)
Kharlamova, N. V.; Khalyasmaa, A. I.; Eroshenko, S. A.
2017-06-01
The paper addresses the problem of alternative power supply of remote industrial clusters with renewable electric energy generation. Following a comparison of different technologies, consideration is given to wind energy. The authors present a methodology for calculating the mean expected wind generation output, based on the Weibull distribution, which provides an effective express tool for preliminary assessment of the required installed generation capacity. The case study is based on real data, including a database of meteorological information, relief characteristics, power system topology, etc. Wind generation feasibility estimation for a specific territory is followed by power flow calculations using the Monte Carlo methodology. Finally, the paper provides a set of recommendations to ensure safe and reliable power supply for the final customers and, subsequently, sustainable development of regions located far from megalopolises and industrial centres.
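The mean expected output calculation can be sketched by integrating an idealized turbine power curve against the Weibull wind speed distribution; the power curve and the shape and scale parameters below are illustrative assumptions, not the paper's site data:

```python
import math

def weibull_pdf(v, k, c):
    """Weibull wind speed density with shape k and scale c (m/s)."""
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

def mean_power_output(power_curve, k, c, v_max=30.0, dv=0.01):
    """Expected turbine output: integrate the power curve against the pdf."""
    v, total = dv, 0.0
    while v < v_max:
        total += power_curve(v) * weibull_pdf(v, k, c) * dv
        v += dv
    return total

def simple_curve(v, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0, p_rated=2000.0):
    """Idealized 2 MW power curve (kW): cubic ramp between cut-in and rated."""
    if v < v_cut_in or v > v_cut_out:
        return 0.0
    if v >= v_rated:
        return p_rated
    return p_rated * (v**3 - v_cut_in**3) / (v_rated**3 - v_cut_in**3)

# Rayleigh-like site (k = 2) with mean wind speed around 7 m/s:
mean_kw = mean_power_output(simple_curve, k=2.0, c=8.0)
```

Dividing the result by the rated power gives the capacity factor, the express figure used for sizing the required installed capacity.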
Modeling and analysis of the solar concentrator in photovoltaic systems
NASA Astrophysics Data System (ADS)
Mroczka, Janusz; Plachta, Kamil
2015-06-01
The paper presents Λ-ridge and V-trough concentrator systems with a low concentration ratio. Calculations and simulations were made in a program created by the authors. The simulation results allow selection of the best parameters for the photovoltaic system: the opening angle between the surface of the photovoltaic module and the mirrors, the resolution of the tracking system, and the material for construction of the concentrator mirrors. The research shows the effect of each of these parameters on the efficiency of the photovoltaic system, together with a method of surface modelling using the BRDF function. The parameters of the concentrator surface (e.g., surface roughness) were calculated using a new algorithm based on the BRDF function, which combines the Torrance-Sparrow and HTSG models. The simulation shows the change in voltage, current, and output power as functions of the system parameters.
Some computer graphical user interfaces in radiation therapy
Chow, James C L
2016-01-01
In this review, five graphical user interfaces (GUIs) used in radiation therapy practice and research are introduced. They are: (1) the superficial X-ray treatment time calculator (SUPCALC) used in superficial X-ray radiation therapy; (2) the electron monitor unit calculator (EMUC) used in electron radiation therapy; (3) the multileaf collimator machine file creator, sliding window intensity modulated radiotherapy (SWIMRT), used to generate fluence maps for research and quality assurance in intensity modulated radiation therapy; (4) the treatment planning system DOSCTP, used to calculate 3D dose distributions with Monte Carlo simulation; and (5) the photon beam monitor unit calculator (PMUC) used in photon beam radiation therapy. A common feature of these GUIs is that each user-friendly interface is linked to complex formulas and algorithms based on various theories that the user does not need to understand; the user only inputs the required information, with help from graphical elements, to produce the desired results. SUPCALC uses the GUI technique to give radiation therapists a convenient way to calculate the treatment time and keep a record for the skin cancer patient. EMUC is an electron monitor unit calculator for electron radiation therapy: instead of doing hand calculations from pre-determined dosimetric tables, the clinical user only inputs the drawing of the electron field in a computer graphical file format, the prescription dose, and the beam parameters, and EMUC calculates the required monitor units for the electron beam treatment. EMUC is based on a semi-empirical sector-integration algorithm. SWIMRT is a multileaf collimator machine file creator that generates the fluence map produced by a medical linear accelerator.
This machine file controls the multileaf collimator to deliver intensity modulated beams for a specific fluence map used in quality assurance or research. DOSCTP is a treatment planning system using computed tomography images: radiation beams (photon or electron) with different energies and field sizes produced by a linear accelerator can be placed in different positions to irradiate the tumour in the patient. DOSCTP is linked to a Monte Carlo simulation engine using the EGSnrc-based code, so that the 3D dose distribution can be determined accurately for radiation therapy; moreover, DOSCTP can be used for treatment planning of patients or small animals. PMUC is a GUI for calculating monitor units from the patient's prescription dose in photon beam radiation therapy; the calculation is based on dose corrections for changes in photon beam energy, treatment depth, field size, jaw position, beam axis, treatment distance, and beam modifiers. All GUIs mentioned in this review were written either in Microsoft Visual Basic.NET or with the MATLAB GUI development tool GUIDE. In addition, all GUIs were verified and tested against measurements to ensure their accuracy was up to clinically acceptable levels for implementation. PMID:27027225
Estimates of electronic coupling for excess electron transfer in DNA
NASA Astrophysics Data System (ADS)
Voityuk, Alexander A.
2005-07-01
Electronic coupling Vda is one of the key parameters that determine the rate of charge transfer through DNA. While there have been several computational studies of Vda for hole transfer, estimates of electronic couplings for excess electron transfer (ET) in DNA remain unavailable. In this paper, an efficient strategy is established for calculating the ET matrix elements between base pairs in a π stack. Two approaches are considered. First, we employ the diabatic-state (DS) method, in which donor and acceptor are represented by radical anions of the canonical base pairs adenine-thymine (AT) and guanine-cytosine (GC). In this approach, similar values of Vda are obtained with the standard 6-31G* and extended 6-31++G** basis sets. Second, the electronic couplings are derived from the lowest unoccupied molecular orbitals (LUMOs) of neutral systems using the generalized Mulliken-Hush or fragment charge methods. Because the radical-anion states of AT and GC are well reproduced by LUMOs of the neutral base pairs calculated without diffuse functions, the estimated values of Vda are in good agreement with the couplings obtained for radical-anion states using the DS method. However, when the calculation of a neutral stack is carried out with diffuse functions, the LUMOs of the system exhibit dipole-bound character and cannot be used for estimating electronic couplings. Our calculations suggest that the ET matrix elements Vda for models containing intrastrand thymine and cytosine bases are substantially larger than the couplings in complexes with interstrand pyrimidine bases. The matrix elements for excess electron transfer are found to be considerably smaller than the corresponding values for hole transfer, and to be very sensitive to structural changes in a DNA stack.
Trnovec, Tomáš; Jusko, Todd A; Šovčíková, Eva; Lancz, Kinga; Chovancová, Jana; Patayová, Henrieta; Palkovičová, L'ubica; Drobná, Beata; Langer, Pavel; Van den Berg, Martin; Dedik, Ladislav; Wimmerová, Soňa
2013-08-01
Toxic equivalency factors (TEFs) are an important component in the risk assessment of dioxin-like human exposures. At present, this concept is based mainly on in vivo animal experiments using oral dosage. Consequently, the current human TEFs derived from mammalian experiments are applicable only to exposure situations in which oral ingestion occurs. Nevertheless, these "intake" TEFs are commonly (but incorrectly) used by regulatory authorities to calculate "systemic" toxic equivalents (TEQs) based on human blood and tissue concentrations, which are used as biomarkers of either exposure or effect. We sought to determine relative effect potencies (REPs) for systemic human concentrations of dioxin-like mixture components, using thyroid volume or serum free thyroxine (FT4) concentration as the outcomes of interest. We used a benchmark-concentration and a regression-based approach to compare the strength of association between each dioxin-like compound and the thyroid end points in 320 adults residing in an organochlorine-polluted area of eastern Slovakia. REPs calculated from thyroid volume and FT4 were similar. The regression coefficient (β)-derived REP data from thyroid volume and FT4 level were correlated with the World Health Organization (WHO) TEF values (Spearman r = 0.69, p = 0.01 and r = 0.62, p = 0.03, respectively). The calculated REPs were mostly within the minimum and maximum values for in vivo REPs derived by other investigators. Our REPs calculated from thyroid end points realistically reflect human exposure scenarios because they are based on chronic, low-dose human exposures and on biomarkers reflecting body burden. Compared with previous results, our REPs suggest higher sensitivity to the effects of dioxin-like compounds.
NASA Astrophysics Data System (ADS)
Satyanto, K. S.; Abang, Z. E.; Arif, C.; Yanuar, J. P. M.
2018-05-01
An automatic water management system for agricultural land was developed based on a mini PC as controller to manage irrigation and drainage. The system was integrated with a perforated pipe network installed below the soil surface to allow water to flow in and out through the network, so that the water table of the land can be set at a certain level. The system was operated using a solar-powered electricity supply to power the water level and soil moisture sensors, the Raspberry Pi controller, and a motorized valve actuator. This study aims to implement the system in controlling the water level of a soybean production field, and further to observe the water footprint and carbon footprint of the soybean production process with the automated system applied. The water level of the field could be controlled at around 19 cm from the base. Crop water requirement was calculated using the Penman-Monteith approach; with a soybean productivity of 3.57 t/ha, the total water footprint of soybean production is 872.01 m3/t. The carbon footprint was calculated for the use of the solar power supply system, and emissions during soybean production were estimated at 1.85 kg of CO2.
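The reported water footprint is consistent with a simple crop-water-use over yield calculation; the 311.3 mm seasonal water use below is back-calculated from the abstract's figures for illustration, not taken from the paper:

```python
def water_footprint(crop_water_use_mm, yield_t_per_ha):
    """Water footprint (m3/t): seasonal crop water use divided by yield.
    A depth of 1 mm over 1 ha corresponds to 10 m3 of water."""
    cwu_m3_per_ha = crop_water_use_mm * 10.0
    return cwu_m3_per_ha / yield_t_per_ha

# 872.01 m3/t at 3.57 t/ha implies roughly 311 mm of seasonal crop water use:
wf = water_footprint(311.3, 3.57)
```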
Solar Absorption Refrigeration System for Air-Conditioning of a Classroom Building in Northern India
NASA Astrophysics Data System (ADS)
Agrawal, Tanmay; Varun; Kumar, Anoop
2015-10-01
Air-conditioning is a basic means of providing human thermal comfort in a building space. The primary aim of the present work is to design an air-conditioning system based on the vapour absorption cycle that uses a renewable energy source for its operation. The building under consideration is a classroom of dimensions 18.5 m × 13 m × 4.5 m located in the Hamirpur district of Himachal Pradesh in India. The cooling load of the building was first calculated using the cooling load temperature difference method to estimate the required cooling capacity of the air-conditioning system. The coefficient of performance of the refrigeration system was computed for various values of strong and weak solution concentration. A solar collector is also designed to provide the heat energy required by the absorption system; this heat is taken from solar energy, which makes the system eco-friendly and sustainable. A computer program was written in MATLAB to calculate the design parameters, and results were obtained for various solution concentrations throughout the year. A cost analysis has also been carried out to compare the absorption refrigeration system with conventional air-conditioners based on the vapour compression cycle.
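The thermal COP of an absorption cycle is the cooling effect divided by the driving heat input (plus the small pump work); a minimal sketch with hypothetical LiBr-water figures, not the paper's design values:

```python
def absorption_cop(q_evaporator, q_generator, w_pump=0.0):
    """Thermal COP of a vapour absorption cycle: cooling delivered
    per unit of driving heat plus pump work (all in consistent units)."""
    return q_evaporator / (q_generator + w_pump)

# Hypothetical figures in kW: 35 kW of cooling from 50 kW of generator heat
cop = absorption_cop(q_evaporator=35.0, q_generator=50.0, w_pump=0.5)
```

Typical single-effect thermal COPs are well below the electrical COPs of vapour compression units, which is why the cost comparison hinges on the solar heat being essentially free.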
Object detection system using SPAD proximity detectors
NASA Astrophysics Data System (ADS)
Stark, Laurence; Raynor, Jeffrey M.; Henderson, Robert K.
2011-10-01
This paper presents an object detection system based on multiple single photon avalanche diode (SPAD) proximity sensors operating on the time-of-flight (ToF) principle, whereby the coordinates of a target object are calculated in a coordinate system relative to the assembly. The system is similar to a touch screen in form and operation, except that the absence of a physical sensing surface provides a novel advantage over most existing touch screen technologies. The sensors are controlled by FPGA-based firmware, and each proximity sensor in the system measures the range from the sensor to the target object. A software algorithm calculates the x-y coordinates of the target object from the distance measurements of at least two separate sensors and the known relative positions of these sensors. The existing proximity sensors were capable of determining the distance to an object with centimetric accuracy and were modified to obtain a wide field of view in the x-y axes with a low beam angle in z, in order to provide as large a detection area as possible. The design and implementation of the firmware, electronic hardware, mechanics, and optics are covered in the paper. Possible future work includes characterisation with alternative proximity sensor designs, as this is the component that determines the highest achievable accuracy of the system.
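The two-sensor coordinate calculation reduces to intersecting two range circles. A sketch under the assumption that the sensors sit at (0, 0) and (d, 0) and the target lies in the half-plane y > 0 (the detection area); the paper's own algorithm may differ in detail:

```python
import math

def locate(r1, r2, d):
    """Intersect two range circles: sensors at (0, 0) and (d, 0),
    target assumed in the half-plane y > 0."""
    x = (r1**2 - r2**2 + d**2) / (2.0 * d)
    y_sq = r1**2 - x**2
    if y_sq < 0:
        raise ValueError("ranges inconsistent with sensor spacing")
    return x, math.sqrt(y_sq)

# Target at (0.3, 0.4) m with sensors 1 m apart:
r1 = math.hypot(0.3, 0.4)          # range from sensor at (0, 0)
r2 = math.hypot(0.3 - 1.0, 0.4)    # range from sensor at (1, 0)
x, y = locate(r1, r2, 1.0)
```

With more than two sensors, the same geometry over-determines the position and a least-squares fit would reduce the impact of the centimetric range noise.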
A Novel Sensor System for Measuring Wheel Loads of Vehicles on Highways
Zhang, Wenbin; Suo, Chunguang; Wang, Qi
2008-01-01
With the development of highway transportation and trade, vehicle weigh-in-motion (WIM) technology has become a key technology for measuring traffic loads. In this paper, a novel WIM system based on monitoring pavement strain responses in rigid pavement was investigated. In this WIM system, multiple low-cost, lightweight, small-volume, high-accuracy embedded concrete strain sensors were used as WIM sensors to measure rigid pavement strain responses. To verify the feasibility of the method, a system prototype based on multiple sensors was designed and deployed on a relatively busy freeway. Field calibration and tests were performed with known two-axle truck wheel loads, and the measurement errors were calculated against the static weights measured with a static weighbridge; the resulting calibration constant enables the weights of other vehicles to be calculated. Calibration and test results for individual sensors and for three-sensor fusions are both provided. Repeatability, sources of error, and weight accuracy are discussed. The results showed that the proposed method is feasible and has high accuracy. Furthermore, a sample-mean approach fusing multiple individual sensors can provide better performance than individual sensors. PMID:27873952
Patient‐specific CT dosimetry calculation: a feasibility study
Xie, Huchen; Cheng, Jason Y.; Ning, Holly; Zhuge, Ying; Miller, Robert W.
2011-01-01
Current estimation of radiation dose from computed tomography (CT) scans on patients has relied on the measurement of Computed Tomography Dose Index (CTDI) in standard cylindrical phantoms, and calculations based on mathematical representations of “standard man”. Radiation dose to both adult and pediatric patients from a CT scan has been a concern, as noted in recent reports. The purpose of this study was to investigate the feasibility of adapting a radiation treatment planning system (RTPS) to provide patient‐specific CT dosimetry. A radiation treatment planning system was modified to calculate patient‐specific CT dose distributions, which can be represented by dose at specific points within an organ of interest, as well as organ dose‐volumes (after image segmentation) for a GE Light Speed Ultra Plus CT scanner. The RTPS calculation algorithm is based on a semi‐empirical, measured correction‐based algorithm, which has been well established in the radiotherapy community. Digital representations of the physical phantoms (virtual phantom) were acquired with the GE CT scanner in axial mode. Thermoluminescent dosimeter (TLDs) measurements in pediatric anthropomorphic phantoms were utilized to validate the dose at specific points within organs of interest relative to RTPS calculations and Monte Carlo simulations of the same virtual phantoms (digital representation). Congruence of the calculated and measured point doses for the same physical anthropomorphic phantom geometry was used to verify the feasibility of the method. The RTPS algorithm can be extended to calculate the organ dose by calculating a dose distribution point‐by‐point for a designated volume. Electron Gamma Shower (EGSnrc) codes for radiation transport calculations developed by National Research Council of Canada (NRCC) were utilized to perform the Monte Carlo (MC) simulation. In general, the RTPS and MC dose calculations are within 10% of the TLD measurements for the infant and child chest scans. 
With respect to the dose comparisons for the head, the RTPS dose calculations are slightly higher (10%–20%) than the TLD measurements, while the MC results were within 10% of the TLD measurements. The advantage of the algebraic dose calculation engine of the RTPS is a substantially reduced computation time (minutes vs. days) relative to Monte Carlo calculations, as well as providing patient‐specific dose estimation. It also provides the basis for a more elaborate reporting of dosimetric results, such as patient specific organ dose volumes after image segmentation. PACS numbers: 87.55.D‐, 87.57.Q‐, 87.53.Bn, 87.55.K‐ PMID:22089016
FPGA Implementation of Heart Rate Monitoring System.
Panigrahy, D; Rakshit, M; Sahu, P K
2016-03-01
This paper describes a field programmable gate array (FPGA) implementation of a system that calculates the heart rate from the electrocardiogram (ECG) signal. Once the heart rate is calculated, tachycardia, bradycardia, or a normal heart rate can easily be detected. The ECG is a diagnostic tool routinely used to assess the electrical activity and muscular function of the heart. Heart rate is calculated by detecting the R peaks in the ECG signal. Providing a portable, continuous heart rate monitoring system for patients using ECG requires dedicated hardware. An FPGA provides easy testability and allows faster implementation and verification of a new design. We have proposed a five-stage methodology using basic VHDL blocks such as addition, multiplication, and data conversion (real to fixed point and vice versa). Our proposed heart rate calculation (R-peak detection) method has been validated using the 48 first-channel ECG records of the MIT-BIH arrhythmia database. It shows an accuracy of 99.84%, a sensitivity of 99.94%, and a positive predictive value of 99.89%. Our proposed method outperforms other well-known methods on pathological ECG signals and has been successfully implemented in an FPGA.
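The heart-rate step described above is simple to sketch outside hardware: given R-peak sample indices and the sampling rate, the rate in beats per minute follows from the mean R-R interval. A minimal Python sketch (not the paper's VHDL pipeline; the 60/100 bpm thresholds are the conventional adult resting limits):

```python
def heart_rate_bpm(r_peaks, fs):
    """Mean heart rate (beats/min) from R-peak sample indices at sampling rate fs (Hz)."""
    if len(r_peaks) < 2:
        raise ValueError("need at least two R peaks")
    rr = [(b - a) / fs for a, b in zip(r_peaks, r_peaks[1:])]  # R-R intervals (s)
    return 60.0 / (sum(rr) / len(rr))

def classify(bpm, brady=60.0, tachy=100.0):
    # conventional adult resting thresholds
    if bpm < brady:
        return "bradycardia"
    if bpm > tachy:
        return "tachycardia"
    return "normal"
```

For example, R peaks one second apart at fs = 360 Hz give 60 bpm, classified as normal.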
Determining the nuclear data uncertainty on MONK10 and WIMS10 criticality calculations
NASA Astrophysics Data System (ADS)
Ware, Tim; Dobson, Geoff; Hanlon, David; Hiles, Richard; Mason, Robert; Perry, Ray
2017-09-01
The ANSWERS Software Service is developing a number of techniques to better understand and quantify uncertainty on calculations of the neutron multiplication factor, k-effective, in nuclear fuel and other systems containing fissile material. The uncertainty on the calculated k-effective arises from a number of sources, including nuclear data uncertainties, manufacturing tolerances, modelling approximations and, for Monte Carlo simulation, stochastic uncertainty. For determining the uncertainties due to nuclear data, a set of application libraries have been generated for use with the MONK10 Monte Carlo and the WIMS10 deterministic criticality and reactor physics codes. This paper overviews the generation of these nuclear data libraries by Latin hypercube sampling of JEFF-3.1.2 evaluated data based upon a library of covariance data taken from JEFF, ENDF/B, JENDL and TENDL evaluations. Criticality calculations have been performed with MONK10 and WIMS10 using these sampled libraries for a number of benchmark models of fissile systems. Results are presented which show the uncertainty on k-effective for these systems arising from the uncertainty on the input nuclear data.
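Latin hypercube sampling, as used above to generate the perturbed nuclear data libraries, is easy to sketch: each dimension is split into n equal-width strata and each stratum receives exactly one sample. A minimal generic sketch on the unit cube (not the ANSWERS covariance machinery; mapping to nuclear-data perturbations would go through each parameter's inverse CDF):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    """Sample n_samples points in [0, 1)^n_dims so that every dimension has
    exactly one point in each of the n_samples equal-width strata."""
    rng = np.random.default_rng(seed)
    offsets = rng.random((n_samples, n_dims))   # position inside each stratum
    strata = np.column_stack([rng.permutation(n_samples) for _ in range(n_dims)])
    return (strata + offsets) / n_samples
```

Compared with plain random sampling, the stratification guarantees every marginal is covered evenly even for small sample counts.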
An approximate methods approach to probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.
1989-01-01
A major research and technology program in Probabilistic Structural Analysis Methods (PSAM) is currently being sponsored by the NASA Lewis Research Center with Southwest Research Institute as the prime contractor. This program is motivated by the need to accurately predict structural response in an environment where the loadings, the material properties, and even the structure may be considered random. The heart of PSAM is a software package which combines advanced structural analysis codes with a fast probability integration (FPI) algorithm for the efficient calculation of stochastic structural response. The basic idea of PSAM is simple: make an approximate calculation of system response, including calculation of the associated probabilities, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The resulting deterministic solution should give a reasonable and realistic description of performance-limiting system responses, although some error will be inevitable. If the simple model has correctly captured the basic mechanics of the system, however, including the proper functional dependence of stress, frequency, etc. on design parameters, then the calculated response sensitivities may be of significantly higher accuracy.
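The idea of a cheap approximate response plus a fast probability estimate can be illustrated with a first-order second-moment calculation on a deliberately simple response function: a hypothetical rod stress under a Gaussian load. This is a stand-in for the actual FPI algorithm, not the PSAM software:

```python
import math

def fosm_failure_prob(mu_load, sd_load, area, yield_stress):
    """First-order second-moment estimate of P(stress > yield) for the
    simplified response stress = load / area with a Gaussian load
    (a stand-in for fast probability integration, not the FPI code)."""
    mu_s = mu_load / area
    sd_s = sd_load / area                        # response is linear in the load
    beta = (yield_stress - mu_s) / sd_s          # reliability (safety) index
    return 0.5 * math.erfc(beta / math.sqrt(2.0))  # Phi(-beta)
```

Because this toy response is linear in its single Gaussian input, the first-order result is exact here; for nonlinear responses it is only the leading approximation, which is where the sensitivity information emphasized in the abstract matters.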
NASA Astrophysics Data System (ADS)
Choi, Garam; Lee, Won Bo
Metal alloys, especially Al-based alloys, are commonly used materials in various industrial applications. In this paper, Al-Cu alloys with varying Al-Cu ratios were investigated with first-principles calculations using density functional theory, and the electronic transport properties of the alloys were calculated using Boltzmann transport theory. The results show that the transport properties decrease, nonlinearly, with increasing Cu content at moderate to high temperatures; this is attributed to various scattering effects inferred from calculations within the relaxation-time approximation. For the Al-Cu alloy system, where reliable experimental data are hard to find for many compositions, these theoretical predictions support the understanding and estimation of the thermal and electrical properties.
12 CFR 652.95 - Failure to meet capital requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Section 652.95 Banks and Banking FARM CREDIT ADMINISTRATION FARM CREDIT SYSTEM FEDERAL AGRICULTURAL MORTGAGE CORPORATION FUNDING AND FISCAL AFFAIRS Risk-Based Capital Requirements § 652.95 Failure to meet... your risk-based capital level calculated according to § 652.65, your minimum capital requirements...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gevorkyan, A. S., E-mail: g-ashot@sci.am; Sahakyan, V. V.
We study classical 1D Heisenberg spin glasses in the framework of the nearest-neighbor model. Based on the Hamilton equations, we obtained a system of recurrence equations which allows node-by-node calculation of a spin chain. It is shown that calculation from the first principles of classical mechanics leads to an ℕℙ-hard problem, which, however, in the limit of statistical equilibrium can be solved by a ℙ algorithm. For the partition function of the ensemble, a new representation is offered in the form of a one-dimensional integral of the spin chain's energy distribution.
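For the nearest-neighbor classical Heisenberg chain in equilibrium, the one-dimensional-integral character of the partition function is already visible at the single-bond level, where the integral has a closed form. A small numerical check of this textbook analogue (not the authors' energy-distribution representation):

```python
import numpy as np

def bond_partition(x, n=20001):
    """Single-bond partition function of the classical nearest-neighbor
    Heisenberg chain: z(x) = (1/2) * integral_0^pi exp(x cos t) sin t dt,
    with x = beta*J.  Closed form: sinh(x)/x."""
    theta = np.linspace(0.0, np.pi, n)
    return 0.5 * np.trapz(np.exp(x * np.cos(theta)) * np.sin(theta), theta)
```

For an open chain the bonds factorize, so the full partition function is a power of z; this factorization is what makes the equilibrium 1D problem tractable.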
NASA Technical Reports Server (NTRS)
Prandtl, L
1924-01-01
The most important part of the resistance or drag of a wing system, the induced drag, can be calculated theoretically when the distribution of lift on the individual wings is known. The calculation is based upon the assumption that the lift on the wings is distributed along the wing in proportion to the ordinates of a semi-ellipse. Formulas and numerical tables are given for calculating the drag. In this connection, the most favorable arrangements of biplanes and triplanes are discussed and the results are further elucidated by means of numerical examples.
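For a monoplane, the semi-elliptic distribution assumed above gives the familiar closed form D_i = L^2 / (pi * q * b^2), or C_Di = C_L^2 / (pi * AR) in coefficient form; the biplane and triplane interference factors tabulated in the report are omitted here. A quick numerical sketch:

```python
import math

def induced_drag(lift, dyn_pressure, span):
    """Induced drag of a monoplane with semi-elliptic lift distribution:
    D_i = L^2 / (pi * q * b^2), where q is the dynamic pressure."""
    return lift**2 / (math.pi * dyn_pressure * span**2)

def induced_drag_coeff(cl, aspect_ratio):
    # nondimensional form of the same result: C_Di = C_L^2 / (pi * AR)
    return cl**2 / (math.pi * aspect_ratio)
```

With L = q*S*C_L and AR = b^2/S, the two forms are algebraically identical.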
NASA Astrophysics Data System (ADS)
Guillou, Sylvain; Barbry, Nathaly; Nguyen, Kim Dan
A non-hydrostatic, vertical two-dimensional numerical model is proposed for calculating free-surface flows. The model is based on solving the full Navier-Stokes equations by a finite-difference method coupled with Chorin's projection method. An adaptive Eulerian grid in the sigma-coordinate system is used. The model permits the calculation of surface waves in estuarine and coastal zones. A benchmark test on soliton propagation is carried out to validate the model.
2014-12-01
from standard HSE06 hybrid functional with α = 0.25 and ω = 0.11 bohr⁻¹ and b) from HSE with α = 0.093 and ω = 0.11 bohr⁻¹...better agreement for the band gap value for future calculations, a systematic study was conducted for the (α, ω) parameter space of the HSE ...orthogonal). Future HSE calculations will be performed with the updated parameters. Fig. 7 Density of States of PEEK based on the optimized
NASA Astrophysics Data System (ADS)
Letsoin, Sri Murniani Angelina; Kolyaan, Yuliana; Cahyadi, Dedy
2017-02-01
The causes of maternal mortality can be divided into direct and indirect causes. Among the indirect causes are the difficulty of reaching health services and a lack of knowledge about pregnancy. Meanwhile, the Android smartphone share of the communications-technology market has grown from 46.9% to 68.1%, while other devices such as BlackBerry have dropped from 11.5% to 4.8%. This growth presents an opportunity for software developers to design applications for Android. The aim of this study was to help pregnant women find information about nutritional health and dietary restrictions, and to calculate gestational age and nutritional needs based on the stage of pregnancy. The information system was designed using UML and the Eclipse IDE with the Java programming language, with MySQL as the database. Testing showed that the Android-based nutrition information system can help pregnant women obtain health and nutrition information, such as nutrients, calories, and dietary restrictions to avoid from the first through the ninth month of pregnancy, as well as the calculation of gestational age.
System Characterization Results for the QuickBird Sensor
NASA Technical Reports Server (NTRS)
Holekamp, Kara; Ross, Kenton; Blonski, Slawomir
2007-01-01
An overall system characterization was performed on several DigitalGlobe QuickBird image products by the NASA Applied Research & Technology Project Office (formerly the Applied Sciences Directorate) at the John C. Stennis Space Center. This system characterization incorporated geopositional accuracy assessments, a spatial resolution assessment, and a radiometric calibration assessment. Geopositional assessments of standard georeferenced multispectral products were obtained using an array of accurately surveyed geodetic targets evenly spaced throughout a scene. Geopositional accuracy was calculated in terms of circular error. Spatial resolution of QuickBird panchromatic imagery was characterized based on edge response measurements using edge targets and the tilted-edge technique. Relative edge response was estimated as a geometric mean of normalized edge response differences measured in two directions of image pixels at points distanced from the edge by -0.5 and 0.5 of ground sample distance. A reflectance-based vicarious calibration approach, based on ground-based measurements and radiative transfer calculations, was used to estimate at-sensor radiance. These values were compared to those measured by the sensor to determine the sensor's radiometric accuracy. All imagery analyzed was acquired between fall 2005 and spring 2006. These characterization results were compared to previous years' results to identify any temporal drifts or trends.
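Circular error of the kind reported above is commonly summarized as CE90, the 90th percentile of the radial distances between surveyed and image-derived target positions. A small sketch (the percentile convention is an assumption; the abstract does not state which statistic was used):

```python
import numpy as np

def circular_error(dx, dy, pct=90.0):
    """Circular error at a given percentile (CE90 by default) from easting and
    northing residuals between surveyed targets and image-derived positions."""
    radial = np.hypot(np.asarray(dx, float), np.asarray(dy, float))
    return np.percentile(radial, pct)
```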
Method of the Determination of Exterior Orientation of Sensors in Hilbert Type Space.
Stępień, Grzegorz
2018-03-17
The following article presents a new isometric transformation algorithm based on the transformation in a newly normed Hilbert type space. The presented method is based on so-called virtual translations, already known in advance, of two relative oblique orthogonal coordinate systems (the interior and exterior orientation of sensors) to a common point known in both systems. Each of the systems is translated along its axis (the systems have common origins) and at the same time the angular relative orientation of both coordinate systems is constant. The translation of both coordinate systems is defined by the spatial norm determining the length of vectors in the new Hilbert type space. As such, the displacement of the two relative oblique orthogonal systems is reduced to zero. This makes it possible to directly calculate the rotation matrix of the sensor. The next and final step is the return translation of the system along an already known track. The method can be used for large rotation angles. The method was verified in laboratory conditions for the test data set and measurement data (field data). The accuracy of the results in the laboratory test is on the level of 10⁻⁶ of the input data. This confirmed the correctness of the assumed calculation method. The method is a further development of the author's 2017 Total Free Station (TFS) transformation to several centroids in Hilbert type space. This is the reason why the method is called Multi-Centroid Isometric Transformation (MCIT). MCIT is very fast and enables, by reducing to zero the translation of the two relative oblique orthogonal coordinate systems, direct calculation of the exterior orientation of the sensors.
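Once both systems share a common origin, the rotation-recovery problem that MCIT solves directly can also be posed as the classical least-squares orthogonal Procrustes problem and solved by SVD (the Kabsch method, shown here for comparison; this is not the author's Hilbert-space algorithm):

```python
import numpy as np

def rotation_between(a, b):
    """Least-squares rotation R with b ~= R @ a for 3xN point sets that
    already share a common origin (classical Kabsch/SVD solution)."""
    u, _, vt = np.linalg.svd(a @ b.T)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against an improper rotation
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T
```

Like MCIT, this formulation has no small-angle restriction, so it recovers large rotation angles exactly on noise-free data.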
First Human Brain Imaging by the jPET-D4 Prototype With a Pre-Computed System Matrix
NASA Astrophysics Data System (ADS)
Yamaya, Taiga; Yoshida, Eiji; Obi, Takashi; Ito, Hiroshi; Yoshikawa, Kyosan; Murayama, Hideo
2008-10-01
The jPET-D4 is a novel brain PET scanner which aims to achieve not only high spatial resolution but also high scanner sensitivity by using 4-layer depth-of-interaction (DOI) information. The dimensions of a system matrix for the jPET-D4 are 3.3 billion lines of response times 5 million image elements when a standard field-of-view (FOV) of 25 cm diameter is sampled with (1.5 mm)³ voxels. The size of the system matrix is estimated as 117 petabytes (PB) at 8 bytes per element. An on-the-fly calculation is usually used to deal with such a huge system matrix. However, we cannot avoid extending the calculation time when we improve the accuracy of system modeling. In this work, we implemented an alternative approach based on pre-calculation of the system matrix. A histogram-based 3D OS-EM algorithm was implemented on a desktop workstation with 32 GB memory installed. The 117 PB system matrix was compressed under the limited amount of computer memory by (1) eliminating zero elements, (2) applying the DOI compression (DOIC) method and (3) applying rotational symmetry and an axial shift property of the crystal arrangement. Spanning, which degrades axial resolution, was not applied. The system modeling and the DOIC method, which had been validated in 2D image reconstruction, were expanded into a 3D implementation. In particular, a new system model including the DOIC transformation was introduced to suppress resolution loss caused by the DOIC method. Experimental results showed that the jPET-D4 has almost uniform spatial resolution of better than 3 mm over the FOV. Finally, the first human brain images were obtained with the jPET-D4.
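The quoted 117 PB follows directly from the stated matrix dimensions at 8 bytes per element, reading PB as binary petabytes (2^50 bytes):

```python
lors = 3.3e9                           # lines of response
voxels = 5.0e6                         # (1.5 mm)^3 image elements over the 25 cm FOV
size_pib = lors * voxels * 8 / 2**50   # 8 bytes per element, in 2^50-byte petabytes
print(round(size_pib))                 # -> 117
```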
Reliability model derivation of a fault-tolerant, dual, spare-switching, digital computer system
NASA Technical Reports Server (NTRS)
1974-01-01
A computer-based reliability projection aid, tailored specifically for application in the design of fault-tolerant computer systems, is described. Its more pronounced characteristics include the facility for modeling systems with two distinct operational modes, measuring the effect of both permanent and transient faults, and calculating conditional system coverage factors. The underlying conceptual principles, mathematical models, and computer program implementation are presented.
NASA Astrophysics Data System (ADS)
Hoi, Bui Dinh; Davoudiniya, Masoumeh; Yarmohammadi, Mohsen
2018-04-01
Based on tight-binding calculations considering nearest neighbors and the Green's function technique, we show that a magnetic phase transition in both semiconducting and metallic armchair graphene nanoribbons with widths ranging from 9.83 Å to 69.3 Å can be observed in the presence of electrons injected by doping. This transition is explained by the temperature-dependent static charge susceptibility through calculation of the correlation function of charge density operators. This work shows that the charge concentration of dopants in such systems plays a crucial role in determining the magnetic phase. A variety of multicritical points, such as transition temperatures and maximum susceptibilities, are compared for the undoped and doped cases. Our findings show that there exist two different transition temperatures and maximum susceptibilities depending on the ribbon width in doped structures. Another remarkable point is the invalidity (validity) of Fermi liquid theory in nanoribbon-based systems at weak (strong) dopant concentrations. The obtained results on the magnetic phase transition in such systems create new potential for magnetic graphene nanoribbon-based devices.
Nag, Sudip; Kale, Nitin S; Rao, V; Sharma, Dinesh K
2009-01-01
Piezoresistive micro-cantilevers are an interesting bio-sensing tool whose base resistance (R) changes by a few parts per million (ΔR) when deflected. Measuring such a small deviation has always been a challenge due to noise. An advanced and reliable ΔR/R measurement scheme is presented in this paper which can sense resistance changes down to 6 parts per million. The measurement scheme includes half-bridge-connected micro-cantilevers with mismatch compensation, precision op-amp-based filters and amplifiers, and a lock-in-amplifier-based detector. The actuating sine wave is applied from a function generator and the output DC voltage is displayed on a digital multimeter. Calibration was performed and the instrument sensitivity calculated. An experimental setup using a probe station is discussed that demonstrates the combined performance of the measurement system and SU8-polysilicon cantilevers. The deflection sensitivity of these polymeric cantilevers is calculated. The system will be highly useful for detecting biomarkers such as myoglobin and troponin that are released into the blood during or after heart attacks.
Radiation Field Forming for Industrial Electron Accelerators Using Rare-Earth Magnetic Materials
NASA Astrophysics Data System (ADS)
Ermakov, A. N.; Khankin, V. V.; Shvedunov, N. V.; Shvedunov, V. I.; Yurov, D. S.
2016-09-01
The article describes a radiation field forming system for industrial electron accelerators designed to produce a uniform distribution of linear charge density at the surface of the item being irradiated, perpendicular to the direction of its motion. Its main element is a non-linear quadrupole lens made with rare-earth magnetic materials. The proposed system has a number of advantages over traditional beam scanning systems that use electromagnets, including easier product irradiation planning, lower instantaneous local dose rate, smaller size, and lower cost. Calculation results are provided for a 10 MeV industrial electron accelerator, as well as measurement results for the current distribution in a prototype built based on these calculations.
A Very Simple Method to Calculate the (Positive) Largest Lyapunov Exponent Using Interval Extensions
NASA Astrophysics Data System (ADS)
Mendes, Eduardo M. A. M.; Nepomuceno, Erivelton G.
2016-12-01
In this letter, a very simple method to calculate the positive Largest Lyapunov Exponent (LLE) based on the concept of interval extensions and using the original equations of motion is presented. The exponent is estimated from the slope of the line derived from the lower-bound error when considering two interval extensions of the original system. It is shown that the algorithm is robust, fast, and easy to implement, and can be considered an alternative to other algorithms available in the literature. The method has been successfully tested on five well-known systems: the Logistic, Hénon, Lorenz, and Rössler equations and the Mackey-Glass system.
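The method is short enough to sketch in full for the logistic map: iterate two algebraically identical forms of the map (two "interval extensions"), which round differently in floating point, and fit the slope of the log of their divergence before it saturates. For r = 4 the exact LLE is ln 2 ≈ 0.693. A minimal sketch of the idea (not the authors' code):

```python
import numpy as np

def lle_logistic(r=4.0, x0=0.1, n=100):
    """Largest Lyapunov exponent of the logistic map estimated from the
    lower-bound error between two interval extensions of the same map."""
    xa = xb = x0
    diffs = np.empty(n)
    for i in range(n):
        xa = r * xa * (1.0 - xa)        # extension 1: r*x*(1-x)
        xb = r * xb - r * xb * xb       # extension 2: r*x - r*x^2
        diffs[i] = abs(xa - xb)
    idx = np.flatnonzero(diffs > 0)     # skip steps where both round identically
    logd = np.log(diffs[idx])
    # fit only the exponential-growth region, before the error saturates at O(1)
    stop = np.argmax(logd >= -2.0) if np.any(logd >= -2.0) else len(logd)
    slope, _ = np.polyfit(idx[:stop], logd[:stop], 1)
    return slope
```

The attraction of the approach is exactly what the abstract claims: no Jacobians, no tangent-space evolution, just the original equations run twice.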
Locational Marginal Pricing in the Campus Power System at the Power Distribution Level
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hao, Jun; Gu, Yi; Zhang, Yingchen
2016-11-14
In the development of the smart grid at the distribution level, the realization of real-time nodal pricing is one of the key challenges. The research work in this paper implements and studies the methodology of locational marginal pricing at the distribution level based on a real-world distribution power system. The pricing mechanism utilizes optimal power flow to calculate the corresponding distribution nodal prices. Both direct-current optimal power flow and alternating-current optimal power flow are utilized to calculate and analyze the nodal prices. The University of Denver campus power grid is used as the power distribution system test bed to demonstrate the pricing methodology.
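The economic content of a locational marginal price can be illustrated on a toy two-bus system: the price at a bus is the marginal cost of serving one more unit of load there, and it jumps when a line congests. A deliberately simplified sketch (merit-order dispatch and a finite difference instead of a full OPF; all numbers hypothetical):

```python
def dispatch_cost(d2, line_cap=80.0, c_cheap=10.0, c_exp=30.0):
    """Least-cost dispatch for a toy 2-bus system: cheap remote generation
    ($10/MWh) reaches the load bus over a line of limited capacity; an
    expensive local unit ($30/MWh) serves the remainder."""
    g1 = min(d2, line_cap)   # import up to the thermal limit of the line
    g2 = d2 - g1             # local generation covers what the line cannot
    return c_cheap * g1 + c_exp * g2

def lmp_bus2(d2, eps=1e-3):
    # LMP at the load bus: cost of serving one more MW of load there
    return (dispatch_cost(d2 + eps) - dispatch_cost(d2)) / eps
```

While the line is uncongested the whole system clears at $10/MWh; once the 80 MW limit binds, the $30/MWh local unit sets the price at the load bus. That divergence of bus prices under congestion is the signal a distribution-level LMP is meant to expose.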
Nathenson, Manuel
1984-01-01
The amount of thermal energy in high-temperature geothermal systems (>150 °C) in the United States has been calculated by estimating the temperature, area, and thickness of each identified system. These data, along with a general model for recoverability of geothermal energy and a calculation that takes account of the conversion of thermal energy to electricity, yield a resource estimate of 23,000 MWe for 30 years. The undiscovered component was estimated based on multipliers of the identified resource as either 72,000 or 127,000 MWe for 30 years depending on the model chosen for the distribution of undiscovered energy as a function of temperature.
Adaptive imaging through far-field turbulence
NASA Astrophysics Data System (ADS)
Troxel, Steven E.; Welsh, Byron M.; Roggemann, Michael C.
1993-11-01
This paper presents a new method for calculating the field angle dependent average OTF of an adaptive optic system and compares this method to calculations based on geometric optics. Geometric optics calculations are shown to be inaccurate due to the diffraction effects created by far-field turbulence and the approximations made in the atmospheric parameters. Our analysis includes diffraction effects and properly accounts for the effect of the atmospheric turbulence scale sizes. We show that for any atmospheric C_n^2 profile, the actual OTF is always better than the OTF calculated using geometric optics. The magnitude of the difference between the calculation methods is shown to be dependent on the amount of far-field turbulence and the values of the outer scale dimension.
Massaroni, Carlo; Cassetta, Eugenio; Silvestri, Sergio
2017-10-01
Respiratory assessment can be carried out using motion capture systems. A geometrical model is needed to compute the breathing volume as a function of time from the markers' trajectories. This study describes a novel model to compute volume changes and calculate respiratory parameters using a motion capture system. The novel method, i.e., the prism-based method, computes the volume enclosed within the chest by defining 82 prisms from the 89 markers attached to the subject's chest. Volumes computed with this method are compared to spirometry volumes and to volumes computed by a conventional method based on tetrahedral decomposition of the chest wall and integrated in a commercial motion capture system. Eight healthy volunteers were enrolled and 30 seconds of quiet breathing data were collected from each of them. Results show better agreement between volumes computed by the prism-based method and spirometry (discrepancy of 2.23%, R^2 = 0.94) than between volumes computed by the conventional method and spirometry (discrepancy of 3.56%, R^2 = 0.92). The proposed method also showed better performance in the calculation of respiratory parameters. Our findings open up prospects for the further use of the new method in breathing assessment via motion capture systems.
Kumada, H; Saito, K; Nakamura, T; Sakae, T; Sakurai, H; Matsumura, A; Ono, K
2011-12-01
Treatment planning for boron neutron capture therapy generally utilizes Monte-Carlo methods for calculation of the dose distribution. The new treatment planning system JCDS-FX employs the multi-purpose Monte-Carlo code PHITS to calculate the dose distribution. JCDS-FX allows building a precise voxel model consisting of pixel-based voxel cells at a scale of 0.4×0.4×2.0 mm³ per voxel in order to perform high-accuracy dose estimation, e.g. for calculating the dose distribution in a human body. However, the miniaturization of the voxel size increases calculation time considerably. The aim of this study is to investigate sophisticated modeling methods which can perform Monte-Carlo calculations for human geometry efficiently. Thus, we devised a new voxel modeling method, the "Multistep Lattice-Voxel method," which can configure a voxel model that combines different voxel sizes by applying the lattice function repeatedly. To verify the performance of the calculation with the modeling method, several calculations for human geometry were carried out. The results demonstrated that the Multistep Lattice-Voxel method enabled the precise voxel model to reduce calculation time substantially while maintaining high-accuracy dose estimation.
Using Financial Incentives to Motivate Staff: A Program that Works.
ERIC Educational Resources Information Center
Calhoun, A. Brian; Lestina, Ray
1986-01-01
Explains Triton College's incentive/bonus system used to promote the involvement and retention of Employee Development Institute staff. The six-step system involves determining departmental profit, establishing minimum profit figures and bonus base, calculating the bonus pool, determining individual bonus shares, adding special programing bonuses,…
The Use of Magnetoencephalography in Evaluating Human Performance
1991-06-01
determines the head cartesian coordinate system, and calculates the locations of the dipole sets in this reference frame. This system is based on an optical ...differences in brain activity are found between imagers and nonimagers, the brain areas which seem to be involved will be localized. 25 3. The poor
The New Southern FIA Data Compilation System
V. Clark Baldwin; Larry Royer
2001-01-01
In general, the major national Forest Inventory and Analysis annual inventory emphasis has been on database design and not on data processing and the calculation of various new attributes. Two key programming techniques required for efficient data processing are indexing and modularization. The Southern Research Station Compilation System utilizes modular and indexing...
An Approach in Radiation Therapy Treatment Planning: A Fast, GPU-Based Monte Carlo Method.
Karbalaee, Mojtaba; Shahbazi-Gahrouei, Daryoush; Tavakoli, Mohammad B
2017-01-01
An accurate and fast radiation dose calculation is essential for successful radiotherapy. The aim of this study was to implement a new graphics processing unit (GPU)-based radiation therapy treatment planning system for accurate and fast dose calculation in radiotherapy centers. A program was written for parallel execution on a GPU. The code was validated against EGSnrc/DOSXYZnrc. Moreover, a semi-automatic, rotary, asymmetric phantom was designed and produced using bone, lung, and soft-tissue-equivalent materials. All measurements were performed using a MapCHECK dosimeter. The accuracy of the code was validated using the experimental data obtained from the anthropomorphic phantom as the gold standard. The findings showed that, compared with DOSXYZnrc in the virtual phantom, most of the voxels (>95%) met a <3% dose-difference or 3 mm distance-to-agreement (DTA) criterion. Moreover, for the anthropomorphic phantom, compared to the MapCHECK dose measurements, a <5% dose-difference or 5 mm DTA was observed. The fast calculation speed and high accuracy of the GPU-based Monte Carlo method in dose calculation may be useful in routine radiation therapy centers as the core component of a treatment planning verification system.
Uranium phase diagram from first principles
NASA Astrophysics Data System (ADS)
Yanilkin, Alexey; Kruglov, Ivan; Migdal, Kirill; Oganov, Artem; Pokatashkin, Pavel; Sergeev, Oleg
2017-06-01
This work is devoted to the investigation of the uranium phase diagram up to a pressure of 1 TPa and a temperature of 15 kK based on density functional theory. First, a comparison of pseudopotential and full-potential calculations is carried out for the different uranium phases. In the second step, the phase diagram at zero temperature is investigated by means of the program USPEX and pseudopotential calculations. Stable and metastable structures with close energies are selected. In order to obtain the phase diagram at finite temperatures, a preliminary selection of stable phases is made by free energy calculation based on the small-displacement method. For the remaining candidates, accurate values of the free energy are obtained by means of the thermodynamic integration method (TIM). For this purpose, quantum molecular dynamics simulations are carried out at different volumes and temperatures. Interatomic potentials based on machine learning are developed in order to treat the large systems and long times required by TIM. The potentials reproduce the free energy with an accuracy of 1-5 meV/atom, which is sufficient for the prediction of phase transitions. The equilibrium curves of the different phases are obtained from the free energies. The melting curve is calculated by a modified Z-method with the developed potential.
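The thermodynamic integration step can be illustrated on a system with a known answer: blending two classical harmonic wells, where the exact result is ΔF = (kT/2)·ln(k1/k0). A small sketch with exact Gaussian sampling standing in for the molecular dynamics (purely illustrative; no relation to the uranium potentials):

```python
import numpy as np

def ti_free_energy(k0, k1, kt=1.0, n_lambda=41, n_samples=100000, seed=0):
    """Thermodynamic integration dF = integral_0^1 <dU/dlam>_lam dlam between
    two classical harmonic wells U_lam(x) = [(1-lam)k0 + lam*k1] x^2 / 2.
    Exact answer: (kT/2) * ln(k1/k0)."""
    rng = np.random.default_rng(seed)
    lambdas = np.linspace(0.0, 1.0, n_lambda)
    means = []
    for lam in lambdas:
        k_eff = (1.0 - lam) * k0 + lam * k1
        x = rng.normal(0.0, np.sqrt(kt / k_eff), n_samples)  # Boltzmann ensemble at lam
        means.append(np.mean(0.5 * (k1 - k0) * x * x))       # dU/dlam = (k1-k0) x^2 / 2
    return np.trapz(means, lambdas)
```

In the real workflow the Gaussian draws are replaced by quantum molecular dynamics snapshots at each coupling value, which is exactly why cheap machine-learned potentials are needed.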
Validation of the CME Geomagnetic forecast alerts under COMESEP alert system
NASA Astrophysics Data System (ADS)
Dumbovic, Mateja; Srivastava, Nandita; Khodia, Yamini; Vršnak, Bojan; Devos, Andy; Rodriguez, Luciano
2017-04-01
An automated space weather alert system has been developed under the EU FP7 project COMESEP (COronal Mass Ejections and Solar Energetic Particles: http://comesep.aeronomy.be) to forecast solar energetic particle (SEP) and coronal mass ejection (CME) risk levels at Earth. The COMESEP alert system uses the automated detection tool CACTus to detect potentially threatening CMEs, the drag-based model (DBM) to predict their arrival, and the CME geo-effectiveness tool (CGFT) to predict their geomagnetic impact. Whenever CACTus detects a halo or partial-halo CME and issues an alert, DBM calculates its arrival time at Earth and CGFT calculates its geomagnetic risk level. The geomagnetic risk level is calculated from an estimate of the CME arrival probability and its likely geo-effectiveness, as well as an estimate of the geomagnetic-storm duration. We present an evaluation of the CME risk level forecasts of the COMESEP alert system based on a study of geo-effective CMEs observed during 2014. The validation of the forecast tool is done by comparing the forecasts with observations. In addition, we test the success rate of the automatic forecasts (without human intervention) against forecasts with human intervention using advanced versions of DBM and CGFT (self-standing tools available at the Hvar Observatory website: http://oh.geof.unizg.hr). The results indicate that the success rate of the forecast is higher with human intervention and more advanced tools. This work has received funding from the European Commission FP7 Project COMESEP (263252). We acknowledge the support of Croatian Science Foundation under the project 6212 „Solar and Stellar Variability".
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Y; Mazur, T; Green, O
Purpose: The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. Methods: We first translated PENELOPE from FORTRAN to C++ and validated that the translation produced equivalent results. Then we adapted the C++ code to CUDA in a workflow optimized for GPU architecture. We expanded upon the original code to include voxelized transport boosted by Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, we incorporated the vendor-provided MRIdian head model into the code. We performed a set of experimental measurements on MRIdian to examine the accuracy of both the head model and gPENELOPE, and then applied gPENELOPE toward independent validation of patient doses calculated by MRIdian’s KMC. Results: We achieve an average acceleration factor of 152 compared to the original single-thread FORTRAN implementation with the original accuracy preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1) and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). Conclusions: We developed a Monte Carlo simulation platform based on a GPU-accelerated version of PENELOPE. We validated that both the vendor-provided head model and the fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next generation MR-IGRT systems.
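The 2%/2 mm gamma criterion quoted above combines a dose-difference and a distance-to-agreement test into one index; a point passes when the minimum combined metric over the comparison distribution is at most 1. A minimal 1D global-gamma sketch (illustrative only, not the evaluation code used in the study):

```python
import numpy as np

def gamma_1d(ref, ev, pos, dose_tol=0.02, dist_tol=2.0):
    """Global 1D gamma index: for each reference point, the minimum over all
    evaluated points of sqrt(dose_diff^2 + distance^2) in tolerance units
    (e.g. 2%/2 mm); a point passes when gamma <= 1."""
    ref, ev, pos = (np.asarray(a, float) for a in (ref, ev, pos))
    gam = np.empty(len(ref))
    for i in range(len(ref)):
        dd = (ev - ref[i]) / (dose_tol * ref.max())  # normalized dose difference
        dx = (pos - pos[i]) / dist_tol               # normalized distance to agreement
        gam[i] = np.sqrt(dd * dd + dx * dx).min()
    return gam
```

The passing rate reported in such studies is the fraction of reference points with gamma at or below 1.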
A Bluetooth/PDR Integration Algorithm for an Indoor Positioning System.
Li, Xin; Wang, Jian; Liu, Chunyan
2015-09-25
This paper proposes two schemes for indoor positioning that fuse Bluetooth beacons with a pedestrian dead reckoning (PDR) technique to provide meter-level positioning without additional infrastructure. For the PDR approach, a more effective multi-threshold step detection algorithm is used to improve positioning accuracy. Accounting for pedestrians' different walking patterns, such as walking or running, the paper compares multiple step length calculation models to determine a linear computation model and its parameters. To address the deviation between the true heading and the orientation-sensor reading, a heading estimation method with real-time compensation is proposed, based on a Kalman filter with map geometry information. The corrected heading suppresses the accumulation of positioning error and improves the positioning accuracy of PDR. Moreover, this paper implements two positioning approaches that integrate Bluetooth and PDR. The first is a PDR-based positioning method with map matching and position correction through Bluetooth; it requires little computation and has low maintenance costs. The second is a fusion calculation method that uses the pedestrian's moving status (direct movement or making a turn) to adaptively determine the noise parameters in an Extended Kalman Filter (EKF) system. This method works well in eliminating various artifacts, including the "go and back" phenomenon caused by the instability of the Bluetooth-based positioning system and the "cross-wall" phenomenon due to the accumulated errors of the PDR algorithm. Experiments performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus showed that the proposed scheme can reliably achieve 2-meter precision.
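The linear step-length model and the dead-reckoning position update described above can be sketched in a few lines. The coefficients `a` and `b` below are illustrative placeholders, since the fitted values are not given in the abstract:

```python
import math

def step_length(step_frequency_hz, a=0.37, b=0.227):
    """Linear step-length model L = a * f + b (meters).

    a and b are hypothetical coefficients; the paper fits them per
    walking pattern (walking vs. running).
    """
    return a * step_frequency_hz + b

def pdr_update(x, y, heading_rad, step_len):
    """Dead-reckon one step: advance the position along the current heading."""
    return (x + step_len * math.cos(heading_rad),
            y + step_len * math.sin(heading_rad))
```

Each detected step advances the PDR track by `step_length` along the (Kalman-corrected) heading; any heading bias therefore accumulates linearly with distance, which is why the map-aided heading compensation matters.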
Modulation of electronic structures of bases through DNA recognition of protein.
Hagiwara, Yohsuke; Kino, Hiori; Tateno, Masaru
2010-04-21
The effects of environmental structures on the electronic states of functional regions in a fully solvated DNA·protein complex were investigated using combined ab initio quantum mechanics/molecular mechanics calculations. A complex of a transcription factor, PU.1, and its target DNA was used for the calculations. The effect of solvent on the energies of the molecular orbitals (MOs) of some DNA bases strongly correlates with the degree to which the protein masks those bases from the solvent. In the complex, PU.1 varies this masking among the DNA bases by directly recognizing them through hydrogen bonds and by inducing changes of the DNA structure away from the canonical one. The strong correlation found in this study is thus the first evidence of a close quantitative relationship between the recognition modes of DNA bases and the energy levels of the corresponding MOs, revealing that the electronic state of each base is highly regulated and organized by the protein's DNA recognition. Other biological macromolecular systems can be expected to possess similar modulation mechanisms, suggesting that this finding provides a novel basis for understanding the regulatory functions of biological macromolecular systems.
NASA Astrophysics Data System (ADS)
Maskaeva, L. N.; Fedorova, E. A.; Yusupov, R. A.; Markov, V. F.
2018-05-01
The potentiometric titration of tin chloride SnCl2 is performed in the concentration range of 0.00009-1.1 mol/L with a solution of sodium hydroxide NaOH. According to potentiometric titration data based on modeling equilibria in the SnCl2-H2O-NaOH system, basic equations are generated for the main processes, and instability constants are calculated for the resulting hydroxo complexes and equilibrium constants of low-soluble tin(II) compounds. The data will be of interest for specialists in the field of theory of solutions.
Unified Description of Inelastic Propensity Rules for Electron Transport through Nanoscale Junctions
NASA Astrophysics Data System (ADS)
Paulsson, Magnus; Frederiksen, Thomas; Ueba, Hiromu; Lorente, Nicolás; Brandbyge, Mads
2008-06-01
We present a method to analyze the results of first-principles based calculations of electronic currents including inelastic electron-phonon effects. This method allows us to determine the electronic and vibrational symmetries in play, and hence to obtain the so-called propensity rules for the studied systems. We show that only a few scattering states—namely those belonging to the most transmitting eigenchannels—need to be considered for a complete description of the electron transport. We apply the method to first-principles calculations of four different systems and obtain the propensity rules in each case.
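The eigenchannel truncation above rests on a standard construction: the channel transmissions are the eigenvalues of t†t, where t is the transmission amplitude matrix, and channels with near-zero eigenvalues can be dropped. A small sketch (the matrix here is a toy example, not output of a first-principles code):

```python
import numpy as np

def transmission_eigenchannels(t):
    """Eigenchannel transmissions from a transmission amplitude matrix t.

    The eigenvalues of t^dagger t lie in [0, 1]; dropping channels with
    eigenvalue near zero is the truncation the analysis relies on.
    Returned in descending order.
    """
    vals = np.linalg.eigvalsh(t.conj().T @ t)
    return np.sort(vals)[::-1]
```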
Precise calculation of the local pressure tensor in Cartesian and spherical coordinates in LAMMPS
NASA Astrophysics Data System (ADS)
Nakamura, Takenobu; Kawamoto, Shuhei; Shinoda, Wataru
2015-05-01
An accurate and efficient algorithm for calculating the 3D pressure field has been developed and implemented in the open-source molecular dynamics package LAMMPS. Additionally, an algorithm to compute the pressure profile along the radial direction in spherical coordinates has also been implemented. The latter is particularly useful for systems with spherical symmetry, such as micelles and vesicles. These methods yield precise pressure fields based on the Irving-Kirkwood contour integration and are particularly useful for biomolecular force fields. The present methods are applied to several systems, including a buckled membrane and a vesicle.
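The bookkeeping behind a radial pressure profile can be illustrated by binning per-atom contributions into spherical shells and normalizing by shell volume. This simplified per-atom binning is not the Irving-Kirkwood contour integration the paper implements; it only shows the shell geometry and accumulation:

```python
import numpy as np

def radial_pressure_profile(positions, per_atom_virials, kinetic, r_max, n_bins):
    """Crude radial pressure profile from per-atom contributions.

    positions: (N, 3) array centered on the sphere's center.
    per_atom_virials, kinetic: (N,) arrays of scalar contributions per atom.
    Returns an (n_bins,) array: summed contributions divided by shell volume.
    """
    r = np.linalg.norm(positions, axis=1)
    edges = np.linspace(0.0, r_max, n_bins + 1)
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    # Map each atom to its shell; clip keeps atoms at r == r_max in the last bin.
    idx = np.clip(np.digitize(r, edges) - 1, 0, n_bins - 1)
    p = np.zeros(n_bins)
    np.add.at(p, idx, kinetic + per_atom_virials)   # unbuffered accumulation
    return p / shell_vol
```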
Nexus: A modular workflow management system for quantum simulation codes
NASA Astrophysics Data System (ADS)
Krogel, Jaron T.
2016-01-01
The management of simulation workflows represents a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.
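The core of any such workflow manager is dependency resolution: tasks run only after their prerequisites complete. A generic sketch of that resolution step (this is not Nexus's actual API; task names are hypothetical):

```python
def resolve_order(workflow):
    """Return an execution order for simulation tasks given their dependencies.

    workflow: dict mapping task name -> list of prerequisite task names.
    Raises ValueError on a dependency cycle.
    """
    order, done, in_progress = [], set(), set()

    def visit(task):
        if task in done:
            return
        if task in in_progress:
            raise ValueError(f"dependency cycle at {task!r}")
        in_progress.add(task)
        for dep in workflow.get(task, []):
            visit(dep)                 # prerequisites are scheduled first
        in_progress.discard(task)
        done.add(task)
        order.append(task)

    for task in workflow:
        visit(task)
    return order
```

For example, a DFT-then-QMC chain like the QMCPACK/Quantum Espresso workflows mentioned above would schedule the geometry relaxation before the DFT run, and the DFT run before the QMC calculation.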
Pfeiffer, Florian; Rauhut, Guntram
2011-10-13
Accurate anharmonic frequencies are provided for molecules of current research interest, i.e., diazirines, diazomethane, the corresponding fluorinated and deuterated compounds, their dioxygen analogs, and others. Vibrational-state energies were obtained from state-specific vibrational multiconfiguration self-consistent field theory (VMCSCF) based on multilevel potential energy surfaces (PES) generated from explicitly correlated coupled cluster, CCSD(T)-F12a, and double-hybrid density functional, B2PLYP, calculations. To accelerate the vibrational structure calculations, a configuration selection scheme as well as a polynomial representation of the PES have been exploited. Because experimental data are scarce for these systems, many calculated frequencies of this study are predictions and may guide experiments to come.
Efficient calculation of luminance variation of a luminaire that uses LED light sources
NASA Astrophysics Data System (ADS)
Goldstein, Peter
2007-09-01
Many luminaires have an array of LEDs that illuminate a lenslet-array diffuser in order to create the appearance of a single, extended source with a smooth luminance distribution. Designing such a system is challenging because luminance calculations for a lenslet array generally involve tracing millions of rays per LED, which is computationally intensive and time-consuming. This paper presents a technique for calculating an on-axis luminance distribution by tracing only one ray per LED per lenslet. A multiple-LED system is simulated with this method, and with Monte Carlo ray-tracing software for comparison. Accuracy improves, and computation time decreases by at least five orders of magnitude with this technique, which has applications in LED-based signage, displays, and general illumination.
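The one-ray-per-LED-per-lenslet bookkeeping can be sketched as a double loop over sources and lenslet centers. The inverse-square, cosine-weighted accumulation below is a deliberate simplification for illustration; it is not the paper's actual radiometric model, and the geometry (LEDs below a lenslet plane at z = 0) is an assumption:

```python
import math

def lenslet_luminance_map(led_positions, lenslet_centers, led_intensity):
    """Toy on-axis estimate: one ray from each LED to each lenslet center.

    led_positions: iterable of (x, y, z) with z != 0 (LEDs off the lenslet plane).
    lenslet_centers: iterable of (x, y) in the z = 0 plane.
    Returns one accumulated value per lenslet.
    """
    result = []
    for lx, ly in lenslet_centers:
        total = 0.0
        for ex, ey, ez in led_positions:
            dx, dy, dz = lx - ex, ly - ey, -ez
            d2 = dx * dx + dy * dy + dz * dz
            cos_theta = abs(dz) / math.sqrt(d2)   # incidence angle on the plane
            total += led_intensity * cos_theta / d2
        result.append(total)
    return result
```

With N LEDs and M lenslets this costs N×M ray evaluations, versus the millions of Monte Carlo rays per LED mentioned above, which is where the orders-of-magnitude speedup comes from.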
Impact Ignition and Combustion Behavior of Amorphous Metal-Based Reactive Composites
NASA Astrophysics Data System (ADS)
Mason, Benjamin; Groven, Lori; Son, Steven
2013-06-01
Recently published molecular dynamics simulations have shown that metal-based reactive powder composites containing at least one amorphous component could offer improved reaction performance, because amorphous materials have zero heat of fusion in addition to high energy densities; potential uses include structural energetic materials and enhanced blast materials. To investigate the feasibility of these systems, thermochemical equilibrium calculations were performed on various amorphous metal/metalloid-based reactive systems, with an emphasis on commercially available or easily manufactured amorphous metals, such as Zr- and Ti-based amorphous alloys, in combination with carbon, boron, and aluminum. Material combinations were chosen based on the calculations and material availability. Initial materials were either mixed in a Resodyn mixer or mechanically activated by high-energy ball milling, and the microstructure of the milled material was characterized using X-ray diffraction, optical microscopy, and scanning electron microscopy. The mechanical impact response and combustion behavior of select reactive systems were characterized using the Asay shear impact experiment, in which impact ignition thresholds, ignition delays, combustion velocities, and temperatures were quantified and reported. Funding from the Defense Threat Reduction Agency (DTRA), Grant Number HDTRA1-10-1-0119, Counter-WMD basic research program, Dr. Suhithi M. Peiris, program director, is gratefully acknowledged.
NASA Astrophysics Data System (ADS)
Jian, Le; Cao, Wang; Jintao, Yang; Yinge, Wang
2018-04-01
This paper describes the design of a dynamic voltage restorer (DVR) that can simultaneously protect several sensitive loads from voltage sags in a region of an MV distribution network. A novel reference voltage calculation method based on zero-sequence voltage optimisation is proposed for this DVR to optimise cost-effectiveness in compensation of voltage sags with different characteristics in an ungrounded neutral system. Based on a detailed analysis of the characteristics of voltage sags caused by different types of faults and the effect of the wiring mode of the transformer on these characteristics, the optimisation target of the reference voltage calculation is presented with several constraints. The reference voltages under all types of voltage sags are calculated by optimising the zero-sequence component, which can reduce the degree of swell in the phase-to-ground voltage after compensation to the maximum extent and can improve the symmetry degree of the output voltages of the DVR, thereby effectively increasing the compensation ability. The validity and effectiveness of the proposed method are verified by simulation and experimental results.
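The zero-sequence optimisation above starts from the standard symmetrical-component decomposition of the three phase voltages. A minimal sketch of that decomposition (phasors as Python complex numbers); the optimisation itself, which adjusts the zero-sequence term subject to the paper's constraints, is not reproduced here:

```python
import cmath

def sequence_components(va, vb, vc):
    """Symmetrical-component decomposition of three phase-voltage phasors.

    Returns (zero, positive, negative) sequence components using the
    standard Fortescue transform.
    """
    a = cmath.exp(2j * cmath.pi / 3)          # 120-degree rotation operator
    v0 = (va + vb + vc) / 3
    v1 = (va + a * vb + a * a * vc) / 3
    v2 = (va + a * a * vb + a * vc) / 3
    return v0, v1, v2
```

A balanced positive-sequence set yields zero for both the zero- and negative-sequence components, so any nonzero v0 measured during a sag is the quantity the DVR reference calculation can trade off.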
Petersen, Philippe A D; Silva, Andreia S; Gonçalves, Marcos B; Lapolli, André L; Ferreira, Ana Maria C; Carbonari, Artur W; Petrilli, Helena M
2014-06-03
In this work, perturbed angular correlation (PAC) spectroscopy is used to study differences in the nuclear quadrupole interactions of Cd probes in DNA molecules of mice infected with the Y-strain of Trypanosoma cruzi. The possibility of investigating the local genetic alterations in DNA, which occur along generations of mice infected with T. cruzi, using hyperfine interactions obtained from PAC measurements and density functional theory (DFT) calculations in DNA bases is discussed. A comparison of DFT calculations with PAC measurements could determine the type of Cd coordination in the studied molecules. To the best of our knowledge, this is the first attempt to use DFT calculations and PAC measurements to investigate the local environment of Cd ions bound to DNA bases in mice infected with Chagas disease. The obtained results also allowed the detection of local changes occurring in the DNA molecules of different generations of mice infected with T. cruzi, opening the possibility of using this technique as a complementary tool in the characterization of complicated biological systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larraga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Hernandez-Bojorquez, M.
2010-12-07
This work presents the beam data commissioning and dose calculation validation of the first Monte Carlo (MC) based treatment planning system (TPS) installed in Mexico. According to the manufacturer specifications, the beam data commissioning needed for this model includes several in-air and water profiles, depth dose curves, head-scatter factors and output factors (6x6, 12x12, 18x18, 24x24, 42x42, 60x60, 80x80 and 100x100 mm{sup 2}). Radiographic and radiochromic films, diode and ionization chambers were used for data acquisition. MC dose calculations in a water phantom were used to validate the MC simulations through comparison with measured data. A gamma index criterion of 2%/2 mm was used to evaluate the accuracy of the MC calculations. MC calculated data show excellent agreement for field sizes from 18x18 to 100x100 mm{sup 2}. Gamma analysis shows that, on average, 95% and 100% of the data pass the gamma index criterion for these fields, respectively. For smaller fields (12x12 and 6x6 mm{sup 2}), only 92% of the data meet the criterion. Total scatter factors show good agreement (<2.6%) between MC calculated and measured data, except for the smaller fields (12x12 and 6x6 mm{sup 2}), which show an error of 4.7%. MC dose calculations are accurate and precise for clinical treatment planning down to a field size of 18x18 mm{sup 2}. Special care must be taken for smaller fields.
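The 2%/2 mm gamma test used above combines a dose-difference and a distance-to-agreement criterion. A simplified 1D, global-normalization sketch (clinical tools work in 2D/3D with interpolation, so this is illustrative only):

```python
import math

def gamma_pass_rate(measured, calculated, spacing_mm, dose_tol=0.02, dist_mm=2.0):
    """1D global gamma analysis (default 2%/2 mm) between two dose profiles.

    measured/calculated: equally spaced dose samples; dose_tol is a fraction
    of the maximum measured dose. Returns the fraction of points with
    gamma <= 1.
    """
    d_max = max(measured)
    passed = 0
    for i, dm in enumerate(measured):
        best = float("inf")
        for j, dc in enumerate(calculated):
            dd = (dc - dm) / (dose_tol * d_max)        # dose-difference term
            dr = (j - i) * spacing_mm / dist_mm        # distance term
            best = min(best, math.hypot(dd, dr))
        passed += best <= 1.0
    return passed / len(measured)
```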
PID Controller Settings Based on a Transient Response Experiment
ERIC Educational Resources Information Center
Silva, Carlos M.; Lito, Patricia F.; Neves, Patricia S.; Da Silva, Francisco A.
2008-01-01
An experimental work on controller tuning for chemical engineering undergraduate students is proposed using a small heat exchange unit. Based upon process reaction curves in open-loop configuration, the system gain and time constant are determined for a first-order model with time delay with excellent accuracy. Afterwards students calculate PID…
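Once the gain K, time constant tau, and dead time theta of the first-order-plus-dead-time model are read off the reaction curve, PID settings follow from a tuning rule. A sketch using the classic Ziegler-Nichols open-loop rule (the abstract is truncated, so the specific rule the students apply is an assumption here):

```python
def zn_pid_from_reaction_curve(K, tau, theta):
    """Ziegler-Nichols open-loop PID settings from a first-order-plus-dead-time
    model: process gain K, time constant tau, dead time theta.

    Returns (Kc, Ti, Td): controller gain, integral time, derivative time.
    """
    Kc = 1.2 * tau / (K * theta)
    Ti = 2.0 * theta
    Td = 0.5 * theta
    return Kc, Ti, Td
```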
47 CFR 80.385 - Frequencies for automated systems.
Code of Federal Regulations, 2014 CFR
2014-10-01
... a secondary, non-interference basis by amateur stations participating in digital message forwarding... protection will be provided to a site-based licensee's predicted 38 dBu signal level contour. The site-based licensee's predicted 38 dBu signal level contour shall be calculated using the F(50, 50) field strength...
47 CFR 80.385 - Frequencies for automated systems.
Code of Federal Regulations, 2012 CFR
2012-10-01
... a secondary, non-interference basis by amateur stations participating in digital message forwarding... protection will be provided to a site-based licensee's predicted 38 dBu signal level contour. The site-based licensee's predicted 38 dBu signal level contour shall be calculated using the F(50, 50) field strength...
47 CFR 80.385 - Frequencies for automated systems.
Code of Federal Regulations, 2013 CFR
2013-10-01
... a secondary, non-interference basis by amateur stations participating in digital message forwarding... protection will be provided to a site-based licensee's predicted 38 dBu signal level contour. The site-based licensee's predicted 38 dBu signal level contour shall be calculated using the F(50, 50) field strength...
Zhang, Xuezhu; Stortz, Greg; Sossi, Vesna; Thompson, Christopher J; Retière, Fabrice; Kozlowski, Piotr; Thiessen, Jonathan D; Goertzen, Andrew L
2013-12-07
In this study we present a method of 3D system response calculation for analytical computer simulation and statistical image reconstruction for a magnetic resonance imaging (MRI) compatible positron emission tomography (PET) insert system that uses a dual-layer offset (DLO) crystal design. The general analytical system response functions (SRFs) for detector geometric and inter-crystal penetration of coincident crystal pairs are derived first. We implemented a 3D ray-tracing algorithm with 4π sampling for calculating the SRFs of coincident pairs of individual DLO crystals. The determination of which detector blocks are intersected by a gamma ray is made by calculating the intersection of the ray with virtual cylinders with radii just inside the inner surface and just outside the outer edge of each crystal layer of the detector ring. For efficient ray-tracing computation, the detector block and ray to be traced are then rotated so that the crystals are aligned along the X-axis, facilitating calculation of ray/crystal boundary intersection points. This algorithm can be applied to any system geometry using either a single-layer (SL) or multi-layer array design, with or without offset crystals. For effective data organization, a direct line-of-response (LOR)-based indexed histogram-mode method is also presented in this work. SRF calculation is performed on-the-fly in both the forward and back projection procedures during each iteration of image reconstruction, with acceleration through use of eight-fold geometric symmetry and multi-threaded parallel computation. To validate the proposed methods, we performed a series of analytical and Monte Carlo computer simulations for different system geometries and detector designs. The full-widths-at-half-maximum of the numerical SRFs in both radial and tangential directions are calculated and compared for various system designs.
By inspecting the sinograms obtained for different detector geometries, it can be seen that the DLO crystal design can provide better sampling density than SL or dual-layer no-offset system designs with the same total crystal length. The results of the image reconstruction with SRFs modeling for phantom studies exhibit promising image recovery capability for crystal widths of 1.27-1.43 mm and top/bottom layer lengths of 4/6 mm. In conclusion, we have developed efficient algorithms for system response modeling of our proposed PET insert with DLO crystal arrays. This provides an effective method for both 3D computer simulation and quantitative image reconstruction, and will aid in the optimization of our PET insert system with various crystal designs.
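The virtual-cylinder test described above reduces, in the transaxial plane, to intersecting a ray with a circle, i.e., solving a quadratic in the ray parameter t. A self-contained sketch of that geometric step (a generic construction, not the authors' code):

```python
import math

def ray_cylinder_intersections(origin, direction, radius):
    """Parameters t where a ray meets an infinite cylinder about the z-axis.

    Only the x/y components of origin and direction matter; the z component
    is ignored, matching the transaxial-plane reduction. Returns the sorted
    t values, or an empty list if the ray misses the cylinder.
    """
    px, py = origin[0], origin[1]
    dx, dy = direction[0], direction[1]
    a = dx * dx + dy * dy
    b = 2.0 * (px * dx + py * dy)
    c = px * px + py * py - radius * radius
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return []                       # axial ray or no intersection
    root = math.sqrt(disc)
    return sorted([(-b - root) / (2.0 * a), (-b + root) / (2.0 * a)])
```

Comparing the t intervals for the inner and outer virtual cylinders of each crystal layer tells the tracer which detector blocks the gamma ray can traverse before any per-crystal boundary work is done.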
NASA Technical Reports Server (NTRS)
Bateman, Monte; Mach, Douglas; Blakeslee, Richard J.; Koshak, William
2018-01-01
As part of the calibration/validation (cal/val) effort for the Geostationary Lightning Mapper (GLM) on GOES-16, we need to assess instrument performance (detection efficiency and accuracy). One major effort is to calculate the detection efficiency of GLM by comparing to multiple ground-based systems. These comparisons will be done pair-wise between GLM and each other source. A complication in this process is that the ground-based systems sense different properties of the lightning signal than does GLM (e.g., RF vs. optical). Also, each system has a different time and space resolution and accuracy. Preliminary results indicate that GLM is performing at or above its specification.
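A pairwise detection-efficiency comparison of the kind described reduces to counting reference events that have a GLM match within a time/space window. A sketch of that matching step; the window sizes below are illustrative assumptions, not GOES-16 cal/val criteria:

```python
def detection_efficiency(glm_events, reference_events, dt_s=0.5, dxy_deg=0.2):
    """Fraction of reference (ground-network) events matched by a GLM event.

    Events are (time_s, lat_deg, lon_deg) tuples. A reference event is
    'detected' if any GLM event falls within the time and lat/lon windows.
    """
    matched = 0
    for t, lat, lon in reference_events:
        for tg, latg, long_ in glm_events:
            if (abs(tg - t) <= dt_s
                    and abs(latg - lat) <= dxy_deg
                    and abs(long_ - lon) <= dxy_deg):
                matched += 1
                break                   # count each reference event once
    return matched / len(reference_events) if reference_events else 0.0
```

Because each ground network senses a different part of the lightning signal (RF vs. optical) with its own time and space accuracy, the windows would in practice be tuned per network pair.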