Study on Collision of Ship Side Structure by Simplified Plastic Analysis Method
NASA Astrophysics Data System (ADS)
Sun, C. J.; Zhou, J. H.; Wu, W.
2017-10-01
During its lifetime, a ship may encounter collision or grounding and sustain permanent damage from these types of accidents. Crashworthiness assessment has mainly relied on two approaches: simplified plastic analysis and numerical simulation. A simplified plastic analysis method is presented in this paper. Numerical simulations using the non-linear finite-element software LS-DYNA were conducted to validate the method. The results show that the simplified plastic analysis agrees well with the finite-element simulation, which indicates that the simplified plastic analysis method can quickly and accurately estimate the crashworthiness of the side structure during a collision and can serve as a reliable risk assessment method.
A Fast Method for Embattling Optimization of Ground-Based Radar Surveillance Network
NASA Astrophysics Data System (ADS)
Jiang, H.; Cheng, H.; Zhang, Y.; Liu, J.
A growing number of space activities have created an orbital debris environment that poses increasing impact risks to existing space systems and human space flight. For the safety of in-orbit spacecraft, many observation facilities are needed to catalog space objects, especially in low Earth orbit. Surveillance of low-Earth-orbit objects relies mainly on ground-based radar; because of the limited capability of existing radar facilities, a large number of ground-based radars must be built in the next few years to meet current space surveillance demands. How to optimize the deployment of a ground-based radar surveillance network is therefore a problem that needs to be solved. The traditional approach simulates the detection performance of all candidate stations against cataloged data, compares the various simulation results combinatorially, and then selects the best result as the station layout scheme. Each simulation is time consuming and the combinatorial analysis is computationally complex; as the number of stations increases, the complexity of the optimization problem grows exponentially and it can no longer be solved with the traditional method, for which no better alternative has been available. In this paper, the target detection procedure is simplified. First, the space coverage of a ground-based radar is simplified and a projection model of the radar coverage at different orbital altitudes is built; then a simplified model of objects crossing the radar coverage is established according to the characteristics of orbital motion. After these two simplifications, the computational cost of target detection is greatly reduced, and simulation results confirm the correctness of the simplified models. In addition, the detection areas of the ground-based radar network can be easily computed with the simplified model, and the deployment of the surveillance network is then optimized with an artificial intelligence algorithm, which further reduces the computational burden. Compared with the traditional method, the proposed method greatly improves computational efficiency.
A novel implementation of homodyne time interval analysis method for primary vibration calibration
NASA Astrophysics Data System (ADS)
Sun, Qiao; Zhou, Ling; Cai, Chenguang; Hu, Hongbo
2011-12-01
In this paper, the shortcomings of the conventional homodyne time interval analysis (TIA) method, and their causes, are described with respect to its software algorithm and hardware implementation, and a simplified TIA method is proposed with the help of virtual instrument technology. Equipped with an ordinary Michelson interferometer and a dual-channel synchronous data acquisition card, a primary vibration calibration system using the simplified method can accurately measure the complex sensitivity of accelerometers, meeting the uncertainty requirements laid down in the pertinent ISO standard. The validity and accuracy of the simplified TIA method are verified by simulation and comparison experiments, and its performance is analyzed. The simplified method is recommended for national metrology institutes of developing countries and for industrial primary vibration calibration laboratories because of its simple algorithm and modest hardware requirements.
Image segmentation algorithm based on improved PCNN
NASA Astrophysics Data System (ADS)
Chen, Hong; Wu, Chengdong; Yu, Xiaosheng; Wu, Jiahui
2017-11-01
A modified simplified Pulse Coupled Neural Network (PCNN) model is proposed in this article based on the simplified PCNN. Several refinements enrich this model, such as imposing restrictions on the inputs and improving the linking inputs and the internal activity of the PCNN. A self-adaptive method for setting the linking coefficient and the threshold decay time constant is also proposed. Finally, an image segmentation algorithm based on the proposed simplified PCNN model and PSO is applied to five test images. Experimental results demonstrate that this image segmentation algorithm outperforms the SPCNN and OTSU methods.
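As a rough illustration of how a simplified PCNN segments an image, the sketch below iterates the usual SPCNN quantities (feeding input, linking input, internal activity, dynamic threshold) and records the iteration at which each pixel first fires. The parameter values, the 3x3 linking kernel, and the stopping rule are illustrative assumptions, not the self-adaptive settings proposed in the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def spcnn_segment(img, beta=0.5, v_e=20.0, alpha_e=0.3, v_l=1.0, iters=30):
    """Minimal simplified-PCNN sketch: returns, for each pixel, the iteration
    at which it first fired (a rough segmentation label map)."""
    s = img.astype(float) / (img.max() + 1e-12)    # normalized stimulus (feeding input)
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])           # assumed linking weights
    y = np.zeros_like(s)                           # pulse (firing) map
    e = np.ones_like(s)                            # dynamic threshold
    first_fire = np.zeros(s.shape, dtype=int)
    for n in range(1, iters + 1):
        l = v_l * convolve2d(y, kernel, mode="same")   # linking input from neighbours
        u = s * (1.0 + beta * l)                       # internal activity
        y = (u > e).astype(float)                      # pulse output
        e = np.exp(-alpha_e) * e + v_e * y             # threshold decay / refractory boost
        newly = (first_fire == 0) & (y > 0)
        first_fire[newly] = n
    return first_fire

# usage sketch: labels = spcnn_segment(np.random.rand(64, 64) * 255)
```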
NASA Astrophysics Data System (ADS)
Zolfaghari, M. R.; Ajamy, A.; Asgarian, B.
2015-12-01
The primary goal of seismic reassessment procedures in oil platform codes is to determine the reliability of a platform under extreme earthquake loading. Therefore, in this paper, a simplified method is proposed to assess the seismic performance of existing jacket-type offshore platforms (JTOP) over the range from near-elastic response to global collapse. The simplified method exploits the good agreement between the static pushover (SPO) curve and the summarized interaction incremental dynamic analysis (CI-IDA) curve of the platform. Although the CI-IDA method offers better understanding and better modelling of the phenomenon, it is a time-consuming and challenging task. To overcome these challenges, the simplified procedure, a fast and accurate approach, is introduced based on SPO analysis. An existing JTOP in the Persian Gulf is then presented to illustrate the procedure, and finally a comparison is made between the simplified method and the CI-IDA results. The simplified method is informative and practical for current engineering purposes; it predicts seismic performance from the elastic range to global dynamic instability with reasonable accuracy and little computational effort.
NASA Technical Reports Server (NTRS)
Xue, W.-M.; Atluri, S. N.
1985-01-01
In this paper, all possible forms of mixed-hybrid finite element methods that are based on multi-field variational principles are examined as to the conditions for existence, stability, and uniqueness of their solutions. The reasons as to why certain 'simplified hybrid-mixed methods' in general, and the so-called 'simplified hybrid-displacement method' in particular (based on the so-called simplified variational principles), become unstable, are discussed. A comprehensive discussion of the 'discrete' BB-conditions, and the rank conditions, of the matrices arising in mixed-hybrid methods, is given. Some recent studies aimed at the assurance of such rank conditions, and the related problem of the avoidance of spurious kinematic modes, are presented.
Simplified power control method for cellular mobile communication
NASA Astrophysics Data System (ADS)
Leung, Y. W.
1994-04-01
The centralized power control (CPC) method measures the gain of the communication links between every mobile and every base station in the cochannel cells and determines optimal transmitter power to maximize the minimum carrier-to-interference ratio. The authors propose a simplified power control method which has nearly the same performance as the CPC method but which involves much smaller measurement overhead.
NASA Technical Reports Server (NTRS)
Baer-Riedhart, J. L.
1982-01-01
A simplified gross thrust calculation method was evaluated for its ability to predict the gross thrust of a modified J85-21 engine. The method used tailpipe pressure data and ambient pressure data to predict gross thrust, with an algorithm based on a one-dimensional analysis of the flow in the afterburner and nozzle. The test results showed that the method was notably accurate over the engine operating envelope when compared against the altitude-facility measured thrust. A summary of these results, the simplified gross thrust method and its requirements, and the test techniques used are discussed in this paper.
Fault Diagnostics for Turbo-Shaft Engine Sensors Based on a Simplified On-Board Model
Lu, Feng; Huang, Jinquan; Xing, Yaodong
2012-01-01
Combining a simplified on-board turbo-shaft engine model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for key turbo-shaft engine sensors is based mainly on dual redundancy, which cannot always resolve a fault because two channels alone lack a basis for judgment, while adding hardware redundancy increases structural complexity and weight. The simplified on-board model provides an analytical third channel against which the dual-channel measurements are compared. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, and is built up via a dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failures, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves anomaly detection, sensor fault diagnosis and redundancy recovery. Finally, experiments on this method are carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient. PMID:23112645
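A minimal sketch of the triplex comparison idea described above: the two hardware channels are checked against the model-based analytical channel, and the inconsistent source is isolated so a healthy value can be recovered. The tolerance value, signal names, and exact decision rules are illustrative assumptions, not the paper's FDD logic.

```python
def diagnose_sensor(ch_a, ch_b, model_est, tol):
    """Triplex-style vote: the source that disagrees with the other two is flagged.
    Returns the suspected fault and a value usable for redundancy recovery."""
    ab = abs(ch_a - ch_b)
    am = abs(ch_a - model_est)
    bm = abs(ch_b - model_est)
    if max(ab, am, bm) <= tol:
        return "healthy", 0.5 * (ch_a + ch_b)
    if am > tol and bm > tol and ab <= tol:
        return "model drift", 0.5 * (ch_a + ch_b)
    if am > tol and ab > tol and bm <= tol:
        return "channel A fault", ch_b
    if bm > tol and ab > tol and am <= tol:
        return "channel B fault", ch_a
    return "ambiguous", model_est

# usage sketch: gas generator speed readings (percent), model estimate, 1% tolerance
print(diagnose_sensor(95.2, 98.9, 95.0, tol=1.0))
```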
Simple design of slanted grating with simplified modal method.
Li, Shubin; Zhou, Changhe; Cao, Hongchao; Wu, Jun
2014-02-15
A simplified modal method (SMM) is presented that offers a clear physical picture of subwavelength slanted gratings. The diffraction characteristics of the slanted grating under the Littrow configuration are revealed by the SMM through an equivalent rectangular grating, in good agreement with rigorous coupled-wave analysis. Based on this equivalence, we obtain an effective analytic solution that simplifies the design and optimization of slanted gratings. It offers a new approach to slanted-grating design; e.g., a 1×2 beam splitter can be easily designed. This method should be helpful for designing various new slanted-grating devices.
Optical chirp z-transform processor with a simplified architecture.
Ngo, Nam Quoc
2014-12-29
Using a simplified chirp z-transform (CZT) algorithm based on the discrete-time convolution method, this paper presents the synthesis of a simplified architecture for a reconfigurable optical chirp z-transform (OCZT) processor based on silica-based planar lightwave circuit (PLC) technology. In the simplified architecture of the reconfigurable OCZT, the required number of optical components is small and there are no waveguide crossings, which makes fabrication easy. The design of a novel type of optical discrete Fourier transform (ODFT) processor, as a special case of the synthesized OCZT, is then presented to demonstrate its effectiveness. The designed ODFT can potentially be used as an optical demultiplexer at the receiver of an optical fiber orthogonal frequency division multiplexing (OFDM) transmission system.
Failure mode and effects analysis: a comparison of two common risk prioritisation methods.
McElroy, Lisa M; Khorzad, Rebeca; Nannicelli, Anna P; Brown, Alexandra R; Ladner, Daniela P; Holl, Jane L
2016-05-01
Failure mode and effects analysis (FMEA) is a method of risk assessment increasingly used in healthcare over the past decade. The traditional method, however, can require substantial time and training resources. The goal of this study is to compare a simplified scoring method with the traditional scoring method to determine the degree of congruence in identifying high-risk failures. An FMEA of the operating room (OR) to intensive care unit (ICU) handoff was conducted. Failures were scored and ranked using both the traditional risk priority number (RPN) and criticality-based method, and a simplified method, which designates failures as 'high', 'medium' or 'low' risk. The degree of congruence was determined by first identifying those failures determined to be critical by the traditional method (RPN≥300), and then calculating the per cent congruence with those failures designated critical by the simplified methods (high risk). In total, 79 process failures among 37 individual steps in the OR to ICU handoff process were identified. The traditional method yielded Criticality Indices (CIs) ranging from 18 to 72 and RPNs ranging from 80 to 504. The simplified method ranked 11 failures as 'low risk', 30 as medium risk and 22 as high risk. The traditional method yielded 24 failures with an RPN ≥300, of which 22 were identified as high risk by the simplified method (92% agreement). The top 20% of CI (≥60) included 12 failures, of which six were designated as high risk by the simplified method (50% agreement). These results suggest that the simplified method of scoring and ranking failures identified by an FMEA can be a useful tool for healthcare organisations with limited access to FMEA expertise. However, the simplified method does not result in the same degree of discrimination in the ranking of failures offered by the traditional method. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
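To make the two scoring schemes concrete, the sketch below computes a traditional RPN (severity x occurrence x detectability) and a criticality index, then assigns an illustrative high/medium/low band. Only the RPN >= 300 cut-off comes from the abstract; the medium/low thresholds, the criticality formula (severity x occurrence), and the example failures are assumptions for illustration.

```python
def traditional_scores(severity, occurrence, detectability):
    """Traditional FMEA scores: RPN and a criticality index (assumed S x O)."""
    rpn = severity * occurrence * detectability
    criticality = severity * occurrence
    return rpn, criticality

def simplified_band(rpn, high=300, medium=150):
    """Illustrative simplified banding; only the 300 cut-off comes from the study."""
    if rpn >= high:
        return "high"
    if rpn >= medium:
        return "medium"
    return "low"

# hypothetical OR-to-ICU handoff failures: (name, severity, occurrence, detectability)
failures = [("wrong ventilator settings", 9, 7, 8), ("missing handoff form", 6, 5, 4)]
for name, s, o, d in failures:
    rpn, ci = traditional_scores(s, o, d)
    print(name, "RPN:", rpn, "CI:", ci, "band:", simplified_band(rpn))
```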
NASA Astrophysics Data System (ADS)
Şahin, Rıdvan; Liu, Peide
2017-07-01
Simplified neutrosophic sets (SNSs) are an appropriate tool for expressing the incompleteness, indeterminacy and uncertainty of the evaluation objects in a decision-making process. In this study, we define the concept of a possibility SNS, which includes two types of information: the neutrosophic performance provided by the evaluation objects and its possibility degree, expressed as a value between zero and one. Then, because the existing neutrosophic aggregation models for SNSs cannot effectively fuse these two kinds of information, we propose two novel neutrosophic aggregation operators that account for possibility, named the possibility-induced simplified neutrosophic weighted arithmetic averaging operator and the possibility-induced simplified neutrosophic weighted geometric averaging operator, and discuss their properties. Moreover, we develop a method based on the proposed aggregation operators for solving multi-criteria group decision-making problems with possibility simplified neutrosophic information, in which the weights of decision-makers and decision criteria are calculated using an entropy measure. Finally, a practical example is used to show the practicality and effectiveness of the proposed method.
Accuracy of a simplified method for shielded gamma-ray skyshine sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bassett, M.S.; Shultis, J.K.
1989-11-01
Rigorous transport or Monte Carlo methods for estimating far-field gamma-ray skyshine doses are generally computationally intensive. Consequently, several simplified techniques, such as point-kernel methods and methods based on beam response functions, have been proposed. For unshielded skyshine sources, these simplified methods have been shown to be quite accurate by comparison with benchmark problems and benchmark experimental results. For shielded sources, the simplified methods typically use exponential attenuation and photon buildup factors to describe the effect of the shield. However, the energy and directional redistribution of photons scattered in the shield is usually ignored, i.e., scattered photons are assumed to emerge from the shield with the same energy and direction as the uncollided photons. The accuracy of this shield treatment is largely unknown because of the paucity of benchmark results for shielded sources. In this paper, the validity of such a shield treatment is assessed by comparison with a composite method, which accurately calculates the energy and angular distribution of photons penetrating the shield.
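The shield treatment described above amounts to scaling the unshielded skyshine dose by an exponential attenuation factor and a buildup factor. The sketch below shows that arithmetic for a single monoenergetic source; the attenuation coefficient and the Berger-form buildup coefficients are illustrative assumptions, not the benchmark data of the paper.

```python
import math

def shielded_skyshine_dose(dose_unshielded, mu, thickness, a=1.0, b=0.05):
    """Simplified shield treatment: exponential attenuation times a buildup factor.
    Buildup uses an assumed Berger form B = 1 + a*mu*t*exp(b*mu*t)."""
    mfp = mu * thickness                          # shield thickness in mean free paths
    buildup = 1.0 + a * mfp * math.exp(b * mfp)
    return dose_unshielded * buildup * math.exp(-mfp)

# usage sketch: photons through 10 cm of concrete (mu ~ 0.14 /cm is an assumed value)
print(shielded_skyshine_dose(dose_unshielded=2.0e-3, mu=0.14, thickness=10.0))
```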
Jinghao Li; John F. Hunt; Shaoqin Gong; Zhiyong Cai
2016-01-01
This paper presents a simplified analytical model and a balanced design approach for modeling lightweight wood-based structural panels in bending. Because many design parameters must be supplied as input to a finite element analysis (FEA) model during the preliminary design and optimization process, an equivalent method was developed to analyze the mechanical...
NASA Astrophysics Data System (ADS)
Koval, Viacheslav
The seismic design provisions of the CSA-S6 Canadian Highway Bridge Design Code and the AASHTO LRFD Seismic Bridge Design Specifications have been developed primarily on the basis of historical earthquake events along the west coast of North America. For the design of seismic isolation systems, these codes include simplified analysis and design methods. The appropriateness and range of application of these methods are investigated in this thesis through extensive parametric nonlinear time history analyses. It was found that the existing design guidelines need adjustment to better capture the expected nonlinear response of isolated bridges. For isolated bridges located in eastern North America, new damping coefficients are proposed. The applicability limits of the code-based simplified methods have been redefined to ensure that the modified method leads to conservative results and that a wider range of seismically isolated bridges can be covered by this method. The possibility of further improving current simplified code methods was also examined. By transforming the quantity of allocated energy into a displacement contribution, an idealized analytical solution is proposed as a new simplified design method. This method realistically reflects the effects of ground-motion and system design parameters, including the effects of a drifted oscillation center. The proposed method is therefore more appropriate than current simplified methods and is applicable to isolation systems exhibiting a wider range of properties. A multi-level-hazard performance matrix has been adopted by different seismic provisions worldwide and will be incorporated into the new edition of the Canadian CSA-S6-14 Bridge Design Code. However, the combined effect and optimal use of isolation and supplemental damping devices in bridges have not yet been fully exploited to achieve enhanced performance under different levels of seismic hazard. A novel Dual-Level Seismic Protection (DLSP) concept is proposed and developed in this thesis which permits optimum seismic performance to be achieved with combined isolation and supplemental damping devices in bridges. This concept is shown to be an attractive design approach both for the upgrade of existing seismically deficient bridges and for the design of new isolated bridges.
Study on a pattern classification method of soil quality based on simplified learning sample dataset
Zhang, Jiahua; Liu, S.; Hu, Y.; Tian, Y.
2011-01-01
Based on the massive amount of soil information involved in current soil quality grade evaluation, this paper constructs an intelligent classification approach for soil quality grade using classical sampling techniques and a nominal (disordered) multi-class logistic regression model. As a case study, the learning sample capacity was determined under a given confidence level and estimation accuracy, and a c-means algorithm was used to automatically extract a simplified learning sample dataset from the cultivated soil quality grade evaluation database of the study area, Longchuan County in Guangdong Province; a disordered logistic classifier model was then built and the calculation and analysis steps of intelligent soil quality grade classification were given. The results indicate that the soil quality grade can be effectively learned and predicted from the extracted simplified dataset using this method, which changes the traditional approach to soil quality grade evaluation. © 2011 IEEE.
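A minimal sketch of the workflow described above, assuming scikit-learn is available: cluster the full database with a c-means-style algorithm (plain k-means here), keep the sample nearest each centroid as the simplified learning set, and fit a multinomial logistic regression classifier on that subset. The attribute set, cluster count, and synthetic data are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def simplified_training_set(X, y, n_clusters=50):
    """Pick the sample closest to each k-means centroid as the simplified dataset."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    idx = []
    for c, center in enumerate(km.cluster_centers_):
        members = np.where(km.labels_ == c)[0]
        idx.append(members[np.argmin(np.linalg.norm(X[members] - center, axis=1))])
    return X[idx], y[idx]

# usage sketch with synthetic soil attributes and grades
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))                       # e.g. pH, organic matter, N, P, K, texture
y = rng.integers(1, 5, size=2000)                    # soil quality grades 1-4
Xs, ys = simplified_training_set(X, y)
clf = LogisticRegression(max_iter=1000).fit(Xs, ys)  # multinomial (disordered) classifier
print(clf.predict(X[:5]))
```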
Simplified method for numerical modeling of fiber lasers.
Shtyrina, O V; Yarutkina, I A; Fedoruk, M P
2014-12-29
A simplified numerical approach to the modeling of dissipative dispersion-managed fiber lasers is examined. We present a new numerical iteration algorithm for finding the periodic solutions of the system of nonlinear ordinary differential equations describing the intra-cavity dynamics of the dissipative soliton characteristics in dispersion-managed fiber lasers. We demonstrate that results obtained using the simplified model are in good agreement with full numerical modeling based on the corresponding partial differential equations.
Highly simplified lateral flow-based nucleic acid sample preparation and passive fluid flow control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cary, Robert E.
2015-12-08
Highly simplified lateral flow chromatographic nucleic acid sample preparation methods, devices, and integrated systems are provided for the efficient concentration of trace samples and the removal of nucleic acid amplification inhibitors. Methods for capturing and reducing inhibitors of nucleic acid amplification reactions, such as humic acid, using polyvinylpyrrolidone treated elements of the lateral flow device are also provided. Further provided are passive fluid control methods and systems for use in lateral flow assays.
Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses.
Ye, Jun
2015-03-01
In pattern recognition and medical diagnosis, the similarity measure is an important mathematical tool. To overcome some disadvantages of existing cosine similarity measures of simplified neutrosophic sets (SNSs) in vector space, this paper proposed improved cosine similarity measures of SNSs based on the cosine function, including single-valued neutrosophic cosine similarity measures and interval neutrosophic cosine similarity measures. Then, weighted cosine similarity measures of SNSs were introduced by taking into account the importance of each element. Further, a medical diagnosis method using the improved cosine similarity measures was proposed to solve medical diagnosis problems with simplified neutrosophic information. The improved cosine similarity measures between SNSs were introduced based on the cosine function. We then compared the improved cosine similarity measures of SNSs with existing cosine similarity measures of SNSs by numerical examples to demonstrate their effectiveness and rationality for overcoming some shortcomings of existing cosine similarity measures of SNSs in some cases. In the medical diagnosis method, a proper diagnosis can be found from the cosine similarity measures between the symptoms and the considered diseases, which are represented by SNSs. The medical diagnosis method based on the improved cosine similarity measures was then applied to two medical diagnosis problems to show the applications and effectiveness of the proposed method. Both numerical examples demonstrated that the improved cosine similarity measures of SNSs based on the cosine function can overcome the shortcomings of the existing cosine similarity measures between two vectors in some cases. In the two medical diagnosis problems, the diagnoses obtained using the various similarity measures of SNSs were identical, demonstrating the effectiveness and rationality of the diagnosis method proposed in this paper. The improved cosine measures of SNSs based on the cosine function can overcome some drawbacks of existing cosine similarity measures of SNSs in vector space, and the associated diagnosis method is therefore well suited to handling medical diagnosis problems with simplified neutrosophic information. Copyright © 2014 Elsevier B.V. All rights reserved.
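As a rough sketch of a cosine-function-based similarity between single-valued simplified neutrosophic sets: each element carries a truth, indeterminacy, and falsity degree, and the similarity is an average of cosines of scaled component differences. The exact functional form used in the paper may differ; the max-difference form below is an assumption for illustration.

```python
import math

def cosine_similarity_snS(A, B):
    """A, B: lists of (T, I, F) triples over the same universe of elements.
    Assumed form: average of cos(pi/2 * max component difference)."""
    assert len(A) == len(B)
    total = 0.0
    for (ta, ia, fa), (tb, ib, fb) in zip(A, B):
        d = max(abs(ta - tb), abs(ia - ib), abs(fa - fb))
        total += math.cos(math.pi * d / 2.0)
    return total / len(A)

# usage sketch: a patient's symptom profile vs. a disease profile (hypothetical values)
patient = [(0.8, 0.2, 0.1), (0.6, 0.3, 0.3), (0.2, 0.1, 0.8)]
disease = [(0.7, 0.2, 0.2), (0.6, 0.2, 0.4), (0.1, 0.2, 0.8)]
print(cosine_similarity_snS(patient, disease))
```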
Simplified neutrosophic sets and their applications in multi-criteria group decision-making problems
NASA Astrophysics Data System (ADS)
Peng, Juan-juan; Wang, Jian-qiang; Wang, Jing; Zhang, Hong-yu; Chen, Xiao-hong
2016-07-01
As a variation of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete and inconsistent information that exists in the real world. Simplified neutrosophic sets (SNSs) have been proposed for the main purpose of addressing issues with a set of specific numbers. However, there are certain problems regarding the existing operations of SNSs, as well as their aggregation operators and the comparison methods. Therefore, this paper defines the novel operations of simplified neutrosophic numbers (SNNs) and develops a comparison method based on the related research of intuitionistic fuzzy numbers. On the basis of these operations and the comparison method, some SNN aggregation operators are proposed. Additionally, an approach for multi-criteria group decision-making (MCGDM) problems is explored by applying these aggregation operators. Finally, an example to illustrate the applicability of the proposed method is provided and a comparison with some other methods is made.
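One commonly used simplified neutrosophic weighted arithmetic averaging operator (for single-valued SNNs) aggregates truth degrees with a probabilistic sum and indeterminacy/falsity degrees with weighted products. The sketch below implements that common form as an illustration; whether it matches the operators proposed in this particular paper is an assumption.

```python
def snn_weighted_average(snns, weights):
    """snns: list of (T, I, F) single-valued simplified neutrosophic numbers.
    weights: non-negative weights summing to 1.
    Common weighted arithmetic averaging form:
    T' = 1 - prod((1 - T_j)^w_j), I' = prod(I_j^w_j), F' = prod(F_j^w_j)."""
    t_term, i_term, f_term = 1.0, 1.0, 1.0
    for (t, i, f), w in zip(snns, weights):
        t_term *= (1.0 - t) ** w
        i_term *= i ** w
        f_term *= f ** w
    return 1.0 - t_term, i_term, f_term

# usage sketch: three criteria evaluations with weights 0.5, 0.3, 0.2
print(snn_weighted_average([(0.7, 0.2, 0.1), (0.5, 0.4, 0.3), (0.9, 0.1, 0.1)],
                           [0.5, 0.3, 0.2]))
```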
An improved loopless mounting method for cryocrystallography
NASA Astrophysics Data System (ADS)
Qi, Jian-Xun; Jiang, Fan
2010-01-01
Based on a recent loopless mounting method, a simplified loopless and bufferless crystal mounting method is developed for macromolecular crystallography. The simplified crystal mounting system is composed of the following components: a home-made glass capillary, a brass seat for holding the glass capillary, a flow regulator, and a vacuum pump for evacuation. Compared with the currently prevalent loop mounting method, this simplified method has almost the same mounting procedure and is thus compatible with current automated crystal mounting systems. The advantages of this method include a higher signal-to-noise ratio, more accurate measurement, more rapid flash cooling, and less x-ray absorption and thus less radiation damage to the crystal. The method can be extended to flash-freezing a crystal with or without soaking it in a lower concentration of cryoprotectant, so it may be the best option for data collection in the absence of a suitable cryoprotectant. It is therefore suggested that this mounting method be further improved and extensively applied in cryocrystallographic experiments.
77 FR 54482 - Allocation of Costs Under the Simplified Methods
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-05
... Allocation of Costs Under the Simplified Methods AGENCY: Internal Revenue Service (IRS), Treasury. ACTION... certain costs to the property and that allocate costs under the simplified production method or the simplified resale method. The proposed regulations provide rules for the treatment of negative additional...
Natural-Annotation-based Unsupervised Construction of Korean-Chinese Domain Dictionary
NASA Astrophysics Data System (ADS)
Liu, Wuying; Wang, Lin
2018-03-01
Large-scale bilingual parallel resources are significant for statistical learning and deep learning in natural language processing. This paper addresses the automatic construction of a Korean-Chinese domain dictionary and presents a novel unsupervised construction method based on natural annotation in raw corpora. We first extract all Korean-Chinese word pairs from Korean texts according to natural annotations, then transform the traditional Chinese characters into simplified ones, and finally distill a bilingual domain dictionary after retrieving the simplified Chinese words in an external Chinese domain dictionary. The experimental results show that our method can automatically and efficiently build multiple Korean-Chinese domain dictionaries.
Capiau, Sara; Wilk, Leah S; De Kesel, Pieter M M; Aalders, Maurice C G; Stove, Christophe P
2018-02-06
The hematocrit (Hct) effect is one of the most important hurdles currently preventing more widespread implementation of quantitative dried blood spot (DBS) analysis in a routine context. Indeed, the Hct may affect both the accuracy of DBS methods and the interpretation of DBS-based results. We previously developed a method to determine the Hct of a DBS based on its hemoglobin content using noncontact diffuse reflectance spectroscopy. Despite the ease with which the analysis can be performed (i.e., mere scanning of the DBS) and the good results that were obtained, the method did require a complicated algorithm to derive the total hemoglobin content from the DBS's reflectance spectrum. As the total hemoglobin was calculated as the sum of oxyhemoglobin, methemoglobin, and hemichrome, the three main hemoglobin derivatives formed in DBS upon aging, the reflectance spectrum needed to be unmixed to determine the quantity of each of these derivatives. We have now simplified the method by using the reflectance at only a single wavelength, located at a quasi-isosbestic point in the reflectance curve. At this wavelength, assuming 1-to-1 stoichiometry of the aging reaction, the reflectance is insensitive to hemoglobin degradation and scales only with the total amount of hemoglobin and, hence, the Hct. This simplified method was successfully validated. At each quality control level, as well as at the limits of quantitation (i.e., 0.20 and 0.67), bias and intra- and interday imprecision were within 10%. Method reproducibility was excellent based on incurred sample reanalysis and surpassed the reproducibility of the original method. Furthermore, the influence of the volume spotted, the measurement location within the spot, and storage time and temperature were evaluated, showing no relevant impact of these parameters. Application to 233 patient samples revealed a good correlation between the Hct determined on whole blood and the predicted Hct determined on venous DBS. The bias obtained with Bland and Altman analysis was -0.015 and the limits of agreement were -0.061 and 0.031, indicating that the simplified, noncontact Hct prediction method even outperforms the original method. In addition, using caffeine as a model compound, it was demonstrated that this simplified Hct prediction method can effectively be used to implement a Hct-dependent correction factor for DBS-based results to alleviate the Hct bias.
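A minimal sketch of the single-wavelength idea: calibrate the reflectance at the quasi-isosbestic wavelength against reference hematocrit values on calibrator DBS, then invert that calibration for unknowns. The linear calibration form and all numbers are illustrative assumptions; the paper's actual calibration model may differ.

```python
import numpy as np

# calibrators: reflectance at the quasi-isosbestic wavelength vs. reference Hct (assumed data)
reflectance_cal = np.array([0.62, 0.55, 0.48, 0.41, 0.35])
hct_cal = np.array([0.20, 0.30, 0.40, 0.50, 0.60])

# assumed linear calibration: Hct = a * reflectance + b
a, b = np.polyfit(reflectance_cal, hct_cal, 1)

def predict_hct(reflectance):
    """Predict the hematocrit of a DBS from its single-wavelength reflectance."""
    return a * reflectance + b

print(round(predict_hct(0.45), 3))   # e.g. an unknown patient DBS
```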
A fluid model simulation of a simplified plasma limiter based on spectral-element time-domain method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qian, Cheng; Ding, Dazhi, E-mail: dzding@njust.edu.cn; Fan, Zhenhong
2015-03-15
A simplified plasma limiter prototype is proposed, and a fluid model coupled with Maxwell's equations is established to describe the operating mechanism of the plasma limiter. A three-dimensional (3-D) simplified sandwich-structure plasma limiter model is analyzed with the spectral-element time-domain (SETD) method. The field breakdown threshold of air and argon at different frequencies is predicted and compared with experimental data, and there is good agreement between them for gas microwave breakdown discharge problems. Numerical results demonstrate that a two-layer plasma limiter (plasma-slab-plasma) has better protective characteristics than a one-layer plasma limiter (slab-plasma-slab) with the same length of gas chamber.
Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki
2016-01-01
Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were −32.336 and −33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range. PMID:28144120
A new method to identify the foot of continental slope based on an integrated profile analysis
NASA Astrophysics Data System (ADS)
Wu, Ziyin; Li, Jiabiao; Li, Shoujun; Shang, Jihong; Jin, Xiaobin
2017-06-01
A new method is proposed to identify automatically the foot of the continental slope (FOS) based on integrated analysis of topographic profiles. Using the extremum points of the second derivative and the Douglas-Peucker algorithm, it simplifies the topographic profiles and then calculates the second derivative of both the original profiles and the D-P profiles. Seven steps are proposed to simplify the original profiles. Multiple identification criteria are proposed to determine the FOS points, including the gradient, water depth and second-derivative values of data points, as well as the concavity and convexity, continuity and segmentation of the topographic profiles. The method comprehensively and automatically analyzes the topographic profiles and their derived slopes, second derivatives and D-P profiles, and on that basis it can analyze the essential properties of every data point in a profile. Furthermore, it is proposed to remove the concave points of the curve and to implement six FOS judgment criteria.
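The Douglas-Peucker step referred to above is standard polyline simplification: recursively keep the point farthest from the chord between the endpoints whenever that distance exceeds a tolerance. A minimal sketch follows; the tolerance value and the synthetic bathymetric profile are illustrative assumptions.

```python
import numpy as np

def douglas_peucker(points, tol):
    """points: (N, 2) array of (distance, depth) samples along a profile."""
    points = np.asarray(points, dtype=float)
    start, end = points[0], points[-1]
    chord = end - start
    norm = np.hypot(chord[0], chord[1]) + 1e-12
    # perpendicular distance of every point from the chord start-end
    dist = np.abs(chord[0] * (points[:, 1] - start[1])
                  - chord[1] * (points[:, 0] - start[0])) / norm
    i = int(np.argmax(dist))
    if dist[i] > tol:
        left = douglas_peucker(points[: i + 1], tol)
        right = douglas_peucker(points[i:], tol)
        return np.vstack([left[:-1], right])   # drop the duplicated split point
    return np.vstack([start, end])

# usage sketch: simplify a synthetic slope profile with a 50 m tolerance
profile = np.column_stack([np.linspace(0, 100_000, 500),
                           -3000 + 1500 * np.tanh(np.linspace(-3, 3, 500))])
print(douglas_peucker(profile, tol=50.0).shape)
```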
Shielding analyses of an AB-BNCT facility using Monte Carlo simulations and simplified methods
NASA Astrophysics Data System (ADS)
Lai, Bo-Lun; Sheu, Rong-Jiun
2017-09-01
Accurate Monte Carlo simulations and simplified methods were used to investigate the shielding requirements of a hypothetical accelerator-based boron neutron capture therapy (AB-BNCT) facility that included an accelerator room and a patient treatment room. The epithermal neutron beam for BNCT was generated by coupling a neutron production target with a specially designed beam shaping assembly (BSA), which was embedded in the partition wall between the two rooms. Neutrons were produced from a beryllium target bombarded by 1-mA, 30-MeV protons. MCNP6-generated surface sources around all the exterior surfaces of the BSA were established to facilitate repeated Monte Carlo shielding calculations. In addition, three simplified models based on a point-source line-of-sight approximation were developed and their predictions were compared with the reference Monte Carlo results. The comparison determined which model gave better dose estimates, forming the basis of future design activities for the first AB-BNCT facility in Taiwan.
A simplified dynamic model of the T700 turboshaft engine
NASA Technical Reports Server (NTRS)
Duyar, Ahmet; Gu, Zhen; Litt, Jonathan S.
1992-01-01
A simplified open-loop dynamic model of the T700 turboshaft engine, valid within the normal operating range of the engine, is developed. This model is obtained by linking linear state space models obtained at different engine operating points. Each linear model is developed from a detailed nonlinear engine simulation using a multivariable system identification and realization method. The simplified model may be used with a model-based real time diagnostic scheme for fault detection and diagnostics, as well as for open loop engine dynamics studies and closed loop control analysis utilizing a user generated control law.
Simplified paraboloid phase model-based phase tracker for demodulation of a single complex fringe.
He, A; Deepan, B; Quan, C
2017-09-01
A regularized phase tracker (RPT) is an effective method for demodulating single closed-fringe patterns. However, lengthy calculation time, the need for a specially designed scanning strategy, and sign-ambiguity problems caused by noise and saddle points reduce its effectiveness, especially for demodulating large and complex fringe patterns. In this paper, a simplified paraboloid phase model-based regularized phase tracker (SPRPT) is proposed. In SPRPT, the first and second phase derivatives are pre-determined by the density-direction-combined method and a discrete higher-order demodulation algorithm, respectively. Hence, the cost function is effectively simplified, which significantly reduces the computation time. Moreover, the pre-determined phase derivatives improve the robustness of the demodulation of closed, complex fringe patterns. Thus, no specially designed scanning strategy is needed, and the method is robust against the sign-ambiguity problem. The paraboloid phase model also ensures better accuracy and robustness against noise. Both simulated and experimental fringe patterns (obtained using electronic speckle pattern interferometry) are used to validate the proposed method, and a comparison with existing RPT methods is carried out. The simulation results show that the proposed method achieves the highest accuracy with less computational time. The experimental results prove the robustness and accuracy of the proposed method for demodulating noisy fringe patterns and its feasibility for static and dynamic applications.
Lorenzo, C F; Hartley, T T; Malti, R
2013-05-13
A new and simplified method for the solution of linear constant coefficient fractional differential equations of any commensurate order is presented. The solutions are based on the R-function and on specialized Laplace transform pairs derived from the principal fractional meta-trigonometric functions. The new method simplifies the solution of such fractional differential equations and presents the solutions in the form of real functions as opposed to fractional complex exponential functions, and thus is directly applicable to real-world physics.
Masood, Athar; Stark, Ken D; Salem, Norman
2005-10-01
Conventional sample preparation for fatty acid analysis is a complicated, multiple-step process, and gas chromatography (GC) analysis alone can require >1 h per sample to resolve fatty acid methyl esters (FAMEs). Fast GC analysis was adapted to human plasma FAME analysis using a modified polyethylene glycol column with smaller internal diameters, thinner stationary phase films, increased carrier gas linear velocity, and faster temperature ramping. Our results indicated that fast GC analyses were comparable to conventional GC in peak resolution. A conventional transesterification method based on Lepage and Roy was simplified to a one-step method with the elimination of the neutralization and centrifugation steps. A robotics-amenable method was also developed, with lower methylation temperatures and in an open-tube format using multiple reagent additions. The simplified methods produced results that were quantitatively similar and with similar coefficients of variation as compared with the original Lepage and Roy method. The present streamlined methodology is suitable for the direct fatty acid analysis of human plasma, is appropriate for research studies, and will facilitate large clinical trials and make possible population studies.
Simplifying HL7 Version 3 messages.
Worden, Robert; Scott, Philip
2011-01-01
HL7 Version 3 offers a semantically robust method for healthcare interoperability but has been criticized as overly complex to implement. This paper reviews initiatives to simplify HL7 Version 3 messaging and presents a novel approach based on semantic mapping. Based on user-defined definitions, precise transforms between simple and full messages are automatically generated. Systems can be interfaced with the simple messages and achieve interoperability with full Version 3 messages through the transforms. This reduces the costs of HL7 interfacing and will encourage better uptake of HL7 Version 3 and CDA.
Rodríguez-Sánchez, Belén; Marín, Mercedes; Sánchez-Carrillo, Carlos; Cercenado, Emilia; Ruiz, Adrián; Rodríguez-Créixems, Marta; Bouza, Emilio
2014-05-01
This study evaluates the capability of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) for the identification of difficult-to-identify microorganisms. A total of 150 bacterial isolates inconclusively identified with conventional phenotypic tests were further assessed by 16S rRNA sequencing and by MALDI-TOF MS following two methods: a) a simplified formic acid-based on-plate extraction and b) a tube-based extraction step. Using the simplified method, 29 isolates could not be identified. For the remaining 121 isolates (80.7%), we obtained a reliable identification by MALDI-TOF: for 103 isolates, the identification by 16S rRNA sequencing and MALDI-TOF coincided at the species level (68.7% of the 150 analyzed isolates and 85.1% of the isolates with a MALDI-TOF result), and for 18 isolates, the identification by both methods coincided at the genus level (12% of the total and 14.9% of the isolates with a MALDI-TOF result). No discordant results were observed. Performing the tube-based extraction step allowed species-level identification of 6 of the 29 isolates unidentified by the simplified method. In summary, MALDI-TOF can be used for the rapid identification of many bacterial isolates inconclusively identified by conventional methods. Copyright © 2014 Elsevier Inc. All rights reserved.
Kohno, R; Hotta, K; Nishioka, S; Matsubara, K; Tansho, R; Suzuki, T
2011-11-21
We implemented the simplified Monte Carlo (SMC) method on a graphics processing unit (GPU) architecture under the compute unified device architecture (CUDA) platform developed by NVIDIA. The GPU-based SMC was clinically applied to four patients with head and neck, lung, or prostate cancer. The results were compared to those obtained by a traditional CPU-based SMC with respect to computation time and discrepancy. In the CPU- and GPU-based SMC calculations, the estimated mean statistical errors of the calculated doses in the planning target volume region were within 0.5% rms. The dose distributions calculated by the GPU- and CPU-based SMCs were similar within statistical errors. The GPU-based SMC was 12.30-16.00 times faster than the CPU-based SMC, and the computation time per beam arrangement for the clinical cases ranged from 9 to 67 s. The results demonstrate the successful application of the GPU-based SMC to clinical proton treatment planning.
NASA Technical Reports Server (NTRS)
Chen, D. W.; Sengupta, S. K.; Welch, R. M.
1989-01-01
This paper compares the results of cloud-field classification derived from two simplified vector approaches, the Sum and Difference Histogram (SADH) and the Gray Level Difference Vector (GLDV), with the results produced by the Gray Level Cooccurrence Matrix (GLCM) approach described by Welch et al. (1988). It is shown that the SADH method produces accuracies equivalent to those obtained using the GLCM method, while the GLDV method fails to resolve error clusters. Compared to the GLCM method, the SADH method leads to a 31 percent saving in run time and a 50 percent saving in storage requirements, while the GLDV approach leads to a 40 percent saving in run time and an 87 percent saving in storage requirements.
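A minimal sketch of sum-and-difference histogram (SADH) texture features for a gray-level image: for a chosen pixel displacement, histogram the sums and the differences of co-occurring gray levels, then derive a few scalar features. The displacement, quantization level, and feature set below are illustrative assumptions rather than the exact configuration of the study.

```python
import numpy as np

def sadh_features(img, dx=1, dy=0, levels=16):
    """Sum-and-difference histogram texture features (mean, contrast, energy, entropy)."""
    q = (img.astype(float) / (img.max() + 1e-12) * (levels - 1)).astype(int)
    a = q[: q.shape[0] - dy, : q.shape[1] - dx]            # reference pixels
    b = q[dy:, dx:]                                        # displaced co-occurring pixels
    s = (a + b).ravel()
    d = (a - b).ravel()
    ps = np.bincount(s, minlength=2 * levels - 1) / s.size             # sum histogram
    pd = np.bincount(d + levels - 1, minlength=2 * levels - 1) / d.size  # difference histogram
    i_s = np.arange(ps.size)
    j_d = np.arange(pd.size) - (levels - 1)
    return {
        "mean": 0.5 * np.sum(i_s * ps),
        "contrast": np.sum(j_d ** 2 * pd),
        "energy": np.sum(ps ** 2) * np.sum(pd ** 2),
        "entropy": -np.sum(ps[ps > 0] * np.log(ps[ps > 0]))
                   - np.sum(pd[pd > 0] * np.log(pd[pd > 0])),
    }

# usage sketch on a random image patch standing in for a cloud-field scene
print(sadh_features(np.random.rand(64, 64)))
```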
A simplified real time method to forecast semi-enclosed basins storm surge
NASA Astrophysics Data System (ADS)
Pasquali, D.; Di Risio, M.; De Girolamo, P.
2015-11-01
Semi-enclosed basins are often prone to storm surge events. Indeed, their meteorological exposure, the presence of a large continental shelf and their shape can lead to strong sea-level set-up. A real-time system aimed at forecasting storm surge can be of great help in protecting human activities (i.e., forecasting flooding due to storm surge events), managing ports and safeguarding coastal safety. This paper illustrates a simple method able to forecast storm surge events in semi-enclosed basins in real time. The method is based on a mixed approach in which the results obtained by means of a simplified physics-based model with low computational cost are corrected by means of statistical techniques. The proposed method is applied to a point of interest located in the northern part of the Adriatic Sea. The comparison of forecasted levels against observed values shows the satisfactory reliability of the forecasts.
NASA Astrophysics Data System (ADS)
Yang, L. M.; Shu, C.; Yang, W. M.; Wang, Y.; Wu, J.
2017-08-01
In this work, an immersed boundary-simplified sphere function-based gas kinetic scheme (SGKS) is presented for the simulation of 3D incompressible flows with curved and moving boundaries. At first, the SGKS [Yang et al., "A three-dimensional explicit sphere function-based gas-kinetic flux solver for simulation of inviscid compressible flows," J. Comput. Phys. 295, 322 (2015) and Yang et al., "Development of discrete gas kinetic scheme for simulation of 3D viscous incompressible and compressible flows," J. Comput. Phys. 319, 129 (2016)], which is often applied for the simulation of compressible flows, is simplified to improve the computational efficiency for the simulation of incompressible flows. In the original SGKS, the integral domain along the spherical surface for computing conservative variables and numerical fluxes is usually not symmetric at the cell interface. This leads the expression of numerical fluxes at the cell interface to be relatively complicated. For incompressible flows, the sphere at the cell interface can be approximately considered to be symmetric as shown in this work. Besides that, the energy equation is usually not needed for the simulation of incompressible isothermal flows. With all these simplifications, the simple and explicit formulations for the conservative variables and numerical fluxes at the cell interface can be obtained. Second, to effectively implement the no-slip boundary condition for fluid flow problems with complex geometry as well as moving boundary, the implicit boundary condition-enforced immersed boundary method [Wu and Shu, "Implicit velocity correction-based immersed boundary-lattice Boltzmann method and its applications," J. Comput. Phys. 228, 1963 (2009)] is introduced into the simplified SGKS. That is, the flow field is solved by the simplified SGKS without considering the presence of an immersed body and the no-slip boundary condition is implemented by the immersed boundary method. The accuracy and efficiency of the present scheme are validated by simulating the decaying vortex flow, flow past a stationary and rotating sphere, flow past a stationary torus, and flows over dragonfly flight.
Godoy, Antonio; Siegel, Sharon C
2015-12-01
Mandibular implant-retained overdentures have become the standard of care for patients with mandibular complete edentulism. As part of the treatment, the mandibular implant-retained overdenture may require a metal mesh framework to be incorporated to strengthen the denture and avoid fracture of the prosthesis. Integrating the metal mesh framework as part of the acrylic record base and wax occlusion rim before the jaw relation procedure will avoid the distortion of the record base and will minimize the chances of processing errors. A simplified method to incorporate the mesh into the record base and occlusion rim is presented in this technique article. © 2015 by the American College of Prosthodontists.
NASA Astrophysics Data System (ADS)
Zhang, Hua-qing; Sun, Xi-ping; Wang, Yuan-zhan; Yin, Ji-long; Wang, Chao-yang
2015-10-01
There has been a growing trend toward the development of offshore deep-water ports in China. For such deep-sea projects, all-vertical-piled wharves are suitable structures; they are generally located in open waters and greatly affected by wave action. Currently, no systematic studies or simplified numerical methods are available for deriving the dynamic characteristics and dynamic responses of all-vertical-piled wharves under cyclic wave loads. In this article, we compare the dynamic characteristics of an all-vertical-piled wharf with those of a traditional inshore high-piled wharf through numerical analysis; our research reveals that the vibration period of an all-vertical-piled wharf under cyclic loading is longer than that of an inshore high-piled wharf and is much closer to the period of the loading wave. Therefore, dynamic calculation and analysis should be conducted when designing an all-vertical-piled wharf. We establish a dynamic finite element model to examine the dynamic response of an all-vertical-piled wharf under cyclic wave loads and compare the results with those under an equivalent static wave load; the comparison indicates that dynamic amplification of the structure is evident when the dynamic wave load is taken into account. Furthermore, a simplified dynamic numerical method for calculating the dynamic response of an all-vertical-piled wharf is established based on the P-Y curve. Compared with finite element analysis, the simplified method is more convenient to use and applicable to large structural deformation while considering soil non-linearity. We confirmed that the simplified method has acceptable accuracy and can be used in engineering applications.
A simplified method for assessing particle deposition rate in aircraft cabins
NASA Astrophysics Data System (ADS)
You, Ruoyu; Zhao, Bin
2013-03-01
Particle deposition in aircraft cabins is important for passengers' exposure to particulate matter as well as to airborne infectious diseases. In this study, a simplified method is proposed for initial, quick assessment of the particle deposition rate in aircraft cabins. The method consists of: collecting the inclined angle, area, characteristic length, and freestream air velocity for each surface in a cabin; estimating the friction velocity from the characteristic length and freestream air velocity; modeling the particle deposition velocity using the empirical equation we developed previously; and then calculating the particle deposition rate. The particle deposition rates for the fully occupied, half-occupied, quarter-occupied and empty first-class cabin of the MD-82 commercial airliner were estimated. The results show that occupancy did not significantly influence the particle deposition rate of the cabin. Furthermore, the simplified human model can be used in the assessment with acceptable accuracy. Finally, the comparison results show that the particle deposition rates of aircraft cabins and indoor environments are quite similar.
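A minimal sketch of the bookkeeping described above: sum deposition velocity times area over all cabin surfaces and divide by the cabin volume to get a deposition loss-rate coefficient. The friction-velocity estimate and the deposition-velocity correlation below are crude placeholders, not the empirical equation developed by the authors; the surface data are hypothetical.

```python
def deposition_rate(surfaces, cabin_volume_m3):
    """surfaces: list of dicts with area (m2) and freestream air speed (m/s).
    Returns a deposition loss-rate coefficient in 1/h."""
    total = 0.0
    for s in surfaces:
        # placeholder friction-velocity estimate: a few percent of the freestream speed
        u_star = 0.05 * s["freestream_m_s"]
        # placeholder deposition-velocity correlation (illustrative only)
        v_d = 0.01 * u_star                       # m/s
        total += v_d * s["area_m2"]
    return total / cabin_volume_m3 * 3600.0       # 1/s -> 1/h

cabin = [
    {"area_m2": 12.0, "freestream_m_s": 0.20},    # ceiling
    {"area_m2": 10.0, "freestream_m_s": 0.15},    # floor
    {"area_m2": 18.0, "freestream_m_s": 0.10},    # seats and walls
]
print(deposition_rate(cabin, cabin_volume_m3=25.0))
```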
The Uncertainty of Mass Discharge Measurements Using Pumping Methods Under Simplified Conditions
Mass discharge measurements at contaminated sites have been used to assist with site management decisions, and can be divided into two broad categories: point-scale measurement techniques and pumping methods. Pumping methods can be sub-divided based on the pumping procedures use...
NASA Technical Reports Server (NTRS)
Gracey, William
1948-01-01
A simplified compound-pendulum method for the experimental determination of the moments of inertia of airplanes about the x and y axes is described. The method is developed as a modification of the standard pendulum method described previously in NACA Report No. 467. A brief review of the older method is included to form a basis for discussion of the simplified method. (author)
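The compound-pendulum principle behind such methods is standard: swing the airplane about a known pivot, time the oscillation period, recover the moment of inertia about the pivot, and transfer it to the center-of-gravity axis. A minimal sketch of that arithmetic follows; the full NACA procedure includes tare and additional-mass corrections not shown here, and the numbers are illustrative.

```python
import math

G = 9.80665  # m/s^2

def moment_of_inertia_cg(mass_kg, period_s, pivot_to_cg_m):
    """Compound pendulum: I_pivot = m g d T^2 / (4 pi^2), then parallel-axis transfer."""
    i_pivot = mass_kg * G * pivot_to_cg_m * period_s ** 2 / (4.0 * math.pi ** 2)
    return i_pivot - mass_kg * pivot_to_cg_m ** 2   # moment of inertia about the c.g. axis

# usage sketch: 1200 kg airplane, 2.1 s measured period, c.g. 1.5 m below the swing axis
print(moment_of_inertia_cg(1200.0, 2.1, 1.5))
```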
Larrain, Felipe A.; Fuentes-Hernandez, Canek; Chou, Wen-Fang; ...
2018-01-01
A solution-based method to electrically p-dope organic semiconductors enabling the fabrication of organic solar cells with simplified geometry is implemented with acetonitrile as an alternative to nitromethane.
Simplified Model to Predict Deflection and Natural Frequency of Steel Pole Structures
NASA Astrophysics Data System (ADS)
Balagopal, R.; Prasad Rao, N.; Rokade, R. P.
2018-04-01
Steel pole structures are a suitable alternative to latticed transmission towers because of the difficulty of finding land for the new rights-of-way needed to install new lattice towers. Steel poles have a tapered cross section and are generally used for communication, power transmission and lighting purposes. Determining the deflection of a steel pole is important for assessing its functional requirements: excessive deflection may cause signal attenuation and short-circuiting problems in communication/transmission poles. In this paper, a simplified method is proposed to determine both the primary and secondary deflection based on the dummy unit load/moment method. The deflection predicted by the proposed method is validated against full-scale experimental investigations conducted on 8 m and 30 m high lighting masts and on 132 kV and 400 kV transmission poles, and is found to be in close agreement with the measurements. Determining the natural frequency is an important criterion for examining dynamic sensitivity. A simplified semi-empirical method using the static deflection from the proposed method is formulated to determine the natural frequency. The natural frequency predicted by the proposed method is validated against FE analysis results, and the predictions are further validated against experimental results available in the literature.
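A minimal sketch of the two calculations implied above, under stated assumptions: tip deflection of a tapered cantilever pole from the unit-load (dummy-load) integral, evaluated numerically, and a Rayleigh-type natural-frequency estimate from a static deflection. The linear taper, constant wall thickness, the tip-load case, and the specific frequency formula are illustrative assumptions, not the paper's semi-empirical expressions.

```python
import math
import numpy as np

E = 2.0e11  # Pa, steel

def tip_deflection(height, d_base, d_top, t_wall, tip_load, n=200):
    """Unit-load method for a tip load P: delta = integral of M(x)*m(x)/(E*I(x)) dx,
    where M = P*(H - x) and the dummy unit load gives m = (H - x)."""
    x = np.linspace(0.0, height, n)
    d = d_base + (d_top - d_base) * x / height                 # linearly tapered outer diameter
    i_x = math.pi / 64.0 * (d ** 4 - (d - 2 * t_wall) ** 4)    # hollow-section second moment
    integrand = tip_load * (height - x) ** 2 / (E * i_x)
    return float(np.sum((integrand[:-1] + integrand[1:]) * np.diff(x)) / 2.0)  # trapezoid rule

def natural_frequency(static_deflection):
    """Rayleigh-type estimate from a static deflection: f = (1/(2*pi)) * sqrt(g / delta)."""
    return math.sqrt(9.81 / static_deflection) / (2.0 * math.pi)

delta = tip_deflection(height=30.0, d_base=0.60, d_top=0.20, t_wall=0.006, tip_load=2000.0)
print(delta, natural_frequency(delta))
```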
Yang, Qingxia; Xu, Jun; Cao, Binggang; Li, Xiuqing
2017-01-01
Identification of internal parameters of lithium-ion batteries is a useful tool to evaluate battery performance, and requires an effective model and algorithm. Based on the least square genetic algorithm, a simplified fractional order impedance model for lithium-ion batteries and the corresponding parameter identification method were developed. The simplified model was derived from the analysis of the electrochemical impedance spectroscopy data and the transient response of lithium-ion batteries with different states of charge. In order to identify the parameters of the model, an equivalent tracking system was established, and the method of least square genetic algorithm was applied using the time-domain test data. Experiments and computer simulations were carried out to verify the effectiveness and accuracy of the proposed model and parameter identification method. Compared with a second-order resistance-capacitance (2-RC) model identified by a recursive least squares method, the proposed approach showed smaller voltage-tracking fluctuations. The maximum battery voltage tracing error for the proposed model and parameter identification method is within 0.5%; this demonstrates the good performance of the model and the efficiency of the least square genetic algorithm to estimate the internal parameters of lithium-ion batteries. PMID:28212405
Simplified web-based decision support method for traffic management and work zone analysis.
DOT National Transportation Integrated Search
2017-01-01
Traffic congestion mitigation is one of the key challenges that transportation planners and operations engineers face when planning for construction and maintenance activities. There is a wide variety of approaches and methods that address work zone ...
Simplified web-based decision support method for traffic management and work zone analysis.
DOT National Transportation Integrated Search
2015-06-01
Traffic congestion mitigation is one of the key challenges that transportation planners and operations engineers face when : planning for construction and maintenance activities. There is a wide variety of approaches and methods that address work : z...
Modeling Infrared Signal Reflections to Characterize Indoor Multipath Propagation
De-La-Llana-Calvo, Álvaro; Lázaro-Galilea, José Luis; Gardel-Vicente, Alfredo; Rodríguez-Navarro, David; Bravo-Muñoz, Ignacio; Tsirigotis, Georgios; Iglesias-Miguel, Juan
2017-01-01
In this paper, we propose a model to characterize Infrared (IR) signal reflections on any kind of surface material, together with a simplified procedure to compute the model parameters. The model works within the framework of Local Positioning Systems (LPS) based on IR signals (IR-LPS) to evaluate the behavior of transmitted signal Multipaths (MP), which are the main cause of error in IR-LPS, and makes several contributions to mitigation methods. Current methods are based on physics, optics, geometry and empirical methods, but these do not meet our requirements because of the need to apply several different restrictions and employ complex tools. We propose a simplified model based on only two reflection components, together with a method for determining the model parameters based on 12 empirical measurements that are easily performed in the real environment where the IR-LPS is being applied. Our experimental results show that the model provides a comprehensive solution to the real behavior of IR MP, yielding small errors when comparing real and modeled data (the mean error ranges from 1% to 4% depending on the environment surface materials). Other state-of-the-art methods yielded mean errors ranging from 15% to 40% in test measurements. PMID:28406436
Simplified model of mean double step (MDS) in human body movement
NASA Astrophysics Data System (ADS)
Dusza, Jacek J.; Wawrzyniak, Zbigniew M.; Mugarra González, C. Fernando
In this paper we present a simplified and useful model of human body movement based on the full gait cycle description, called the Mean Double Step (MDS). It enables the parameterization and simplification of human movement. Furthermore, it allows a description of the gait cycle by providing standardized estimators to transform the gait cycle into a periodical movement process. Moreover, the method of simplifying the MDS model and its compression are demonstrated. The simplification is achieved by reducing the number of bars of the spectrum and/or by reducing the number of samples describing the MDS, both in terms of reducing the computational burden and the resources needed for data storage. Our MDS model, which is applicable to the gait cycle method for examining patients, is non-invasive and provides the additional advantage of featuring a functional characterization of the relative or absolute movement of any part of the body.
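One way to picture the spectral simplification described above is a plain Fourier truncation of one periodic gait cycle: keep only the first few harmonics and reconstruct. The Python sketch below illustrates that idea on a synthetic signal; it is not the authors' implementation, and the signal, harmonic count, and sampling are made up.

```python
import numpy as np

def compress_mds(signal, n_harmonics):
    """Keep only the DC term and the first n_harmonics of a periodic gait signal."""
    spectrum = np.fft.rfft(signal)
    truncated = np.zeros_like(spectrum)
    truncated[:n_harmonics + 1] = spectrum[:n_harmonics + 1]
    return np.fft.irfft(truncated, n=len(signal))

# Toy "mean double step": one gait period sampled at 200 points.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200, endpoint=False)
mds = 0.8 * np.sin(2 * np.pi * t) + 0.3 * np.sin(4 * np.pi * t + 0.5) + 0.05 * rng.standard_normal(t.size)

approx = compress_mds(mds, n_harmonics=4)
rms_error = np.sqrt(np.mean((mds - approx) ** 2))
print(f"RMS error with 4 harmonics: {rms_error:.3f}")
```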
Shaukat, Shahzad; Angez, Mehar; Alam, Muhammad Masroor; Jebbink, Maarten F; Deijs, Martin; Canuti, Marta; Sharif, Salmaan; de Vries, Michel; Khurshid, Adnan; Mahmood, Tariq; van der Hoek, Lia; Zaidi, Syed Sohail Zahoor
2014-08-12
The use of sequence-independent methods combined with next generation sequencing for identification purposes in clinical samples appears promising, and exciting results have been achieved in understanding unexplained infections. One sequence-independent method, Virus Discovery based on cDNA Amplified Fragment Length Polymorphism (VIDISCA), is capable of identifying viruses that would have remained unidentified in standard diagnostics or cell cultures. VIDISCA is normally combined with next generation sequencing; however, we set up a simplified VIDISCA which can be used in case next generation sequencing is not possible. Stool samples of 10 patients with unexplained acute flaccid paralysis showing cytopathic effect in rhabdomyosarcoma cells and/or mouse cells were used to test the efficiency of this method. To further characterize the viruses, VIDISCA-positive samples were amplified and sequenced with gene-specific primers. Simplified VIDISCA detected seven viruses (70%), and the proportion of eukaryotic viral sequences from each sample ranged from 8.3 to 45.8%. Human enterovirus EV-B97, EV-B100, echovirus-9 and echovirus-21, human parechovirus type-3, human astrovirus (probably a type-3/5 recombinant), and tetnovirus-1 were identified. Phylogenetic analysis based on the VP1 region demonstrated that the human enteroviruses are more divergent isolates circulating in the community. Our data support that a simplified VIDISCA protocol can efficiently identify unrecognized viruses grown in cell culture at low cost and in limited time, without the need for advanced technical expertise. Complex data interpretation is also avoided; thus, the method can be used as a powerful diagnostic tool in resource-limited settings. Redesigning the routine diagnostics might lead to additional detection of previously undiagnosed viruses in clinical samples of patients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haustein, P.E.; Brenner, D.S.; Casten, R.F.
1987-12-10
A new semi-empirical method, based on the use of the P-factor (P = N_p N_n / (N_p + N_n)), is shown to simplify significantly the systematics of atomic masses. Its use is illustrated for actinide nuclei, where complicated patterns of mass systematics seen in traditional plots versus Z, N, or isospin are consolidated and transformed into linear ones extending over long isotopic and isotonic sequences. The linearization of the systematics by this procedure provides a simple basis for mass prediction. For many unmeasured nuclei beyond the known mass surface, the P-factor method operates by interpolation among data for known nuclei rather than by extrapolation, as is common in other mass models.
Oguchi, Masahiro; Fuse, Masaaki
2015-02-03
Product lifespan estimates are important information for understanding progress toward sustainable consumption and for estimating the stocks and end-of-life flows of products. Previous publications have reported actual product lifespans; however, quantitative data are still limited for many countries and years. This study presents a regional and longitudinal estimation of the lifespan distribution of consumer durables, taking passenger cars as an example, and proposes a simplified method for estimating product lifespan distribution. We estimated lifespan distribution parameters for 17 countries based on the age profile of in-use cars. Sensitivity analysis demonstrated that the shape parameter of the lifespan distribution can be replaced by a constant value for all the countries and years. This enabled a simplified estimation that does not require detailed data on the age profile. Applying the simplified method, we estimated the trend in average lifespans of passenger cars from 2000 to 2009 for 20 countries. Average lifespan differed greatly between countries (9-23 years) and was increasing in many countries. This suggests consumer behavior differs greatly among countries and has changed over time, even in developed countries. The results suggest that inappropriate assumptions of average lifespan may cause significant inaccuracy in estimating the stocks and end-of-life flows of products.
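As a sketch of the simplified estimation, the snippet below fits only the scale of a lifespan distribution to survival fractions derived from an age profile, with the shape held at a fixed value. The Weibull form, the fixed shape of 2.5, and the synthetic age profile are assumptions for illustration; the abstract does not state which distribution family or shape value the authors use.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gamma

FIXED_SHAPE = 2.5  # assumed constant shape parameter (illustrative value)

def weibull_survival(age, scale, shape=FIXED_SHAPE):
    """Fraction of products still in use at a given age."""
    return np.exp(-(age / scale) ** shape)

def fit_scale(ages, observed_survival):
    """Least-squares fit of the scale parameter with the shape held fixed."""
    loss = lambda s: np.sum((weibull_survival(ages, s) - observed_survival) ** 2)
    return minimize_scalar(loss, bounds=(1.0, 50.0), method="bounded").x

# Synthetic "age profile" of in-use cars, expressed as survival fractions.
rng = np.random.default_rng(0)
ages = np.arange(1, 21, dtype=float)
observed = np.clip(weibull_survival(ages, 14.0) + 0.02 * rng.standard_normal(ages.size), 0.0, 1.0)

scale_hat = fit_scale(ages, observed)
mean_lifespan = scale_hat * gamma(1.0 + 1.0 / FIXED_SHAPE)  # Weibull mean = scale * Gamma(1 + 1/shape)
print(f"fitted scale: {scale_hat:.1f} yr, implied average lifespan: {mean_lifespan:.1f} yr")
```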
NASA Technical Reports Server (NTRS)
Bennett, Floyd V.; Yntema, Robert T.
1959-01-01
Several approximate procedures for calculating the bending-moment response of flexible airplanes to continuous isotropic turbulence are presented and evaluated. The modal methods (the mode-displacement and force-summation methods) and a matrix method (segmented-wing method) are considered. These approximate procedures are applied to a simplified airplane for which an exact solution to the equation of motion can be obtained. The simplified airplane consists of a uniform beam with a concentrated fuselage mass at the center. Airplane motions are limited to vertical rigid-body translation and symmetrical wing bending deflections. Output power spectra of wing bending moments based on the exact transfer-function solutions are used as a basis for the evaluation of the approximate methods. It is shown that the force-summation and the matrix methods give satisfactory accuracy and that the mode-displacement method gives unsatisfactory accuracy.
A 4DCT imaging-based breathing lung model with relative hysteresis
Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long
2016-01-01
To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. PMID:28260811
A 4DCT imaging-based breathing lung model with relative hysteresis
NASA Astrophysics Data System (ADS)
Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long
2016-12-01
To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry.
Simplifier: a web tool to eliminate redundant NGS contigs.
Ramos, Rommel Thiago Jucá; Carneiro, Adriana Ribeiro; Azevedo, Vasco; Schneider, Maria Paula; Barh, Debmalya; Silva, Artur
2012-01-01
Modern genomic sequencing technologies produce a large amount of data with reduced cost per base; however, this data consists of short reads. This reduction in the size of the reads, compared to those obtained with previous methodologies, presents new challenges, including a need for efficient algorithms for the assembly of genomes from short reads and for resolving repetitions. Additionally, after ab initio assembly, curation of the hundreds or thousands of contigs generated by assemblers demands considerable time and computational resources. We developed Simplifier, a stand-alone software that selectively eliminates redundant sequences from the collection of contigs generated by ab initio assembly of genomes. Application of Simplifier to data generated by assembly of the genome of Corynebacterium pseudotuberculosis strain 258 reduced the number of contigs generated by ab initio methods from 8,004 to 5,272, a reduction of 34.14%; in addition, N50 increased from 1 kb to 1.5 kb. Processing the contigs of Escherichia coli DH10B with Simplifier reduced the mate-paired library by 17.47% and the fragment library by 23.91%. Simplifier removed redundant sequences from datasets produced by assemblers, thereby reducing the effort required for finalization of genome assembly in tests with data from prokaryotic organisms. Simplifier is available at http://www.genoma.ufpa.br/rramos/softwares/simplifier.xhtml. It requires Sun JDK 6 or higher.
Estimating surface temperature in forced convection nucleate boiling - A simplified method
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; Papell, S. S.
1977-01-01
A simplified expression to estimate surface temperatures in forced convection boiling was developed using a liquid nitrogen data base. Using the principle of corresponding states and the Kutateladze relation for maximum pool boiling heat flux, the expression was normalized for use with other fluids. The expression was also applied to neon and water. For the neon data base, the agreement was acceptable with the exclusion of one set suspected to be in the transition boiling regime. For the water data base at reduced pressures greater than 0.05 the agreement is generally good. At lower reduced pressures, the water data scatter and the calculated temperature becomes a function of flow rate.
NASA Astrophysics Data System (ADS)
Vimmr, Jan; Bublík, Ondřej; Prausová, Helena; Hála, Jindřich; Pešek, Luděk
2018-06-01
This paper deals with a numerical simulation of compressible viscous fluid flow around three flat plates with prescribed harmonic motion. This arrangement represents a simplified blade cascade with forward wave motion. The aim of the simulation is to determine the aerodynamic forces acting on the flat plates. The mathematical model describing this problem is formed by the Favre-averaged system of Navier-Stokes equations in arbitrary Lagrangian-Eulerian (ALE) formulation, completed by the one-equation Spalart-Allmaras turbulence model. The simulation was performed using in-house CFD software based on the discontinuous Galerkin method, which offers a high order of accuracy.
Zhou, Yulong; Gao, Min; Fang, Dan; Zhang, Baoquan
2016-01-01
In an effort to implement fast and effective tank segmentation from infrared images in complex backgrounds, the threshold of the maximum between-class variance method (i.e., the Otsu method) is analyzed and the working mechanism of the Otsu method is discussed. Subsequently, a fast and effective method for tank segmentation from infrared images in complex backgrounds is proposed based on the Otsu method via constraining the complex background of the image. Considering the complexity of the background, the original image is first divided into three classes of target region, middle background, and lower background by maximizing the sum of their between-class variances. Then, an unsupervised background constraint is applied based on the within-class variance of the target region, so that the original image can be simplified. Finally, the Otsu method is applied to the simplified image for threshold selection. Experimental results on a variety of tank infrared images (880 × 480 pixels) in complex backgrounds demonstrate that the proposed method achieves better segmentation performance and is even comparable with manual segmentation. In addition, its average running time is only 9.22 ms, implying that the new method performs well in real-time processing.
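The first stage of the procedure, splitting the histogram into three classes by maximizing the sum of between-class variances, can be sketched directly. The Python example below does an exhaustive search over threshold pairs on a synthetic 8-bit image; the background-constraint step between the two stages is specific to the paper and is not reproduced here, and the toy image is of course an assumption.

```python
import numpy as np

def _between_class_variance(hist, thresholds):
    """Sum of between-class variances for the classes split at the given thresholds."""
    p = hist / hist.sum()
    levels = np.arange(hist.size)
    mu_total = np.sum(p * levels)
    edges = [0, *thresholds, hist.size]
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = np.sum(p[lo:hi] * levels[lo:hi]) / w
            var += w * (mu - mu_total) ** 2
    return var

def otsu_two_thresholds(image, n_bins=256):
    """Three-class Otsu: exhaustive O(L^2) search for the best threshold pair (t1 < t2)."""
    hist, _ = np.histogram(image, bins=n_bins, range=(0, n_bins))
    best, best_pair = -1.0, (0, 0)
    for t1 in range(1, n_bins - 1):
        for t2 in range(t1 + 1, n_bins):
            v = _between_class_variance(hist, (t1, t2))
            if v > best:
                best, best_pair = v, (t1, t2)
    return best_pair

# Toy 8-bit "infrared image": bright target on a two-level background.
rng = np.random.default_rng(1)
img = rng.normal(60, 10, (64, 64))
img[20:40, 20:40] = rng.normal(200, 10, (20, 20))   # target
img[:, :10] = rng.normal(120, 10, (64, 10))         # mid-level background
img = np.clip(img, 0, 255).astype(np.uint8)

t1, t2 = otsu_two_thresholds(img)
print(f"three-class thresholds: {t1}, {t2}; target mask pixels: {(img > t2).sum()}")
```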
Simplified welding distortion analysis for fillet welding using composite shell elements
NASA Astrophysics Data System (ADS)
Kim, Mingyu; Kang, Minseok; Chung, Hyun
2015-09-01
This paper presents a simplified welding distortion analysis method to predict the welding deformation of both the plate and the stiffener in fillet welds. Currently, methods based on equivalent thermal strain, such as Strain as Direct Boundary (SDB), are widely used because they effectively predict welding deformation. For fillet welding, however, those methods cannot represent the deformation of both members at once, since the temperature degree of freedom is shared at the intersection nodes of both members. In this paper, we propose a new approach to simulate the deformation of both members. The method can simulate fillet weld deformations by employing composite shell elements and using different thermal expansion coefficients through the thickness direction, with a fixed temperature at the intersection nodes. For verification, results from experiments, 3D thermo-elastic-plastic analysis, the SDB method, and the proposed method are compared. Compared with the experimental results, the proposed method can effectively predict the welding deformation of fillet welds.
ERIC Educational Resources Information Center
Levesque, Luc
2012-01-01
A method is proposed to simplify analytical computations of the transfer function for electrical circuit filters, which are made from repetitive identical stages. A method based on the construction of Pascal's triangle is introduced and then a general solution from two initial conditions is provided for the repetitive identical stage. The present…
Holmes, Robert R.; Dunn, Chad J.
1996-01-01
A simplified method to estimate total-streambed scour was developed for application to bridges in the State of Illinois. Scour envelope curves, developed as empirical relations between calculated total scour and bridge-site characteristics for 213 State highway bridges in Illinois, are used in the method to estimate the 500-year flood scour. These 213 bridges, geographically distributed throughout Illinois, had been previously evaluated for streambed scour with the application of conventional hydraulic and scour-analysis methods recommended by the Federal Highway Administration. The bridge characteristics necessary for application of the simplified bridge scour-analysis method can be obtained from an office review of bridge plans, examination of topographic maps, and a reconnaissance-level site inspection. The estimates computed with the simplified method generally resulted in a larger value of 500-year flood total-streambed scour than the more detailed conventional method. The simplified method was successfully verified with a separate data set of 106 State highway bridges, geographically distributed throughout Illinois, and 15 county highway bridges.
Abel's Theorem Simplifies Reduction of Order
ERIC Educational Resources Information Center
Green, William R.
2011-01-01
We give an alternative to the standard method of reduction of order, in which one uses one solution of a homogeneous, linear, second order differential equation to find a second, linearly independent solution. Our method, based on Abel's Theorem, is shorter, less complex, and extends to higher order equations.
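For reference, the textbook identity on which such an Abel's-theorem approach rests can be stated in one line (this is the standard result, not text quoted from the article):

```latex
% Standard reduction-of-order identity behind the Abel's-theorem approach.
% For y'' + p(x) y' + q(x) y = 0 with one known solution y_1:
\[
  W(x) = y_1 y_2' - y_1' y_2 = C\,e^{-\int p(x)\,dx}
  \qquad\Longrightarrow\qquad
  y_2(x) = y_1(x)\int \frac{W(x)}{y_1(x)^2}\,dx .
\]
```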
NASA Astrophysics Data System (ADS)
Iwaki, Sunao; Ueno, Shoogo
1998-06-01
The weighted minimum-norm estimation (wMNE) is a popular method to obtain the source distribution in the human brain from magneto- and electroencephalographic measurements when detailed information about the generator profile is not available. We propose a method to reconstruct current distributions in the human brain based on the wMNE technique, with the weighting factors defined by a simplified multiple signal classification (MUSIC) prescanning. In this method, in addition to the conventional depth normalization technique, the weighting factors of the wMNE were determined by the cost values previously calculated by a simplified MUSIC scan, which contains the temporal information of the measured data. We performed computer simulations of this method and compared it with the conventional wMNE method. The results show that the proposed method is effective for the reconstruction of current distributions from noisy data.
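A minimal numpy sketch of a weighted minimum-norm estimate of this kind is shown below: the per-source weights combine a conventional depth normalization with a factor standing in for the MUSIC-derived cost values. The lead field, data, regularization value, and the specific weighting rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def weighted_minimum_norm(L, B, weights, lam=1e-2):
    """
    Weighted minimum-norm estimate: J = W L^T (L W L^T + lam*I)^-1 B.
    L: (n_sensors, n_sources) lead field, B: (n_sensors, n_times) data,
    weights: (n_sources,) per-source weights (e.g. depth norm * MUSIC-derived factor).
    """
    W = np.diag(weights)
    G = L @ W @ L.T + lam * np.eye(L.shape[0])
    return W @ L.T @ np.linalg.solve(G, B)

rng = np.random.default_rng(0)
n_sensors, n_sources, n_times = 32, 200, 50
L = rng.standard_normal((n_sensors, n_sources))
B = rng.standard_normal((n_sensors, n_times))

depth_norm = 1.0 / np.linalg.norm(L, axis=0)     # conventional depth weighting
music_cost = rng.uniform(0.1, 1.0, n_sources)    # stand-in for the MUSIC pre-scan cost values
J = weighted_minimum_norm(L, B, depth_norm * music_cost)
print(J.shape)  # (200, 50): one time course per candidate source
```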
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Aboudi, Jacob; Yarrington, Phillip W.
2007-01-01
The simplified shear solution method is presented for approximating the through-thickness shear stress distribution within a composite laminate based on laminated beam theory. The method does not consider the solution of a particular boundary value problem, rather it requires only knowledge of the global shear loading, geometry, and material properties of the laminate or panel. It is thus analogous to lamination theory in that ply level stresses can be efficiently determined from global load resultants (as determined, for instance, by finite element analysis) at a given location in a structure and used to evaluate the margin of safety on a ply by ply basis. The simplified shear solution stress distribution is zero at free surfaces, continuous at ply boundaries, and integrates to the applied shear load. Comparisons to existing theories are made for a variety of laminates, and design examples are provided illustrating the use of the method for determining through-thickness shear stress margins in several types of composite panels and in the context of a finite element structural analysis.
Interpretation of searches for supersymmetry with simplified models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatrchyan, S.; Khachatryan, V.; Sirunyan, A. M.
The results of searches for supersymmetry by the CMS experiment are interpreted in the framework of simplified models. The results are based on data corresponding to an integrated luminosity of 4.73 to 4.98 inverse femtobarns. The data were collected at the LHC in proton-proton collisions at a center-of-mass energy of 7 TeV. This paper describes the method of interpretation and provides upper limits on the product of the production cross section and branching fraction as a function of new particle masses for a number of simplified models. These limits and the corresponding experimental acceptance calculations can be used to constrain other theoretical models and to compare different supersymmetry-inspired analyses.
Simplified Discontinuous Galerkin Methods for Systems of Conservation Laws with Convex Extension
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
1999-01-01
Simplified forms of the space-time discontinuous Galerkin (DG) and discontinuous Galerkin least-squares (DGLS) finite element method are developed and analyzed. The new formulations exploit simplifying properties of entropy endowed conservation law systems while retaining the favorable energy properties associated with symmetric variable formulations.
Simplified adsorption method for detection of antibodies to Candida albicans germ tubes.
Ponton, J; Quindos, G; Arilla, M C; Mackenzie, D W
1994-01-01
Two modifications that simplify and shorten a method for adsorption of the antibodies against the antigens expressed on both blastospore and germ tube cell wall surfaces (methods 2 and 3) were compared with the original method of adsorption (method 1) to detect anti-Candida albicans germ tube antibodies in 154 serum specimens. Adsorption of the sera by both modified methods resulted in titers very similar to those obtained by the original method. Only 5.2% of serum specimens tested by method 2 and 5.8% of serum specimens tested by method 3 showed discrepancies of more than one dilution in the titers with respect to the titer observed by method 1. When a test based on method 2 was evaluated with sera from patients with invasive candidiasis, the best discriminatory results (sensitivity, 84.6%; specificity, 87.9%; positive predictive value, 75.9%; negative predictive value, 92.7%; efficiency, 86.9%) were obtained when a titer of 1:160 or greater was considered positive. PMID:8126184
NASA Astrophysics Data System (ADS)
You, Xu; Zhi-jian, Zong; Qun, Gao
2018-07-01
This paper describes a methodology for determining the position uncertainty distribution of an articulated arm coordinate measuring machine (AACMM). First, a model of the structural parameter uncertainties was established by a statistical method. Second, the position uncertainty space volume of the AACMM in a certain configuration was expressed using a simplified definite integration method based on the structural parameter uncertainties; it was then used to evaluate the position accuracy of the AACMM in that configuration. Third, the configurations of a certain working point were calculated by an inverse solution, and the position uncertainty distribution of the working point was determined; the working point uncertainty can be evaluated by a weighting method. Lastly, the position uncertainty distribution in the workspace of the AACMM was described by a map. A single-point comparison test of a 6-joint AACMM was carried out to verify the effectiveness of the proposed method; it was shown that the method can describe the position uncertainty of the AACMM and can be used to guide the calibration of the AACMM and the choice of the AACMM's accuracy area.
76 FR 15887 - Sales-Based Royalties and Vendor Allowances; Hearing
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-22
... production method and the simplified resale method of allocating capitalized costs between ending inventory...., Washington, DC. Alternatively, taxpayers may submit electronic outlines of oral comments via the Federal e... Register on Friday, December 17, 2010 (75 FR 78940). Persons, who wish to present oral comments at the...
Simplification of the Kalman filter for meteorological data assimilation
NASA Technical Reports Server (NTRS)
Dee, Dick P.
1991-01-01
The paper proposes a new statistical method of data assimilation that is based on a simplification of the Kalman filter equations. The forecast error covariance evolution is approximated simply by advecting the mass-error covariance field, deriving the remaining covariances geostrophically, and accounting for external model-error forcing only at the end of each forecast cycle. This greatly reduces the cost of computation of the forecast error covariance. In simulations with a linear, one-dimensional shallow-water model and data generated artificially, the performance of the simplified filter is compared with that of the Kalman filter and the optimal interpolation (OI) method. The simplified filter produces analyses that are nearly optimal, and represents a significant improvement over OI.
A simplified digital lock-in amplifier for the scanning grating spectrometer.
Wang, Jingru; Wang, Zhihong; Ji, Xufei; Liu, Jie; Liu, Guangda
2017-02-01
For the common measurement and control system of a scanning grating spectrometer, the use of an analog lock-in amplifier requires complex circuitry and sophisticated debugging, whereas the use of a digital lock-in amplifier places a high demand on calculation capability and storage space. In this paper, a simplified digital lock-in amplifier based on averaging the absolute values within a complete period is presented and applied to a scanning grating spectrometer. The simplified digital lock-in amplifier was implemented on a low-cost microcontroller without multipliers, and dispenses with the reference signal and a specific configuration of the sampling frequency. Two positive zero-crossing detections were used to lock the phase of the measured signal. However, measurement errors were introduced by the following factors: frequency fluctuation, the sampling interval, and the integer restriction on the number of samples. The theoretical calculation and experimental results of the signal-to-noise ratio of the proposed measurement method were 2055 and 2403, respectively.
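The core of the averaging scheme can be illustrated in a few lines: for a sinusoid, the mean absolute value over whole periods equals 2A/π, so the amplitude follows from a rectify-and-average between positive zero crossings. The Python sketch below demonstrates the principle on a synthetic signal; it is not the microcontroller firmware, and the test frequency, sampling rate, and noise level are made up.

```python
import numpy as np

def lockin_amplitude(samples):
    """
    Estimate the amplitude of a (noisy) sinusoid by averaging absolute values
    over an integer number of periods bounded by positive zero crossings.
    For a pure sine, mean(|x|) = 2A/pi, so A = (pi/2) * mean(|x|).
    """
    s = np.asarray(samples, dtype=float)
    rising = np.flatnonzero((s[:-1] < 0) & (s[1:] >= 0))   # positive zero crossings
    if rising.size < 2:
        raise ValueError("need at least one complete period")
    segment = s[rising[0]:rising[-1]]                      # whole periods only
    return 0.5 * np.pi * np.mean(np.abs(segment))

fs, f0, amp = 5000.0, 170.0, 0.8
t = np.arange(0, 0.2, 1.0 / fs)
signal = amp * np.sin(2 * np.pi * f0 * t) + 0.05 * np.random.default_rng(2).standard_normal(t.size)
print(f"recovered amplitude: {lockin_amplitude(signal):.3f} (true {amp})")
```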
Simplified methods for computing total sediment discharge with the modified Einstein procedure
Colby, Bruce R.; Hubbell, David Wellington
1961-01-01
A procedure was presented in 1950 by H. A. Einstein for computing the total discharge of sediment particles of the sizes that are present in appreciable quantities in the stream bed. This procedure was modified by the U.S. Geological Survey and adapted to computing the total sediment discharge of a stream on the basis of samples of bed sediment, depth-integrated samples of suspended sediment, streamflow measurements, and water temperature. This paper gives simplified methods for computing total sediment discharge by the modified Einstein procedure. Each of four nomographs appreciably simplifies a major step in the computations. Within the stated limitations, use of the nomographs introduces much less error than is present in either the basic data or the theories on which the computations of total sediment discharge are based. The results are nearly as accurate mathematically as those that could be obtained from the longer and more complex arithmetic and algebraic computations of the Einstein procedure.
Hageman, Philip L.; Seal, Robert R.; Diehl, Sharon F.; Piatak, Nadine M.; Lowers, Heather
2015-01-01
A comparison study of selected static leaching and acid–base accounting (ABA) methods using a mineralogically diverse set of 12 modern-style, metal mine waste samples was undertaken to understand the relative performance of the various tests. To complement this study, in-depth mineralogical studies were conducted in order to elucidate the relationships between sample mineralogy, weathering features, and leachate and ABA characteristics. In part one of the study, splits of the samples were leached using six commonly used leaching tests including paste pH, the U.S. Geological Survey (USGS) Field Leach Test (FLT) (both 5-min and 18-h agitation), the U.S. Environmental Protection Agency (USEPA) Method 1312 SPLP (both leachate pH 4.2 and leachate pH 5.0), and the USEPA Method 1311 TCLP (leachate pH 4.9). Leachate geochemical trends were compared in order to assess differences, if any, produced by the various leaching procedures. Results showed that the FLT (5-min agitation) was just as effective as the 18-h leaching tests in revealing the leachate geochemical characteristics of the samples. Leaching results also showed that the TCLP leaching test produces inconsistent results when compared to results produced from the other leaching tests. In part two of the study, the ABA was determined on splits of the samples using both well-established traditional static testing methods and a relatively quick, simplified net acid–base accounting (NABA) procedure. Results showed that the traditional methods, while time consuming, provide the most in-depth data on both the acid generating, and acid neutralizing tendencies of the samples. However, the simplified NABA method provided a relatively fast, effective estimation of the net acid–base account of the samples. Overall, this study showed that while most of the well-established methods are useful and effective, the use of a simplified leaching test and the NABA acid–base accounting method provide investigators fast, quantitative tools that can be used to provide rapid, reliable information about the leachability of metals and other constituents of concern, and the acid-generating potential of metal mining waste.
Blood oxygen saturation determined by transmission spectrophotometry of hemolyzed blood samples
NASA Technical Reports Server (NTRS)
Malik, W. M.
1967-01-01
The Lambert-Beer transmission law is used to determine the blood oxygen saturation of hemolyzed blood samples. This simplified method is based on the difference in the optical absorption properties of hemoglobin and oxyhemoglobin.
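The underlying two-wavelength calculation can be written out briefly: with additive Lambert-Beer absorbances, the hemoglobin and oxyhemoglobin concentrations follow from a 2x2 linear system, and the saturation is their ratio. In the Python sketch below the extinction coefficients and sample values are illustrative placeholders, not data from the NASA report.

```python
import numpy as np

# Illustrative molar extinction coefficients [L/(mmol*cm)] at two wavelengths.
# These numbers are placeholders for the sketch, not values from the report.
EPS = np.array([[0.81, 1.05],    # wavelength 1: [HbO2, Hb]
                [1.10, 0.70]])   # wavelength 2: [HbO2, Hb]

def oxygen_saturation(absorbances, path_cm=1.0):
    """
    Solve the Beer-Lambert system A_i = path * (eps_i,HbO2*[HbO2] + eps_i,Hb*[Hb])
    for the two concentrations and return SO2 = [HbO2] / ([HbO2] + [Hb]).
    """
    conc = np.linalg.solve(EPS * path_cm, np.asarray(absorbances, dtype=float))
    hbo2, hb = conc
    return hbo2 / (hbo2 + hb)

# A hemolyzed sample that is 90% saturated, 0.15 mmol/L total hemoglobin:
true_conc = np.array([0.9 * 0.15, 0.1 * 0.15])
A = EPS @ true_conc
print(f"recovered SO2: {oxygen_saturation(A):.2f}")
```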
Simplified MPN method for enumeration of soil naphthalene degraders using gaseous substrate.
Wallenius, Kaisa; Lappi, Kaisa; Mikkonen, Anu; Wickström, Annika; Vaalama, Anu; Lehtinen, Taru; Suominen, Leena
2012-02-01
We describe a simplified microplate most-probable-number (MPN) procedure to quantify the bacterial naphthalene-degrader population in soil samples. In this method, the sole substrate, naphthalene, is dosed passively via the gaseous phase to the liquid medium, and the detection of growth is based on automated measurement of turbidity using an absorbance reader. The performance of the new method was evaluated by comparison with a recently introduced method in which the substrate is dissolved in inert silicone oil and added individually to each well, and the results are scored visually using a respiration indicator dye. Oil-contaminated industrial soil showed a slightly but significantly higher MPN estimate with our method than with the reference method. This suggests that gaseous naphthalene dissolved at an adequate concentration to support the growth of naphthalene degraders without being too toxic. Dosing the substrate via the gaseous phase notably reduced the workload and the risk of contamination. Result scoring by absorbance measurement was objective and more reliable than measurement with an indicator dye, and it also enabled further analysis of the cultures. Several bacterial genera were identified by cloning and sequencing of 16S rRNA genes from the MPN wells incubated in the presence of gaseous naphthalene. In addition, the applicability of the simplified MPN method was demonstrated by a significant positive correlation between the level of oil contamination and the number of naphthalene degraders detected in soil.
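The MPN calculation itself reduces to a one-parameter maximum-likelihood problem: each well of volume v turns positive with probability 1 - exp(-λv). The Python sketch below searches for the λ that maximizes the likelihood of the observed positive-well counts; the dilution volumes and well counts are invented for illustration, and the scoring details of the plate assay are not modelled.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def mpn_estimate(volumes_ml, n_wells, n_positive):
    """
    Maximum-likelihood MPN: a well of volume v is positive with
    probability 1 - exp(-lambda * v) for density lambda (cells/mL).
    """
    v = np.asarray(volumes_ml, dtype=float)
    n = np.asarray(n_wells, dtype=int)
    p = np.asarray(n_positive, dtype=int)

    def neg_log_lik(log_lam):
        lam = np.exp(log_lam)
        prob_pos = np.clip(1.0 - np.exp(-lam * v), 1e-12, 1.0)
        return -np.sum(p * np.log(prob_pos) - (n - p) * lam * v)

    res = minimize_scalar(neg_log_lik, bounds=(np.log(1e-6), np.log(1e6)), method="bounded")
    return np.exp(res.x)

# Example plate: three 10-fold dilutions, 8 wells each, inoculum volumes in mL of soil suspension.
volumes = [0.1, 0.01, 0.001]
wells = [8, 8, 8]
positive = [8, 5, 1]
print(f"MPN estimate: {mpn_estimate(volumes, wells, positive):.0f} degraders/mL")
```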
BASEFLOW SEPARATION BASED ON ANALYTICAL SOLUTIONS OF THE BOUSSINESQ EQUATION. (R824995)
A technique for baseflow separation is presented based on similarity solutions of the Boussinesq equation. The method makes use of the simplifying assumptions that a horizontal impermeable layer underlies a Dupuit aquifer which is drained by a fully penetratin...
Digital Games: Changing Education, One Raid at a Time
ERIC Educational Resources Information Center
Pivec, Paul; Pivec, Maja
2011-01-01
Digital Games are becoming a new form of interactive content and game playing provides an interactive and collaborative platform for learning purposes. Collaborative learning allows participants to produce new ideas as well as to exchange information, simplify problems, and resolve the tasks. Context based collaborative learning method is based on…
A simplified model of all-sky artificial sky glow derived from VIIRS Day/Night band data
NASA Astrophysics Data System (ADS)
Duriscoe, Dan M.; Anderson, Sharolyn J.; Luginbuhl, Christian B.; Baugh, Kimberly E.
2018-07-01
We present a simplified method using geographic analysis tools to predict the average artificial luminance over the hemisphere of the night sky, expressed as a ratio to the natural condition. The VIIRS Day/Night Band upward radiance data from the Suomi NPP orbiting satellite were used as input to the model. The method is based upon a relation between sky glow brightness and the distance from the observer to the source of upward radiance. This relationship was developed using a Garstang radiative transfer model with Day/Night Band data as input, then refined and calibrated with ground-based all-sky V-band photometric data taken under cloudless and low atmospheric aerosol conditions. An excellent correlation was found between observed sky quality and the predicted values from the remotely sensed data. Thematic maps of large regions of the earth showing predicted artificial V-band sky brightness may be quickly generated with modest computing resources. Building on previous work, we have thus found a fast and accurate method to model all-sky quality, and we discuss its limitations. The proposed model meets the requirement of decision makers and land managers for an easy-to-interpret and easy-to-understand metric of sky quality.
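The distance-based core of such a model can be sketched as a sum over upward-radiance pixels weighted by a decaying function of the distance to the observer. In the Python snippet below the power-law kernel, its constants, and the synthetic radiance grid are placeholders, not the Garstang-calibrated relation or actual VIIRS values used in the paper.

```python
import numpy as np

def sky_glow_ratio(radiance, pixel_xy, observer_xy, k=1.0, alpha=2.5, natural=1.0):
    """
    Schematic all-sky artificial luminance estimate:
    sum over pixels of (upward radiance) * k * distance^(-alpha),
    reported as a ratio to the natural sky luminance.
    k, alpha and the natural level are placeholder constants.
    """
    d = np.linalg.norm(pixel_xy - observer_xy, axis=1)   # distance in km
    d = np.maximum(d, 1.0)                               # avoid the singularity at the observer
    return float(np.sum(radiance * k * d ** (-alpha))) / natural

rng = np.random.default_rng(3)
n = 500
pixels = rng.uniform(-100.0, 100.0, (n, 2))              # pixel centers, km
radiance = rng.exponential(0.2, n)                       # synthetic upward radiance per pixel
print(f"artificial/natural sky luminance ratio: {sky_glow_ratio(radiance, pixels, np.zeros(2)):.2f}")
```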
Simplified method for calculating shear deflections of beams.
I. Orosz
1970-01-01
When one designs with wood, shear deflections can become substantial compared to deflections due to moments, because the modulus of elasticity in bending differs from that in shear by a large amount. This report presents a simplified energy method to calculate shear deflections in bending members. This simplified approach should help designers decide whether or not...
Code of Federal Regulations, 2010 CFR
2010-10-01
... Section 13.305-4 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisition Methods 13.305-4... purchase requisition, contracting officer verification statement, or other agency approved method of...
Stanislawski, Jerzy; Kotulska, Malgorzata; Unold, Olgierd
2013-01-17
Amyloids are proteins capable of forming fibrils. Many of them underlie serious diseases, such as Alzheimer's disease. The number of amyloid-associated diseases is constantly increasing. Recent studies indicate that amyloidogenic properties can be associated with short segments of amino acids, which transform the structure when exposed. A few hundred such peptides have been found experimentally. Experimental testing of all possible amino acid combinations is currently not feasible. Instead, they can be predicted by computational methods. The 3D profile is a physicochemistry-based method that has generated the largest dataset, ZipperDB. However, it is computationally very demanding. Here, we show that dataset generation can be accelerated. Two methods to increase the classification efficiency of amyloidogenic candidates are presented and tested: simplified 3D profile generation and machine learning methods. We generated a new dataset of hexapeptides using a more economical 3D profile algorithm, which showed very good classification overlap with ZipperDB (93.5%). The new part of our dataset contains 1779 segments, with 204 classified as amyloidogenic. The dataset of 6-residue sequences with their binary classification, based on the energy of the segment, was used for training machine learning methods. A separate set of sequences from ZipperDB was used as a test set. The most effective methods were the Alternating Decision Tree and the Multilayer Perceptron. Both methods obtained an area under the ROC curve of 0.96, accuracy of 91%, a true positive rate of ca. 78%, and a true negative rate of 95%. A few other machine learning methods also achieved good performance. The computational time was reduced from 18-20 CPU-hours (full 3D profile) to 0.5 CPU-hours (simplified 3D profile) to seconds (machine learning). We showed that the simplified profile generation method does not introduce an error with regard to the original method, while increasing the computational efficiency. Our new dataset proved representative enough to use simple statistical methods for testing amyloidogenicity based only on six-letter sequences. Statistical machine learning methods such as the Alternating Decision Tree and the Multilayer Perceptron can replace the energy-based classifier, with the advantage of very significantly reduced computational time and simplicity of analysis. Additionally, a decision tree provides a set of easily interpretable rules.
On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.
Yamazaki, Keisuke
2012-07-01
Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as in model selection, is still time-consuming even though there are effective algorithms based on dynamic programming. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map under which the estimated parameters have an asymptotically equivalent convergence point; such a map is referred to as a vicarious map. As a demonstration of finding vicarious maps, we consider a feature space that limits the length of data, and derive the length necessary for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.
Fast intersection detection algorithm for PC-based robot off-line programming
NASA Astrophysics Data System (ADS)
Fedrowitz, Christian H.
1994-11-01
This paper presents a method for fast and reliable collision detection in complex production cells. The algorithm is part of the PC-based robot off-line programming system of the University of Siegen (Ropsus). The method is based on a solid model which is managed by a simplified constructive solid geometry model (CSG model). The collision detection problem is divided into two steps. In the first step the complexity of the problem is reduced in linear time. In the second step the remaining solids are tested for intersection. For this, the simplex algorithm, known from linear optimization, is used: it computes a point that is common to two convex polyhedra, and the polyhedra intersect if such a point exists. For the simplified geometrical model of Ropsus this step also runs in linear time, so that in conjunction with the first step the resulting collision detection algorithm requires linear time overall. Moreover, it computes the resulting intersection polyhedron using the dual transformation.
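The narrow-phase test can be phrased as a linear-programming feasibility problem: two convex polyhedra given as half-space systems A x <= b intersect exactly when the stacked system admits a point. The Python sketch below uses scipy's LP solver as a stand-in for the simplex routine in Ropsus; the box-shaped test polyhedra are, of course, just an example.

```python
import numpy as np
from scipy.optimize import linprog

def convex_polyhedra_intersect(A1, b1, A2, b2):
    """
    Two convex polyhedra {x : A1 x <= b1} and {x : A2 x <= b2} intersect
    iff the stacked inequality system is feasible.  A feasibility LP with a
    zero objective finds a common point if one exists.
    """
    A = np.vstack([A1, A2])
    b = np.concatenate([b1, b2])
    res = linprog(c=np.zeros(A.shape[1]), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * A.shape[1], method="highs")
    return res.success

def box(lo, hi):
    """Axis-aligned box [lo, hi] as a half-space system A x <= b."""
    A = np.vstack([np.eye(3), -np.eye(3)])
    b = np.concatenate([hi, -lo])
    return A, b

A1, b1 = box(np.zeros(3), np.ones(3))
A2, b2 = box(np.array([0.5, 0.5, 0.5]), np.array([1.5, 1.5, 1.5]))   # overlaps the first box
A3, b3 = box(np.array([2.0, 2.0, 2.0]), np.array([3.0, 3.0, 3.0]))   # disjoint from the first box

print(convex_polyhedra_intersect(A1, b1, A2, b2))  # True
print(convex_polyhedra_intersect(A1, b1, A3, b3))  # False
```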
Generalized fictitious methods for fluid-structure interactions: Analysis and simulations
NASA Astrophysics Data System (ADS)
Yu, Yue; Baek, Hyoungsu; Karniadakis, George Em
2013-07-01
We present a new fictitious pressure method for fluid-structure interaction (FSI) problems in incompressible flow by generalizing the fictitious mass and damping methods we published previously in [1]. The fictitious pressure method involves modification of the fluid solver whereas the fictitious mass and damping methods modify the structure solver. We analyze all fictitious methods for simplified problems and obtain explicit expressions for the optimal reduction factor (convergence rate index) at the FSI interface [2]. This analysis also demonstrates an apparent similarity of fictitious methods to the FSI approach based on Robin boundary conditions, which have been found to be very effective in FSI problems. We implement all methods, including the semi-implicit Robin based coupling method, in the context of spectral element discretization, which is more sensitive to temporal instabilities than low-order methods. However, the methods we present here are simple and general, and hence applicable to FSI based on any other spatial discretization. In numerical tests, we verify the selection of optimal values for the fictitious parameters for simplified problems and for vortex-induced vibrations (VIV) even at zero mass ratio ("for-ever-resonance"). We also develop an empirical a posteriori analysis for complex geometries and apply it to 3D patient-specific flexible brain arteries with aneurysms for very large deformations. We demonstrate that the fictitious pressure method enhances stability and convergence, and is comparable or better in most cases to the Robin approach or the other fictitious methods.
Photogrammetric Modeling and Image-Based Rendering for Rapid Virtual Environment Creation
2004-12-01
area and different methods have been proposed. Pertinent methods include: Camera Calibration, Structure from Motion, Stereo Correspondence, and Image-Based Rendering. 1.1.1 Camera Calibration Determining the 3D structure of a model from multiple views becomes simpler if the intrinsic (or internal...can introduce significant nonlinearities into the image. We have found that camera calibration is a straightforward process which can simplify the
Automated Simplification of Full Chemical Mechanisms
NASA Technical Reports Server (NTRS)
Norris, A. T.
1997-01-01
A code has been developed to automatically simplify full chemical mechanisms. The method employed is based on the Intrinsic Low Dimensional Manifold (ILDM) method of Maas and Pope. The ILDM method is a dynamical systems approach to the simplification of large chemical kinetic mechanisms. By identifying low-dimensional attracting manifolds, the method allows complex full mechanisms to be parameterized by just a few variables, in effect generating reduced chemical mechanisms by an automatic procedure. These resulting mechanisms, however, still retain all the species used in the full mechanism. Full and skeletal mechanisms for various fuels are simplified to a two-dimensional manifold, and the resulting mechanisms are found to compare well with the full mechanisms and to show significant improvement over global one-step mechanisms, such as those of Westbrook and Dryer. In addition, by using an ILDM reaction mechanism in a CFD code, a considerable improvement in turn-around time can be achieved.
Lajolo, Carlo; Giuliani, Michele; Cordaro, Massimo; Marigo, Luca; Marcelli, Antonio; Fiorillo, Fabio; Pascali, Vincenzo L; Oliva, Antonio
2013-10-01
Chronological age (CA) plays a fundamental role in forensic dentistry (i.e. personal identification and evaluation of imputability). Even though several studies have outlined the association between biological and chronological age, there is still great variability in the estimates. The aim of this study was to determine the possible correlation between biological age and CA through the use of two new radiographic indexes (Oro-Cervical Radiographic Simplified Score - OCRSS and Oro-Cervical Radiographic Simplified Score Without Wisdom Teeth - OCRSSWWT) that are based on the oro-cervical area. Sixty Italian Caucasian individuals were divided into 3 groups according to their CA: Group 1: CAG 1 = 8-14 yr; Group 2: CAG 2 = 14-18 yr; Group 3: CAG 3 = 18-25 yr; panorexes and standardised cephalograms were evaluated according to Demirjian's Method for dental age calculation (DM), the Cervical Vertebral Maturation method for skeletal age calculation (CVMS), and Third Molar Development for age estimation (TMD). The stages of each method were simplified in order to generate OCRSS, which summarizes the simplified scores of the three methods, and OCRSSWWT, which summarizes the simplified DM and CVMS scores. There was a significant correlation between OCRSS and CAGs (Slope = 0.954, p < 0.001, R-squared = 0.79) and between OCRSSWWT and CAGs (Slope = 0.863, p < 0.001, R-squared = 0.776). Even though the indexes, especially OCRSS, appear to be highly reliable, growth variability among individuals can deeply influence the anatomical changes from childhood to adulthood. A multi-disciplinary approach that considers many different biomarkers could help make radiological age determination more reliable when it is used to predict CA. Copyright © 2013 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
NASA Astrophysics Data System (ADS)
Inochkin, F. M.; Pozzi, P.; Bezzubik, V. V.; Belashenkov, N. R.
2017-06-01
A superresolution image reconstruction method based on the structured illumination microscopy (SIM) principle with a reduced and simplified pattern set is presented. The described method needs only 2 sinusoidal patterns shifted by half a period for each spatial direction of reconstruction, instead of the minimum of 3 for previously known methods. The method is based on estimating redundant frequency components in the acquired set of modulated images, and the digital processing relies on linear operations. When applied to several spatial orientations, the image set can be further reduced to a single pattern for each spatial orientation, complemented by a single non-modulated image shared by all the orientations. By utilizing this method for the case of two spatial orientations, the total input image set is reduced to 3 images, providing up to a 2-fold improvement in data acquisition time compared to the conventional 3-pattern SIM method. Using the simplified pattern design, the field of view can be doubled with the same number of spatial light modulator raster elements, resulting in a total 4-fold increase in the space-time product. The method requires precise knowledge of the optical transfer function (OTF). The key limitation is the thickness of the object layer that scatters or emits light, which must be sufficiently small relative to the lens depth of field. Numerical simulations and experimental results are presented. The experimental results were obtained on a SIM setup with a spatial light modulator based on a 1920x1080 digital micromirror device.
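Why two patterns shifted by half a period are enough can be seen from a toy one-dimensional example: the sum of the two raw images recovers the non-modulated (widefield) image, and their difference isolates the modulated component that carries the shifted frequency content. The snippet below only illustrates this separation step; the full frequency-domain reconstruction, which needs the OTF, is not reproduced, and the object and pattern parameters are made up.

```python
import numpy as np

# Toy object and sinusoidal illumination, one spatial orientation.
rng = np.random.default_rng(4)
n = 256
x = np.arange(n)
obj = rng.random(n) + 0.5                        # stand-in for the sample image
m, period = 0.8, 16.0
carrier = np.cos(2 * np.pi * x / period)

I1 = obj * (1.0 + m * carrier)                   # sinusoidal pattern
I2 = obj * (1.0 - m * carrier)                   # same pattern shifted by half a period

widefield = 0.5 * (I1 + I2)                      # = obj (non-modulated image)
modulated = 0.5 * (I1 - I2)                      # = m * obj * carrier (shifted frequency content)

print(np.allclose(widefield, obj))               # True
print(np.allclose(modulated, m * obj * carrier)) # True
```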
A simplified model for dynamics of cell rolling and cell-surface adhesion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cimrák, Ivan, E-mail: ivan.cimrak@fri.uniza.sk
2015-03-10
We propose a three-dimensional model for the adhesion and rolling of biological cells on surfaces. We study cells moving in shear flow above a wall to which they can adhere via specific receptor-ligand bonds based on receptors from the selectin as well as the integrin family. The computational fluid dynamics are governed by the lattice-Boltzmann method. The movement and the deformation of the cells are described by the immersed boundary method. Both methods are fully coupled by implementing a two-way fluid-structure interaction. The adhesion mechanism is modelled by adhesive bonds, including stochastic rules for their creation and rupture. We explore a simplified model with a dissociation rate independent of the length of the bonds. We demonstrate that this model is able to resemble the mesoscopic properties, such as the velocity of rolling cells.
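The bond rules can be sketched as a per-time-step Monte Carlo: existing bonds rupture with probability 1 - exp(-k_off Δt), with k_off independent of bond length as in the simplified model above, and free receptors inside a contact zone form bonds with probability 1 - exp(-k_on Δt). All rate constants and the contact-zone fraction below are illustrative, and the coupling to the lattice-Boltzmann/immersed-boundary flow solver is not modelled.

```python
import numpy as np

rng = np.random.default_rng(5)

K_ON, K_OFF, DT = 5.0, 1.0, 0.01     # illustrative rate constants [1/s] and time step [s]
N_RECEPTORS, STEPS = 200, 2000
CONTACT_FRACTION = 0.3               # fraction of receptors close enough to the wall to bind

bound = np.zeros(N_RECEPTORS, dtype=bool)
history = []
for _ in range(STEPS):
    # Rupture: length-independent off-rate, as in the simplified model above.
    rupture = rng.random(N_RECEPTORS) < 1.0 - np.exp(-K_OFF * DT)
    bound &= ~rupture
    # Formation: only free receptors inside the contact zone can form new bonds.
    in_contact = rng.random(N_RECEPTORS) < CONTACT_FRACTION
    form = in_contact & ~bound & (rng.random(N_RECEPTORS) < 1.0 - np.exp(-K_ON * DT))
    bound |= form
    history.append(bound.sum())

print(f"mean number of closed bonds at steady state: {np.mean(history[500:]):.1f}")
```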
A Simplified Mesh Deformation Method Using Commercial Structural Analysis Software
NASA Technical Reports Server (NTRS)
Hsu, Su-Yuen; Chang, Chau-Lyan; Samareh, Jamshid
2004-01-01
Mesh deformation in response to redefined or moving aerodynamic surface geometries is a frequently encountered task in many applications. Most existing methods are either mathematically too complex or computationally too expensive for use in practical design and optimization. We propose a simplified mesh deformation method based on linear elastic finite element analyses that can be easily implemented using commercially available structural analysis software. Using a prescribed displacement at the mesh boundaries, a simple structural analysis is constructed based on a spatially varying Young's modulus to move the entire mesh in accordance with the surface geometry redefinitions. A variety of surface movements, such as translation, rotation, or the incremental surface reshaping that often takes place in an optimization procedure, may be handled by the present method. We describe the numerical formulation and implementation using the NASTRAN software in this paper. The use of commercial software bypasses tedious reimplementation and takes advantage of the computational efficiency offered by the vendor. A two-dimensional airfoil mesh and a three-dimensional aircraft mesh were used as test cases to demonstrate the effectiveness of the proposed method. Euler and Navier-Stokes calculations were performed for the deformed two-dimensional meshes.
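A one-dimensional analogue shows the essence of the approach: treat the mesh as a chain of linear springs, make the springs stiffer near the moving boundary (the role played by the spatially varying Young's modulus), prescribe the boundary displacement, and solve the resulting linear system for the interior nodes. The stiffness law and mesh below are illustrative assumptions, not the NASTRAN setup of the paper.

```python
import numpy as np

def deform_mesh_1d(nodes, u_left, u_right, stiffness):
    """
    Move a 1D node chain given prescribed end displacements by solving the
    spring (linear-elastic) system for the interior nodes.
    stiffness[i] is the spring constant of the element between nodes i and i+1.
    """
    n = len(nodes)
    K = np.zeros((n, n))
    for i, k in enumerate(stiffness):          # assemble standard 2-node spring elements
        K[i, i] += k; K[i + 1, i + 1] += k
        K[i, i + 1] -= k; K[i + 1, i] -= k
    u = np.zeros(n)
    u[0], u[-1] = u_left, u_right
    free = np.arange(1, n - 1)
    rhs = -K[np.ix_(free, [0, n - 1])] @ np.array([u_left, u_right])
    u[free] = np.linalg.solve(K[np.ix_(free, free)], rhs)
    return nodes + u

nodes = np.linspace(0.0, 1.0, 11)
# Stiffer elements near the moving (left) boundary so cells there deform less.
elem_centers = 0.5 * (nodes[:-1] + nodes[1:])
stiffness = 1.0 / (elem_centers + 0.05)        # illustrative "Young's modulus" variation
print(deform_mesh_1d(nodes, u_left=0.1, u_right=0.0, stiffness=stiffness))
```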
Earplug rankings based on the protector-attenuation rating (P-AR).
DOT National Transportation Integrated Search
1975-01-01
Forty-five attenuation spectra for earplugs were classified according to a simplified method designed to produce single-number ratings of noise reduction. The rating procedure was applied to the mean attenuation scores, to mean-minus-one-standard-dev...
Thin film solar cell configuration and fabrication method
Menezes, Shalini
2009-07-14
A new photovoltaic device configuration based on an n-copper indium selenide absorber and a p-type window is disclosed. A fabrication method to produce this device on flexible or rigid substrates is described that reduces the number of cell components, avoids hazardous materials, simplifies the process steps and hence the costs for high volume solar cell manufacturing.
Zhou, Xiao; Yang, Gongliu; Wang, Jing; Wen, Zeyang
2018-05-14
In recent decades, gravity compensation has become an important way to reduce the position error of an inertial navigation system (INS), especially for a high-precision INS, because of the extensive application of high-precision inertial sensors (accelerometers and gyros). This paper first deduces the INS solution error considering the gravity disturbance and simulates the results. It then proposes a combined gravity compensation method using a simplified gravity model and a gravity database. This combined method consists of two steps. Step 1 subtracts the normal gravity using a simplified gravity model. Step 2 first obtains the gravity disturbance on the trajectory of the carrier with the help of ELM training based on measured gravity data (provided by the Institute of Geodesy and Geophysics, Chinese Academy of Sciences), and then compensates it in the error equations of the INS, considering the gravity disturbance, to further improve the navigation accuracy. The effectiveness and feasibility of this new gravity compensation method for the INS are verified through vehicle tests in two different regions; one is in flat terrain with mild gravity variation and the other is in complex terrain with strong gravity variation. During the 2 h vehicle tests, the positioning accuracy of the two tests improved by 20% and 38%, respectively, after the gravity was compensated by the proposed method.
Zhou, Xiao; Yang, Gongliu; Wang, Jing; Wen, Zeyang
2018-01-01
In recent decades, gravity compensation has become an important way to reduce the position error of an inertial navigation system (INS), especially for a high-precision INS, because of the extensive application of high-precision inertial sensors (accelerometers and gyros). This paper first deduces the INS solution error considering the gravity disturbance and simulates the results. It then proposes a combined gravity compensation method using a simplified gravity model and a gravity database. This combined method consists of two steps. Step 1 subtracts the normal gravity using a simplified gravity model. Step 2 first obtains the gravity disturbance on the trajectory of the carrier with the help of ELM training based on measured gravity data (provided by the Institute of Geodesy and Geophysics, Chinese Academy of Sciences), and then compensates it in the error equations of the INS, considering the gravity disturbance, to further improve the navigation accuracy. The effectiveness and feasibility of this new gravity compensation method for the INS are verified through vehicle tests in two different regions; one is in flat terrain with mild gravity variation and the other is in complex terrain with strong gravity variation. During the 2 h vehicle tests, the positioning accuracy of the two tests improved by 20% and 38%, respectively, after the gravity was compensated by the proposed method. PMID:29757983
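Step 1 of the scheme, removing a normal-gravity value from a simplified model, can be illustrated with the closed-form WGS84 (Somigliana) normal-gravity formula plus a free-air height correction. The abstract does not say which simplified model the authors actually use, so treat the formula choice and the example measurement below as illustrative.

```python
import math

# WGS84 Somigliana normal gravity on the ellipsoid, with a simple free-air
# height correction, used here only to illustrate "Step 1" of the method.
GAMMA_E = 9.7803253359            # equatorial normal gravity [m/s^2]
K_SOMIGLIANA = 1.93185265241e-3   # Somigliana constant
E2 = 6.69437999014e-3             # first eccentricity squared of the WGS84 ellipsoid

def normal_gravity(lat_deg, height_m=0.0):
    s2 = math.sin(math.radians(lat_deg)) ** 2
    gamma0 = GAMMA_E * (1.0 + K_SOMIGLIANA * s2) / math.sqrt(1.0 - E2 * s2)
    return gamma0 - 3.086e-6 * height_m      # free-air gradient approximation

measured_g = 9.805601                        # hypothetical accelerometer-derived gravity at 45 deg N, 200 m
disturbance = measured_g - normal_gravity(45.0, 200.0)
print(f"gravity disturbance: {disturbance * 1e5:.1f} mGal")
```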
NASA Technical Reports Server (NTRS)
Goussis, D. A.; Lam, S. H.; Gnoffo, P. A.
1990-01-01
The Computational Singular Perturbation (CSP) method is employed (1) in the modeling of a homogeneous isothermal reacting system and (2) in the numerical simulation of the chemical reactions in a hypersonic flowfield. Reduced and simplified mechanisms are constructed. The solutions obtained on the basis of these approximate mechanisms are shown to be in very good agreement with the exact solution based on the full mechanism. Physically meaningful approximations are derived. It is demonstrated that the deduction of these approximations from CSP is independent of the complexity of the problem and requires no intuition or experience in chemical kinetics.
Multiscale methods for gore curvature calculations from FSI modeling of spacecraft parachutes
NASA Astrophysics Data System (ADS)
Takizawa, Kenji; Tezduyar, Tayfun E.; Kolesar, Ryan; Boswell, Cody; Kanai, Taro; Montel, Kenneth
2014-12-01
There are now some sophisticated and powerful methods for computer modeling of parachutes. These methods are capable of addressing some of the most formidable computational challenges encountered in parachute modeling, including fluid-structure interaction (FSI) between the parachute and air flow, design complexities such as those seen in spacecraft parachutes, and operational complexities such as use in clusters and disreefing. One should be able to extract from a reliable full-scale parachute modeling any data or analysis needed. In some cases, however, the parachute engineers may want to perform quickly an extended or repetitive analysis with methods based on simplified models. Some of the data needed by a simplified model can very effectively be extracted from a full-scale computer modeling that serves as a pilot. A good example of such data is the circumferential curvature of a parachute gore, where a gore is the slice of the parachute canopy between two radial reinforcement cables running from the parachute vent to the skirt. We present the multiscale methods we devised for gore curvature calculation from FSI modeling of spacecraft parachutes. The methods include those based on the multiscale sequentially-coupled FSI technique and using NURBS meshes. We show how the methods work for the fully-open and two reefed stages of the Orion spacecraft main and drogue parachutes.
NASA Astrophysics Data System (ADS)
Shiota, Koki; Kai, Kazuho; Nagaoka, Shiro; Tsuji, Takuto; Wakahara, Akihiro; Rusop, Mohamad
2016-07-01
An educational method that includes designing, making, and evaluating actual semiconductor devices while learning the theory is one of the best ways to obtain a fundamental understanding of device physics and to cultivate the ability to generate original ideas from that knowledge. In this paper, a simplified boron thermal diffusion process using a sol-gel material under a normal air environment is proposed based on a simple hypothesis, and its reproducibility and reliability are investigated in order to simplify the diffusion process for making educational devices such as p-n junctions, bipolar transistors and pMOS devices. As a result, the method successfully formed a p+ region on the surface of n-type silicon substrates with good reproducibility, and good rectification properties of the resulting p-n junctions were obtained. This indicates that the process may also be applied to making pMOS or bipolar transistors, and suggests a variety of possible applications in the educational field to foster the imagination of new devices.
Robust and accurate vectorization of line drawings.
Hilaire, Xavier; Tombre, Karl
2006-06-01
This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust with a best bound of 50 percent noise reached for indefinitely long primitives. Accurate estimation of the recognized vector's parameters is enabled by explicitly computing their feasibility domains. Theoretical performance analysis and expression of the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.
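The pipeline described in this abstract (skeletonize the binary drawing, then segment the skeleton by random sampling) can be illustrated with a short, hypothetical sketch. It is not the authors' implementation: the thickness-layer separation and feasibility-domain estimation are omitted, and scikit-image's skeletonization plus a RANSAC line fit are used as stand-ins.

```python
import numpy as np
from skimage.morphology import skeletonize
from skimage.measure import ransac, LineModelND

def vectorize_lines(binary_img, n_lines=5, residual_threshold=1.5):
    """Very rough line extraction: skeletonize, then peel off RANSAC line fits."""
    skel = skeletonize(binary_img > 0)
    points = np.column_stack(np.nonzero(skel)).astype(float)   # (row, col) pixel coords
    segments = []
    for _ in range(n_lines):
        if len(points) < 2:
            break
        model, inliers = ransac(points, LineModelND, min_samples=2,
                                residual_threshold=residual_threshold,
                                max_trials=500)
        segments.append((model.params, points[inliers]))  # (origin, direction) + support
        points = points[~inliers]                          # remove explained skeleton pixels
    return segments
```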
3DHZETRN: Inhomogeneous Geometry Issues
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.
2017-01-01
Historical methods for assessing radiation exposure inside complicated geometries for space applications were limited by computational constraints and lack of knowledge associated with nuclear processes occurring over a broad range of particles and energies. Various methods were developed and utilized to simplify geometric representations and enable coupling with simplified but efficient particle transport codes. Recent transport code development efforts, leading to 3DHZETRN, now enable such approximate methods to be carefully assessed to determine if past exposure analyses and validation efforts based on those approximate methods need to be revisited. In this work, historical methods of representing inhomogeneous spacecraft geometry for radiation protection analysis are first reviewed. Two inhomogeneous geometry cases, previously studied with 3DHZETRN and Monte Carlo codes, are considered with various levels of geometric approximation. Fluence, dose, and dose equivalent values are computed in all cases and compared. It is found that although these historical geometry approximations can induce large errors in neutron fluences up to 100 MeV, errors on dose and dose equivalent are modest (<10%) for the cases studied here.
Coach simplified structure modeling and optimization study based on the PBM method
NASA Astrophysics Data System (ADS)
Zhang, Miaoli; Ren, Jindong; Yin, Ying; Du, Jian
2016-09-01
For the coach industry, rapid modeling and efficient optimization methods are desirable for structure modeling and optimization based on simplified structures, especially for use early in the concept phase, with the capability of accurately expressing the mechanical properties of the structure and with flexible section forms. However, the present dimension-based methods cannot easily meet these requirements. To achieve these goals, the property-based modeling (PBM) beam modeling method is studied based on the PBM theory and in conjunction with the characteristic of coach structures that beams are the main components. For a beam component of given length, its mechanical characteristics are primarily affected by the section properties. Four section parameters are adopted to describe the mechanical properties of a beam, including the section area, the principal moments of inertia about the two principal axes, and the torsion constant of the section. Based on the equivalent stiffness strategy, expressions for the above section parameters are derived, and the PBM beam element is implemented in HyperMesh software. A case is realized using this method, in which the structure of a passenger coach is simplified. The model precision is validated by comparing the basic performance of the total structure with that of the original structure, including the bending and torsion stiffness and the first-order bending and torsional modal frequencies. Sensitivity analysis is conducted to choose design variables. The optimal Latin hypercube experiment design is adopted to sample the test points, and polynomial response surfaces are used to fit these points. To improve the bending and torsion stiffness and the first-order torsional frequency, and taking the allowable maximum stresses of the braking and left-turning conditions as constraints, the multi-objective optimization of the structure is conducted using the NSGA-II genetic algorithm on the ISIGHT platform. The Pareto solution set is acquired, and the selection strategy for the final solution is discussed. The case study demonstrates that the mechanical performance of the structure can be well modeled and simulated by PBM beam elements. Because of the merits of fewer parameters and convenience of use, this method is suitable for application in the concept stage. Another merit is that the optimization results are requirements on the mechanical performance of the beam section instead of on its shape and dimensions, bringing flexibility to the subsequent design.
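As a hedged illustration of why the four section parameters (area A, the two principal moments of inertia, and the torsion constant J) suffice to characterize a prismatic beam member, the sketch below assembles standard textbook Euler-Bernoulli and Saint-Venant stiffness terms from those parameters alone. It is generic beam theory with illustrative numbers, not the PBM element or HyperMesh implementation from the paper.

```python
import numpy as np

def axial_torsion_stiffness(E, G, A, J, L):
    """2x2 axial (EA/L) and torsional (GJ/L) stiffness blocks for a prismatic member."""
    k_ax = E * A / L * np.array([[1.0, -1.0], [-1.0, 1.0]])
    k_tor = G * J / L * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return k_ax, k_tor

def bending_stiffness(E, I, L):
    """4x4 Euler-Bernoulli bending block (DOFs: v1, theta1, v2, theta2) about one principal axis."""
    c = E * I / L**3
    return c * np.array([[ 12.0,    6*L, -12.0,    6*L],
                         [  6*L, 4*L*L,   -6*L, 2*L*L],
                         [-12.0,   -6*L,  12.0,   -6*L],
                         [  6*L, 2*L*L,   -6*L, 4*L*L]])

# Example: a 1 m steel member characterised only by A, Iy, Iz, J (illustrative values).
E, G = 210e9, 81e9
A, Iy, Iz, J = 6.0e-4, 8.0e-7, 3.0e-7, 5.0e-7
k_ax, k_tor = axial_torsion_stiffness(E, G, A, J, 1.0)
k_bend_y = bending_stiffness(E, Iy, 1.0)   # bending about the first principal axis
k_bend_z = bending_stiffness(E, Iz, 1.0)   # bending about the second principal axis
```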
Morishita, Y
2001-05-01
Issues concerning the useful application of so-called simplified analytical systems are discussed from the perspective of a laboratory technician. 1. The data from simplified analytical systems should agree with those of established reference methods, so that discrepancies do not arise between data from different laboratories. 2. The accuracy of results measured with simplified analytical systems is difficult to scrutinize thoroughly and correctly with quality control surveillance procedures based on stored pooled serum or partly processed blood. 3. It is necessary to provide guidelines on the contents of the evaluation needed to guarantee the quality of simplified analytical systems. 4. Maintenance and manual operation of simplified analytical systems should be standardized by laboratory technicians and vendor technicians. 5. Attention is also drawn to the fact that the cost of simplified analytical systems is considerably higher than that of routine methods using liquid reagents. 6. It is hoped that various substances in human serum, such as cytokines, hormones, tumor markers and vitamins, will also become measurable with simplified analytical systems.
Notification: Methods for Procuring Supplies and Services Under Simplified Acquisition Procedures
Project #OA-FY15-0193, June 18, 2015. The EPA OIG plans to begin the preliminary research phase of auditing the methods used in procuring supplies and services under simplified acquisition procedures.
Charting a path forward: policy analysis of China's evolved DRG-based hospital payment system
Liu, Rui; Shi, Jianwei; Yang, Beilei; Jin, Chunlin; Sun, Pengfei; Wu, Lingfang; Yu, Dehua; Xiong, Linping; Wang, Zhaoxin
2017-01-01
Abstract Background At present, the diagnosis-related groups-based prospective payment system (DRG-PPS) that has been implemented in China is merely a prototype called the simplified DRG-PPS, which is known as the ‘ceiling price for a single disease’. Given that studies on the effects of a simplified DRG-PPS in China have usually been controversial, we aim to synthesize evidence examining whether DRGs can reduce medical costs and length of stay (LOS) in China. Methods Data were searched from both Chinese [Wan Fang and China National Knowledge Infrastructure Database (CNKI)] and international databases (Web of Science and PubMed), as well as the official websites of Chinese health departments in the 2004–2016 period. Only studies with a design that included both experimental (with DRG-PPS implementation) and control groups (without DRG-PPS implementation) were included in the review. Results The studies were based on inpatient samples from public hospitals distributed in 12 provinces of mainland China. Among them, 80.95% (17/21) revealed that hospitalization costs could be reduced significantly, and 50.00% (8/16) indicated that length of stay could be decreased significantly. In addition, the government reports showed the enormous differences in pricing standards and LOS in various provinces, even for the same disease. Conclusions We conclude that the simplified DRGs are useful in controlling hospitalization costs, but they fail to reduce LOS. Much work remains to be done in China to improve the simplified DRG-PPS. PMID:28911128
Approximate relations and charts for low-speed stability derivatives of swept wings
NASA Technical Reports Server (NTRS)
Toll, Thomas A; Queijo, M J
1948-01-01
Contains derivations, based on a simplified theory, of approximate relations for the low-speed stability derivatives of swept wings. The method accounts for the effects of sweep and, in most cases, of taper ratio. Charts, based on the derived relations, are presented for the stability derivatives of untapered swept wings. Calculated values of the derivatives are compared with experimental results.
Reduction method with system analysis for multiobjective optimization-based design
NASA Technical Reports Server (NTRS)
Azarm, S.; Sobieszczanski-Sobieski, J.
1993-01-01
An approach for reducing the number of variables and constraints, which is combined with System Analysis Equations (SAE), for multiobjective optimization-based design is presented. In order to develop a simplified analysis model, the SAE is computed outside an optimization loop and then approximated for use by an operator. Two examples are presented to demonstrate the approach.
Preliminary orbit determination for lunar satellites.
NASA Technical Reports Server (NTRS)
Lancaster, E. R.
1973-01-01
Methods for the determination of orbits of artificial lunar satellites from earth-based range rate measurements developed by Koskela (1964) and Bateman et al. (1966) are simplified and extended to include range measurements along with range rate measurements. For illustration, a numerical example is presented.
Tanaka, Hiroaki; Inaka, Koji; Sugiyama, Shigeru; Takahashi, Sachiko; Sano, Satoshi; Sato, Masaru; Yoshitomi, Susumu
2004-01-01
A new protein crystallization method has been developed using a simplified counter-diffusion technique for optimizing crystallization conditions. It is composed only of a single capillary, gel in a silicone tube, and a screw-top test tube, all of which are readily available in the laboratory. A single capillary can continuously scan a wide range of crystallization conditions (combinations of precipitant and protein concentrations) unless crystallization occurs, which means that it corresponds to many drops in the vapor-diffusion method. The amounts of precipitant and protein solution can be much smaller than in conventional methods. In this study, lysozyme and alpha-amylase were used as model proteins to demonstrate the efficiency of the method. In addition, one-dimensional (1-D) simulations of the crystal growth were performed based on a 1-D diffusion model. The optimized conditions can be applied as initial crystallization conditions both for other counter-diffusion methods with the Granada Crystallization Box (GCB) and, after some modification, for the vapor-diffusion method.
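The 1-D simulation mentioned above can be illustrated with a minimal explicit finite-difference sketch of precipitant diffusing along a capillary from the gel end. The diffusion coefficient, geometry and boundary values are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def diffuse_1d(c0=1.0, D=1.0e-9, length=0.03, nx=300, t_end=3600.0):
    """Explicit FTCS solution of dc/dt = D d2c/dx2 in a capillary.
    x = 0 is the gel end held at precipitant concentration c0; the far end is closed."""
    dx = length / (nx - 1)
    dt = 0.4 * dx**2 / D                   # stable: dt <= dx^2 / (2 D)
    c = np.zeros(nx)
    c[0] = c0
    for _ in range(int(t_end / dt)):
        c[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
        c[0] = c0                          # fixed concentration at the gel interface
        c[-1] = c[-2]                      # zero-flux (closed) far end
    return c                               # concentration profile along the capillary
```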
Incompressible Navier-Stokes Computations with Heat Transfer
NASA Technical Reports Server (NTRS)
Kiris, Cetin; Kwak, Dochan; Rogers, Stuart; Kutler, Paul (Technical Monitor)
1994-01-01
The existing pseudocompressibility method for the system of incompressible Navier-Stokes equations is extended to heat transfer problems by including the energy equation. The solution method is based on the pseudocompressibility approach and uses an implicit-upwind differencing scheme together with the Gauss-Seidel line relaxation method. The current computations use the one-equation Baldwin-Barth turbulence model, which is derived from a simplified form of the standard k-epsilon model equations. Both forced and natural convection problems are examined. Numerical results for turbulent reattaching flow behind a backward-facing step will be compared against experimental measurements for the forced convection case. The validity of the Boussinesq approximation to simplify the buoyancy force term will be investigated. The natural convective flow structure generated by heat transfer in a vertical rectangular cavity will be studied. The numerical results will be compared with experimental measurements by Morrison and Tran.
Multigrid methods for numerical simulation of laminar diffusion flames
NASA Technical Reports Server (NTRS)
Liu, C.; Liu, Z.; Mccormick, S.
1993-01-01
This paper documents the results of a computational study of multigrid methods for the numerical simulation of 2D diffusion flames. The focus is on a simplified combustion model, which is assumed to be a single-step, infinitely fast and irreversible chemical reaction with five species (C3H8, O2, N2, CO2 and H2O). A fully-implicit second-order hybrid scheme is developed on a staggered grid, which is stretched in the streamwise coordinate direction. A full approximation multigrid scheme (FAS) based on line distributive relaxation is developed as a fast solver for the algebraic equations arising at each time step. Convergence of the process for the simplified model problem is more than two orders of magnitude faster than other iterative methods, and the computational results show good grid convergence, with second-order accuracy, as well as qualitative agreement with the results of other researchers.
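FAS applies smoothing and coarse-grid correction directly to the nonlinear equations; purely as a hedged illustration of the underlying two-level structure, the sketch below shows a minimal linear two-grid correction cycle for a 1-D Poisson problem (weighted-Jacobi smoothing, full-weighting restriction, linear interpolation, exact coarse solve). It is not the paper's staggered-grid FAS flame solver.

```python
import numpy as np

def lap(n, h):
    """1-D Laplacian (-u'') on n interior points with spacing h."""
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def jacobi(u, f, h, sweeps=3, w=2.0/3.0):
    """Weighted-Jacobi smoothing for -u'' = f with zero Dirichlet boundaries."""
    for _ in range(sweeps):
        up = np.concatenate(([0.0], u, [0.0]))            # pad with boundary zeros
        u = (1 - w) * u + w * 0.5 * (up[:-2] + up[2:] + h**2 * f)
    return u

def two_grid(u, f, h):
    n = len(u)                         # n odd, so every other fine point is a coarse point
    u = jacobi(u, f, h)                # pre-smoothing
    r = f - lap(n, h) @ u              # fine-grid residual
    nc = (n - 1) // 2
    rc = 0.25 * r[0:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]   # full-weighting restriction
    ec = np.linalg.solve(lap(nc, 2 * h), rc)                    # exact coarse-grid solve
    e = np.zeros(n)
    e[1::2] = ec                                                # coincident fine points
    ecp = np.concatenate(([0.0], ec, [0.0]))
    e[0::2] = 0.5 * (ecp[:-1] + ecp[1:])                        # linear interpolation
    return jacobi(u + e, f, h)         # coarse-grid correction + post-smoothing

# One cycle on -u'' = 1, u(0) = u(1) = 0, with 63 interior points:
n = 63
h = 1.0 / (n + 1)
u = two_grid(np.zeros(n), np.ones(n), h)
```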
NASA Astrophysics Data System (ADS)
Buchholz, Max; Grossmann, Frank; Ceotto, Michele
2018-03-01
We present and test an approximate method for the semiclassical calculation of vibrational spectra. The approach is based on the mixed time-averaging semiclassical initial value representation method, which is simplified to a form that contains a filter to remove contributions from approximately harmonic environmental degrees of freedom. This filter comes at no additional numerical cost, and it has no negative effect on the accuracy of peaks from the anharmonic system of interest. The method is successfully tested for a model Hamiltonian and then applied to the study of the frequency shift of iodine in a krypton matrix. Using a hierarchic model with up to 108 normal modes included in the calculation, we show how the dynamical interaction between iodine and krypton yields results for the lowest excited iodine peaks that reproduce experimental findings to a high degree of accuracy.
NASA Astrophysics Data System (ADS)
Yuan, Shifei; Jiang, Lei; Yin, Chengliang; Wu, Hongjie; Zhang, Xi
2017-06-01
The electrochemistry-based battery model can provide physically meaningful knowledge about the lithium-ion battery system, but at the cost of an extensive computational burden. To motivate the development of reduced-order battery models, three major contributions are made throughout this paper: (1) the transfer-function type of simplified electrochemical model is proposed to address the current-voltage relationship with the Padé approximation method and modified boundary conditions for the electrolyte diffusion equations. The model performance has been verified under pulse charge/discharge and dynamic stress test (DST) profiles, with a standard deviation of less than 0.021 V and a runtime 50 times faster. (2) the parametric relationship between the equivalent circuit model and the simplified electrochemical model is established, which enhances the comprehension of the two models with more in-depth physical significance and provides new methods for electrochemical model parameter estimation. (3) four simplified electrochemical model parameters: the equivalent resistance Req, the effective diffusion coefficient in the electrolyte phase Deeff, the electrolyte phase volume fraction ε and the open circuit voltage (OCV), are identified by the recursive least squares (RLS) algorithm with the modified DST profiles at 45, 25 and 0 °C. The simulation results indicate that the proposed model coupled with the RLS algorithm can achieve high accuracy for electrochemical parameter identification in dynamic scenarios.
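A generic recursive-least-squares update with a forgetting factor, of the kind the abstract refers to, can be sketched as follows. The regressor construction for the battery model is not reproduced, so the class name, parameter vector and data stream are placeholders.

```python
import numpy as np

class RLS:
    """Recursive least squares with forgetting factor lam for y = phi . theta + noise."""
    def __init__(self, n_params, lam=0.99, p0=1e3):
        self.theta = np.zeros(n_params)
        self.P = p0 * np.eye(n_params)
        self.lam = lam

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)              # gain vector
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta

# Hypothetical usage: stream (regressor, measured voltage) pairs from a DST profile
# and track slowly varying model parameters online.
```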
NASA Technical Reports Server (NTRS)
Blucker, T. J.; Stimmel, G. L.
1971-01-01
A simplified method is described for determining the position of the lunar roving vehicle on the lunar surface during Apollo 15. The method is based upon sun compass azimuth measurements of three lunar landmarks. The difference between the landmark azimuth and the sun azimuth is measured and the resulting data are voice relayed to the Mission Control Center for processing.
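The geometric core of such a scheme, fixing a position from azimuths to known landmarks, can be illustrated with a small least-squares sketch. The landmark coordinates, the locally flat-surface assumption and the handling of the sun-azimuth differencing are illustrative simplifications, not the Apollo 15 procedure.

```python
import numpy as np

def position_from_azimuths(landmarks, azimuths):
    """Least-squares position fix from azimuths (radians, clockwise from north)
    to known landmarks on a locally flat surface.

    Each bearing gives the linear condition
        cos(az)*x - sin(az)*y = cos(az)*xl - sin(az)*yl,
    obtained by requiring (landmark - position) to be parallel to (sin az, cos az)."""
    landmarks = np.asarray(landmarks, dtype=float)      # shape (n, 2): east, north
    az = np.asarray(azimuths, dtype=float)
    A = np.column_stack((np.cos(az), -np.sin(az)))
    b = np.cos(az) * landmarks[:, 0] - np.sin(az) * landmarks[:, 1]
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos                                          # (east, north) estimate

# Illustrative check: observer at (100, 200) m, three hypothetical landmarks.
lm = np.array([[500.0, 900.0], [-300.0, 600.0], [800.0, -100.0]])
true = np.array([100.0, 200.0])
az = np.arctan2(lm[:, 0] - true[0], lm[:, 1] - true[1])   # azimuth from north
print(position_from_azimuths(lm, az))                     # ≈ [100. 200.]
```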
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false [Reserved] 13.304 Section 13.304 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisition Methods 13.304 [Reserved] ...
A Simplified Approach to Risk Assessment Based on System Dynamics: An Industrial Case Study.
Garbolino, Emmanuel; Chery, Jean-Pierre; Guarnieri, Franck
2016-01-01
Seveso plants are complex sociotechnical systems, which makes it appropriate to support any risk assessment with a model of the system. However, more often than not, this step is only partially addressed, simplified, or avoided in safety reports. At the same time, investigations have shown that the complexity of industrial systems is frequently a factor in accidents, due to interactions between their technical, human, and organizational dimensions. In order to handle both this complexity and changes in the system over time, this article proposes an original and simplified qualitative risk evaluation method based on the system dynamics theory developed by Forrester in the early 1960s. The methodology supports the development of a dynamic risk assessment framework dedicated to industrial activities. It consists of 10 complementary steps grouped into two main activities: system dynamics modeling of the sociotechnical system and risk analysis. This system dynamics risk analysis is applied to a case study of a chemical plant and provides a way to assess the technological and organizational components of safety. © 2016 Society for Risk Analysis.
Template-Based Geometric Simulation of Flexible Frameworks
Wells, Stephen A.; Sartbaeva, Asel
2012-01-01
Specialised modelling and simulation methods implementing simplified physical models are valuable generators of insight. Template-based geometric simulation is a specialised method for modelling flexible framework structures made up of rigid units. We review the background, development and implementation of the method, and its applications to the study of framework materials such as zeolites and perovskites. The “flexibility window” property of zeolite frameworks is a particularly significant discovery made using geometric simulation. Software implementing geometric simulation of framework materials, “GASP”, is freely available to researchers. PMID:28817055
26 CFR 1.199-4 - Costs allocable to domestic production gross receipts.
Code of Federal Regulations, 2010 CFR
2010-04-01
... using the simplified deduction method. Paragraph (f) of this section provides a small business... taxpayer for internal management or other business purposes; whether the method is used for other Federal... than a taxpayer that uses the small business simplified overall method of paragraph (f) of this section...
Amperometric Carbon Fiber Nitrite Microsensor for In Situ Biofilm Monitoring
A highly selective needle type solid state amperometric nitrite microsensor based on direct nitrite oxidation on carbon fiber was developed using a simplified fabrication method. The microsensor’s tip diameter was approximately 7 µm, providing a high spatial resolution of at lea...
Simplified process model discovery based on role-oriented genetic mining.
Zhao, Weidong; Liu, Xi; Dai, Weihui
2014-01-01
Process mining is the automated acquisition of process models from event logs. Although many process mining techniques have been developed, most of them are based on control flow. Meanwhile, the existing role-oriented process mining methods focus on the correctness and integrity of roles while ignoring the role complexity of the process model, which directly impacts the understandability and quality of the model. To address these problems, we propose a genetic programming approach to mine simplified process models. Using a new metric of process complexity in terms of roles as the fitness function, we can find simpler process models. The new role complexity metric of process models is designed from role cohesion and coupling, and applied to discover roles in process models. Moreover, the higher fitness derived from the role complexity metric also provides a guideline for redesigning process models. Finally, we conduct a case study and experiments to show that the proposed method is more effective for streamlining the process, in comparison with related studies.
The Pixon Method for Data Compression Image Classification, and Image Reconstruction
NASA Technical Reports Server (NTRS)
Puetter, Richard; Yahil, Amos
2002-01-01
As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientist in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review the scope of the program was greatly reduced and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.
48 CFR 13.302 - Purchase orders.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Purchase orders. 13.302 Section 13.302 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisition Methods 13.302 Purchase...
NASA Astrophysics Data System (ADS)
Ossés de Eicker, Margarita; Zah, Rainer; Triviño, Rubén; Hurni, Hans
The spatial accuracy of top-down traffic emission inventory maps obtained with a simplified disaggregation method based on street density was assessed in seven mid-sized Chilean cities. Each top-down emission inventory map was compared against a reference, namely a more accurate bottom-up emission inventory map from the same study area. The comparison was carried out using a combination of numerical indicators and visual interpretation. Statistically significant differences were found between the seven cities with regard to the spatial accuracy of their top-down emission inventory maps. In compact cities with a simple street network and a single center, a good accuracy of the spatial distribution of emissions was achieved with correlation values > 0.8 with respect to the bottom-up emission inventory of reference. In contrast, the simplified disaggregation method is not suitable for complex cities consisting of interconnected nuclei, resulting in correlation values < 0.5. Although top-down disaggregation of traffic emissions generally exhibits low accuracy, the accuracy is significantly higher in compact cities and might be further improved by applying a correction factor for the city center. Therefore, the method can be used by local environmental authorities in cities with limited resources and with little knowledge on the pollution situation to get an overview on the spatial distribution of the emissions generated by traffic activities.
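The top-down disaggregation idea, allocating a city-total emission to grid cells in proportion to street density, reduces to a one-line weighting. The sketch below is a hedged illustration with made-up numbers, not the inventory tool used in the study.

```python
import numpy as np

def disaggregate_by_street_density(total_emission, street_length_km):
    """Split a city-total emission over grid cells proportionally to street length per cell."""
    street_length_km = np.asarray(street_length_km, dtype=float)
    weights = street_length_km / street_length_km.sum()
    return total_emission * weights        # same shape as the street-length grid

# Illustrative 3x3 grid of street lengths (km) and a 1000 t/yr city total:
grid = np.array([[2.0, 5.0, 1.0],
                 [4.0, 9.0, 3.0],
                 [1.0, 2.0, 0.5]])
print(disaggregate_by_street_density(1000.0, grid))
```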
NASA Astrophysics Data System (ADS)
Popov, Igor; Sukov, Sergey
2018-02-01
A modification of the adaptive artificial viscosity (AAV) method is considered. This modification is based on a one-stage time approximation and is adapted to the calculation of gas dynamics problems on unstructured grids with arbitrary types of grid elements. The proposed numerical method has simplified logic and better performance and parallel efficiency compared to the implementation of the original AAV method. Computer experiments demonstrate the robustness and convergence of the method to the difference solution.
Simplified Dynamic Analysis of Grinders Spindle Node
NASA Astrophysics Data System (ADS)
Demec, Peter
2014-12-01
The contribution deals with the simplified dynamic analysis of a surface grinding machine spindle node. The dynamic analysis is based on the use of the transfer matrix method, which is essentially a matrix form of the method of initial parameters. The advantage of the described method, despite the seemingly complex mathematical apparatus, is primarily that it does not require costly commercial finite element software to solve the problem. All calculations can be made, for example, in MS Excel, which is advantageous especially in the initial stages of designing a spindle node for rapid assessment of the suitability of its design. After the entire structure of the spindle node has been detailed, it is still necessary to perform a refined dynamic analysis in an FEM environment, which requires the necessary skills and experience and is therefore economically demanding. This work was developed within grant project KEGA No. 023TUKE-4/2012 Creation of a comprehensive educational - teaching material for the article Production technique using a combination of traditional and modern information technology and e-learning.
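As a hedged illustration of the transfer matrix method the abstract relies on (not the spindle model from the paper), the sketch below chains the field matrix of a massless elastic beam segment with the point matrix of a lumped mass and finds the natural frequency of a clamped beam with a tip mass; the analytic value sqrt(3EI/(mL^3)) is recovered. The sign conventions for the state vector [deflection, slope, moment, shear] follow one common textbook choice and the numbers are illustrative.

```python
import numpy as np

def field_matrix(L, EI):
    """Massless elastic beam segment, state vector [v, theta, M, V]."""
    return np.array([[1.0, L,   L**2 / (2*EI), L**3 / (6*EI)],
                     [0.0, 1.0, L / EI,        L**2 / (2*EI)],
                     [0.0, 0.0, 1.0,           L],
                     [0.0, 0.0, 0.0,           1.0]])

def point_mass_matrix(m, omega):
    """Lumped mass: the shear jumps by the inertia force m*omega^2*v at the station."""
    P = np.eye(4)
    P[3, 0] = m * omega**2
    return P

def frequency_determinant(omega, L, EI, m):
    """Clamped at x=0 (v=theta=0), free beyond the tip mass (M=V=0):
    the 2x2 block mapping (M0, V0) to (M, V) at the tip must be singular."""
    U = point_mass_matrix(m, omega) @ field_matrix(L, EI)
    return np.linalg.det(U[2:4, 2:4])

# Scan for the sign change and compare with the analytic tip-mass frequency.
L, EI, m = 0.5, 2.0e4, 3.0
omegas = np.linspace(1.0, 1000.0, 20000)
vals = np.array([frequency_determinant(w, L, EI, m) for w in omegas])
idx = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0][0]
print(omegas[idx], np.sqrt(3 * EI / (m * L**3)))   # both ≈ 400 rad/s
```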
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becker, Ines; Schillig, Cora
A double-sided adhesive metal-based tape for use as contacting aid for SOFC fuel cells is provided. The double-sided metal-based adhesive tape is suitable for simplifying the construction of cell bundles. The double-sided metal-based adhesive tape is used for electrical contacting of the cell connector with the anode and for electrical contacting of the interconnector of the fuel cells with the cell connector. A method for producing the double-sided adhesive metal-base tape is also provided.
48 CFR 1313.302 - Purchase orders.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Purchase orders. 1313.302 Section 1313.302 Federal Acquisition Regulations System DEPARTMENT OF COMMERCE CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisitions Methods 1313.302 Purchase orders. ...
48 CFR 813.302 - Purchase orders.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Purchase orders. 813.302 Section 813.302 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisition Methods 813.302 Purchase...
48 CFR 1413.305 - Imprest fund.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Imprest fund. 1413.305 Section 1413.305 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisition Methods 1413.305 Imprest fund. ...
48 CFR 1413.305 - Imprest fund.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Imprest fund. 1413.305 Section 1413.305 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisition Methods 1413.305 Imprest fund. ...
Simplified Microarray Technique for Identifying mRNA in Rare Samples
NASA Technical Reports Server (NTRS)
Almeida, Eduardo; Kadambi, Geeta
2007-01-01
Two simplified methods of identifying messenger ribonucleic acid (mRNA), and compact, low-power apparatuses to implement the methods, are at the proof-of-concept stage of development. These methods are related to traditional methods based on hybridization of nucleic acid, but whereas the traditional methods must be practiced in laboratory settings, these methods could be practiced in field settings. Hybridization of nucleic acid is a powerful technique for detection of specific complementary nucleic acid sequences, and is increasingly being used for detection of changes in gene expression in microarrays containing thousands of gene probes. A traditional microarray study entails at least the following six steps: 1. Purification of cellular RNA, 2. Amplification of complementary deoxyribonucleic acid [cDNA] by polymerase chain reaction (PCR), 3. Labeling of cDNA with fluorophores of Cy3 (a green cyanine dye) and Cy5 (a red cyanine dye), 4. Hybridization to a microarray chip, 5. Fluorescence scanning the array(s) with dual excitation wavelengths, and 6. Analysis of the resulting images. This six-step procedure must be performed in a laboratory because it requires bulky equipment.
A 2005 biomass burning (wildfire, prescribed, and agricultural) emission inventory has been developed for the contiguous United States using a newly developed simplified method of combining information from multiple sources for use in the US EPA’s national Emission Inventory (NEI...
A 4DCT imaging-based breathing lung model with relative hysteresis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.
To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. - Highlights: • We developed a breathing human lung CFD model based on 4D-dynamic CT images. • The 4DCT-based breathing lung model is able to capture lung relative hysteresis. • A new boundary condition for lung model based on one static CT image was proposed. • The difference between lung models based on 4D and static CT images was quantified.
NASA Astrophysics Data System (ADS)
Song, Jinling; Qu, Yonghua; Wang, Jindi; Wan, Huawei; Liu, Xiaoqing
2007-06-01
The radiosity method is based on computer simulation of the 3D real structures of vegetation, such as leaves, branches and stems, which are composed of many facets. Using this method we can simulate the canopy reflectance and its bidirectional distribution for a vegetation canopy in the visible and NIR regions. But as vegetation becomes more complex, more facets are needed to compose it, so large memory and a long time to calculate view factors are required, which are the choke points of using the radiosity method to calculate the canopy BRF of larger-scale vegetation scenes. We derived a new method to solve this problem; the main idea is to abstract vegetation crown shapes and to simplify their structures, which reduces the number of facets. The facets are given optical properties according to the reflectance, transmission and absorption of the real-structure canopy. Based on the above work, we can simulate the canopy BRF of mixed scenes with different vegetation species at the large scale. In this study, taking broadleaf trees as an example and based on their structural characteristics, we abstracted their crowns as ellipsoid shells and simulated the canopy BRF in the visible and NIR regions of a large-scale scene with ellipsoids of different crown shapes and heights. From this study we conclude that LAI, LAD, the gap probability, and the sunlit and shaded surfaces are the most important parameters for simulating the simplified vegetation canopy BRF, and that the radiosity method can provide canopy BRF data under any conditions for our research.
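The core radiosity computation referred to above is the solution of B = E + diag(rho) F B for the facet radiosities. A minimal sketch with an illustrative, made-up view factor matrix is shown below; the paper's facet generation and view factor calculation are not reproduced.

```python
import numpy as np

def solve_radiosity(emission, reflectance, view_factors):
    """Solve (I - diag(rho) F) B = E for the facet radiosities B."""
    rho = np.asarray(reflectance, dtype=float)
    F = np.asarray(view_factors, dtype=float)
    E = np.asarray(emission, dtype=float)
    A = np.eye(len(E)) - rho[:, None] * F      # row i scaled by reflectance rho_i
    return np.linalg.solve(A, E)

# Three illustrative facets: one directly lit (nonzero E), two shaded.
E = np.array([1.0, 0.0, 0.0])                  # direct-illumination term
rho = np.array([0.45, 0.45, 0.30])             # facet reflectances (leaf/leaf/soil)
F = np.array([[0.0, 0.3, 0.2],                 # made-up view factors, rows sum < 1
              [0.3, 0.0, 0.2],
              [0.2, 0.2, 0.0]])
print(solve_radiosity(E, rho, F))              # radiosity of each facet
```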
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kydonieos, M; Folgueras, A; Florescu, L
2016-06-15
Purpose: Elekta recently developed a solution for in-vivo EPID dosimetry (iViewDose, Elekta AB, Stockholm, Sweden) in conjunction with the Netherlands Cancer Institute (NKI). This uses a simplified commissioning approach via Template Commissioning Models (TCMs), consisting of a subset of linac-independent pre-defined parameters. This work compares the performance of iViewDose using a TCM commissioning approach with that corresponding to full commissioning. Additionally, the dose reconstruction based on the simplified commissioning approach is validated via independent dose measurements. Methods: Measurements were performed at the NKI on a VersaHD™ (Elekta AB, Stockholm, Sweden). Treatment plans were generated with Pinnacle 9.8 (Philips Medical Systems, Eindhoven, The Netherlands). A Farmer chamber dose measurement and two EPID images were used to create a linac-specific commissioning model based on a TCM. A complete set of commissioning measurements was collected and a full commissioning model was created. The performance of iViewDose based on the two commissioning approaches was compared via a series of set-to-work tests in a slab phantom. In these tests, iViewDose reconstructs and compares EPID to TPS dose for square fields, IMRT and VMAT plans via global gamma analysis and isocentre dose difference. A clinical VMAT plan was delivered to a homogeneous Octavius 4D phantom (PTW, Freiburg, Germany). Dose was measured with the Octavius 1500 array and VeriSoft software was used for 3D dose reconstruction. EPID images were acquired. TCM-based iViewDose and 3D Octavius dose distributions were compared against the TPS. Results: For both the TCM-based and the full commissioning approaches, the pass rate, mean γ and dose difference were >97%, <0.5 and <2.5%, respectively. Equivalent gamma analysis results were obtained for iViewDose (TCM approach) and Octavius for a VMAT plan. Conclusion: iViewDose produces similar results with the simplified and full commissioning approaches. Good agreement is obtained between iViewDose (simplified approach) and the independent measurement tool. This research is funded by Elekta Limited.
Simplified procedure for computing the absorption of sound by the atmosphere
DOT National Transportation Integrated Search
2007-10-31
This paper describes a study that resulted in the development of a simplified : method for calculating attenuation by atmospheric-absorption for wide-band : sounds analyzed by one-third octave-band filters. The new method [referred to : herein as the...
Using the surface panel method to predict the steady performance of ducted propellers
NASA Astrophysics Data System (ADS)
Cai, Hao-Peng; Su, Yu-Min; Li, Xin; Shen, Hai-Long
2009-12-01
A new numerical method was developed for predicting the steady hydrodynamic performance of ducted propellers. A potential based surface panel method was applied both to the duct and the propeller, and the interaction between them was solved by an induced velocity potential iterative method. Compared with the induced velocity iterative method, the method presented can save programming and calculating time. Numerical results for a JD simplified ducted propeller series showed that the method presented is effective for predicting the steady hydrodynamic performance of ducted propellers.
Merabet, Youssef El; Meurie, Cyril; Ruichek, Yassine; Sbihi, Abderrahmane; Touahni, Raja
2015-01-01
In this paper, we present a novel strategy for roof segmentation from aerial images (orthophotoplans) based on the cooperation of edge- and region-based segmentation methods. The proposed strategy is composed of three major steps. The first one, called the pre-processing step, consists of simplifying the acquired image with an appropriate couple of invariant and gradient, optimized for the application, in order to limit illumination changes (shadows, brightness, etc.) affecting the images. The second step is composed of two main parallel treatments: on the one hand, the simplified image is segmented by watershed regions. Even if the first segmentation of this step provides good results in general, the image is often over-segmented. To alleviate this problem, an efficient region merging strategy adapted to the orthophotoplan particularities, with a 2D modeling of roof ridges technique, is applied. On the other hand, the simplified image is segmented by watershed lines. The third step consists of integrating both watershed segmentation strategies into a single cooperative segmentation scheme in order to achieve satisfactory segmentation results. Tests have been performed on orthophotoplans containing 100 roofs with varying complexity, and the results are evaluated with the VINET criterion using ground-truth image segmentation. A comparison with five popular segmentation techniques of the literature demonstrates the effectiveness and the reliability of the proposed approach. Indeed, we obtain a good segmentation rate of 96% with the proposed method compared to 87.5% with statistical region merging (SRM), 84% with mean shift, 82% with color structure code (CSC), 80% with efficient graph-based segmentation algorithm (EGBIS) and 71% with JSEG. PMID:25648706
A Simplified Guidance for Target Missiles Used in Ballistic Missile Defence Evaluation
NASA Astrophysics Data System (ADS)
Prabhakar, N.; Kumar, I. D.; Tata, S. K.; Vaithiyanathan, V.
2013-01-01
A simplified guidance scheme for the target missiles used in Ballistic Missile Defence is presented in this paper. The proposed method has two major components, a Ground Guidance Computation (GGC) and an In-Flight Guidance Computation. The GGC which runs on the ground uses a missile model to generate attitude history in pitch plane and computes launch azimuth of the missile to compensate for the effect of earth rotation. The vehicle follows the pre launch computed attitude (theta) history in pitch plane and also applies the course correction in azimuth plane based on its deviation from the pre launch computed azimuth plane. This scheme requires less computations and counters In-flight disturbances such as wind, gust etc. quite efficiently. The simulation results show that the proposed method provides the satisfactory performance and robustness.
Xu, Enhua; Zhao, Dongbo; Li, Shuhua
2015-10-13
A multireference second order perturbation theory based on a complete active space configuration interaction (CASCI) function or density matrix renormalized group (DMRG) function has been proposed. This method may be considered as an approximation to the CAS/A approach with the same reference, in which the dynamical correlation is simplified with blocked correlated second order perturbation theory based on the generalized valence bond (GVB) reference (GVB-BCPT2). This method, denoted as CASCI-BCPT2/GVB or DMRG-BCPT2/GVB, is size consistent and has a similar computational cost as the conventional second order perturbation theory (MP2). We have applied it to investigate a number of problems of chemical interest. These problems include bond-breaking potential energy surfaces in four molecules, the spectroscopic constants of six diatomic molecules, the reaction barrier for the automerization of cyclobutadiene, and the energy difference between the monocyclic and bicyclic forms of 2,6-pyridyne. Our test applications demonstrate that CASCI-BCPT2/GVB can provide comparable results with CASPT2 (second order perturbation theory based on the complete active space self-consistent-field wave function) for systems under study. Furthermore, the DMRG-BCPT2/GVB method is applicable to treat strongly correlated systems with large active spaces, which are beyond the capability of CASPT2.
Jibson, Randall W.; Jibson, Matthew W.
2003-01-01
Landslides typically cause a large proportion of earthquake damage, and the ability to predict slope performance during earthquakes is important for many types of seismic-hazard analysis and for the design of engineered slopes. Newmark's method for modeling a landslide as a rigid-plastic block sliding on an inclined plane provides a useful method for predicting approximate landslide displacements. Newmark's method estimates the displacement of a potential landslide block as it is subjected to earthquake shaking from a specific strong-motion record (earthquake acceleration-time history). A modification of Newmark's method, decoupled analysis, allows modeling landslides that are not assumed to be rigid blocks. This open-file report is available on CD-ROM and contains Java programs intended to facilitate performing both rigorous and simplified Newmark sliding-block analysis and a simplified model of decoupled analysis. For rigorous analysis, 2160 strong-motion records from 29 earthquakes are included along with a search interface for selecting records based on a wide variety of record properties. Utilities are available that allow users to add their own records to the program and use them for conducting Newmark analyses. Also included is a document containing detailed information about how to use Newmark's method to model dynamic slope performance. This program will run on any platform that supports the Java Runtime Environment (JRE) version 1.3, including Windows, Mac OSX, Linux, Solaris, etc. A minimum of 64 MB of available RAM is needed, and the fully installed program requires 400 MB of disk space.
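The rigid-block (rigorous) analysis described here amounts to double-integrating the parts of the acceleration history that exceed the critical acceleration. A minimal, hedged sketch is shown below, using simple rectangular integration and one-directional sliding; it omits the record database, the decoupled analysis, and the bookkeeping of the actual programs, and the synthetic record is purely illustrative.

```python
import numpy as np

def newmark_displacement(acc, dt, a_crit):
    """Cumulative downslope displacement of a rigid block (Newmark sliding-block).
    acc: ground acceleration history (m/s^2), dt: time step (s),
    a_crit: critical (yield) acceleration of the slope (m/s^2)."""
    vel = 0.0
    disp = 0.0
    for a in acc:
        if vel > 0.0 or a > a_crit:       # block is sliding or starts to slide
            vel += (a - a_crit) * dt       # relative acceleration drives sliding
            if vel < 0.0:
                vel = 0.0                  # block stops; no upslope sliding allowed
            disp += vel * dt
    return disp

# Illustrative use with a synthetic decaying 5 Hz pulse train and a_crit ≈ 0.1 g:
t = np.arange(0.0, 10.0, 0.005)
acc = 3.0 * np.sin(2 * np.pi * 5.0 * t) * np.exp(-0.3 * t)
print(newmark_displacement(acc, 0.005, 0.98), "m")
```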
A Simplified Diagnostic Method for Elastomer Bond Durability
NASA Technical Reports Server (NTRS)
White, Paul
2009-01-01
A simplified method has been developed for determining bond durability under exposure to water or high humidity conditions. It uses a small number of test specimens with relatively short times of water exposure at elevated temperature. The method is also gravimetric; the only equipment being required is an oven, specimen jars, and a conventional laboratory balance.
A Manual of Simplified Laboratory Methods for Operators of Wastewater Treatment Facilities.
ERIC Educational Resources Information Center
Westerhold, Arnold F., Ed.; Bennett, Ernest C., Ed.
This manual is designed to provide the small wastewater treatment plant operator, as well as the new or inexperienced operator, with simplified methods for laboratory analysis of water and wastewater. It is emphasized that this manual is not a replacement for standard methods but a guide for plants with insufficient equipment to perform analyses…
Fast calculation of the line-spread-function by transversal directions decoupling
NASA Astrophysics Data System (ADS)
Parravicini, Jacopo; Tartara, Luca; Hasani, Elton; Tomaselli, Alessandra
2016-07-01
We propose a simplified method to calculate the optical spread function of a paradigmatic system constituted by a pupil-lens with a line-shaped illumination (‘line-spread-function’). Our approach is based on decoupling the two transversal directions of the beam and treating the propagation by means of the Fourier optics formalism. This requires simpler calculations with respect to the more usual Bessel-function-based method. The model is discussed and compared with standard calculation methods by carrying out computer simulations. The proposed approach is found to be much faster than the Bessel-function-based one (CPU time ≲ 5% of the standard method), while the results of the two methods present a very good mutual agreement.
Comparison of an Agent-based Model of Disease Propagation with the Generalised SIR Epidemic Model
2009-08-01
has become a practical method for conducting epidemiological modelling. In the agent-based approach the whole township can be modelled as a system of... SIR system was initially developed based on a very simplified model of social interaction. For instance an assumption of uniform population mixing was... simulating the progress of a disease within a host and of transmission between hosts is based upon the Transportation Analysis and Simulation System
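For reference, the generalised SIR model that the agent-based results are compared against reduces to three coupled ODEs. A minimal forward-Euler sketch is given below with illustrative parameter values, not those of the report.

```python
import numpy as np

def sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=160, dt=0.1):
    """Forward-Euler integration of the classic SIR equations
       dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I
    with S, I, R expressed as population fractions."""
    steps = int(days / dt)
    S, I, R = s0, i0, 1.0 - s0 - i0
    out = np.empty((steps, 3))
    for k in range(steps):
        new_inf = beta * S * I * dt
        new_rec = gamma * I * dt
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        out[k] = (S, I, R)
    return out

trace = sir()
print("peak infected fraction:", trace[:, 1].max())
```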
Lu, Hongzhi; Xu, Shoufang
2017-06-15
Construction of ratiometric fluorescent probes often involves tedious multistep preparation or complicated coupling or chemical modification processes. The emergence of dual-emission fluorescent nanoparticles simplifies the construction process and avoids tedious chemical coupling. Herein, we report a facile strategy to prepare a ratiometric fluorescence molecularly imprinted sensor based on dual-emission nanoparticles (d-NPs), comprising carbon dots and gold nanoclusters, for detection of bisphenol A (BPA). D-NPs emitting at 460 nm and 580 nm were first prepared by a seed-growth co-microwave method using gold nanoparticles as seeds and glucose as the precursor for carbon dots. When they were applied to the proposed ratiometric fluorescence molecularly imprinted sensor, the preparation process was simplified and the sensitivity of the sensor was improved, with a detection limit of 29 nM, and visual detection of BPA was feasible based on the distinguishable fluorescence color change. The feasibility of the developed method in real samples was successfully evaluated through the analysis of BPA in water samples, with satisfactory recoveries of 95.9-98.9%, and recoveries ranging from 92.6% to 98.6% in canned food samples. When detecting BPA in positive feeding bottles, the results agreed well with those obtained by an accredited method. The approach proposed in this work to prepare ratiometric fluorescence molecularly imprinted sensors based on dual-emission nanoparticles proved to be a convenient, reliable and practical way to prepare highly sensitive and selective fluorescence sensors. Copyright © 2017 Elsevier B.V. All rights reserved.
Canavese, F; Charles, Y P; Dimeglio, A; Schuller, S; Rousset, M; Samba, A; Pereira, B; Steib, J-P
2014-11-01
Assessment of skeletal age is important in children's orthopaedics. We compared two simplified methods used in the assessment of skeletal age. Both methods have been described previously, one based on the appearance of the epiphysis at the olecranon and the other on the digital epiphyses. We also investigated the influence of assessor experience on applying these two methods. Our investigation was based on the anteroposterior left hand and lateral elbow radiographs of 44 boys (mean: 14.4; 12.4 to 16.1) and 78 girls (mean: 13.0; 11.1 to 14.9) obtained during the pubertal growth spurt. A total of nine observers examined the radiographs, with the observers assigned to three groups based on their experience (experienced, intermediate and novice). These raters were required to determine skeletal ages twice at six-week intervals. The correlation between the two methods was determined per assessment and per observer group. Intraclass correlation coefficients (ICC) evaluated the reproducibility of the two methods. The overall correlation between the two methods was r = 0.83 for boys and r = 0.84 for girls. The correlation was equal between the first and second assessments, and between the observer groups (r ≥ 0.82). There was an equally strong ICC for the assessment effect (ICC ≤ 0.4%) and observer effect (ICC ≤ 3%) for each method. There was no significant (p < 0.05) difference between the levels of experience. The two methods are equally reliable in assessing skeletal maturity. The olecranon method offers detailed information during the pubertal growth spurt, while the digital method is as accurate but less detailed, making it more useful after the pubertal growth spurt once the olecranon has ossified. ©2014 The British Editorial Society of Bone & Joint Surgery.
Wang, Zhi-Min; Liu, Ju-Yan; Liu, Xiao-Qian; Wang, De-Qin; Yan, Li-Hua; Zhu, Jin-Jin; Gao, Hui-Min; Li, Chun; Wang, Jin-Yu; Li, Chu-Yuan; Ni, Qing-Chun; Huang, Ji-Sheng; Lin, Juan
2017-05-01
As an outstanding representative of traditional Chinese medicine (TCM) prescriptions accumulated from famous TCM doctors' clinical experience over past dynasties, classical TCM excellent prescriptions (cTCMeP) are the most valuable part of the TCM system. To support the research and development of cTCMeP, a series of regulations and measures has been issued to encourage its simplified registration. There is still a long way to go, because many key problems and puzzles about technology, registration and administration in the cTCMeP R&D process remain unresolved. Based on an analysis of the registration and management regulations for botanical drug products of the US FDA, Japan, and the European EMA, the possible key problems and countermeasures in chemistry, manufacture and control (CMC) for the simplified registration of cTCMeP were analyzed in consideration of its actual situation. The method of "reference decoction extract by traditional prescription" (RDETP) is first proposed as a standard to evaluate the quality and preparation uniformity between a new product developed under simplified registration and the traditional original usage of the cTCMeP, instead of the Standard Decoction method used in Japan. The "totality of the evidence" approach, mass balance, and bioassay/biological assay of cTCMeP are especially suggested for introduction into the quality uniformity evaluation system for the raw drug material, drug substance and final product, between the modern product and the traditional decoction. Copyright© by the Chinese Pharmaceutical Association.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Y.; Edwards, R.M.; Lee, K.Y.
1997-03-01
In this paper, a simplified model with a lower order is first developed for a nuclear steam generator system and verified against some realistic environments. Based on this simplified model, a hybrid multi-input and multi-output (MIMO) control system, consisting of feedforward control (FFC) and feedback control (FBC), is designed for wide-range conditions by using the genetic algorithm (GA) technique. The FFC control, obtained by the GA optimization method, injects an a priori command input into the system to achieve optimal performance for the designed system, while the GA-based FBC control provides the necessary compensation for any disturbances or uncertainties in a real steam generator. The FBC control is an optimal design of a PI-based control system, which would be more acceptable for industrial practice and power plant control system upgrades. The designed hybrid MIMO FFC/FBC control system is first applied to the simplified model and then to a more complicated model with a higher order, which is used as a substitute for the real system to test the efficacy of the designed control system. Results from computer simulations show that the designed GA-based hybrid MIMO FFC/FBC control can achieve good responses and robust performance. Hence, it can be considered as a viable alternative to the current control system upgrade.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, P.J.
1996-07-01
A simplified method for determining the reactive rate parameters for the ignition and growth model is presented. This simplified ignition and growth (SIG) method consists of only two adjustable parameters, the ignition (I) and growth (G) rate constants. The parameters are determined by iterating these variables in DYNA2D hydrocode simulations of the failure diameter and the gap test sensitivity until the experimental values are reproduced. Examples of four widely different explosives were evaluated using the SIG model. The observed embedded gauge stress-time profiles for these explosives are compared to those calculated by the SIG equation and the results are described.
NASA Technical Reports Server (NTRS)
Ungar, Eugene K.; Richards, W. Lance
2015-01-01
The aircraft-based Stratospheric Observatory for Infrared Astronomy (SOFIA) is a platform for multiple infrared astronomical observation experiments. These experiments carry sensors cooled to liquid helium temperatures. The liquid helium supply is contained in large (i.e., 10 liters or more) vacuum-insulated dewars. Should the dewar vacuum insulation fail, the inrushing air will condense and freeze on the dewar wall, resulting in a large heat flux on the dewar's contents. The heat flux results in a rise in pressure and the actuation of the dewar pressure relief system. A previous NASA Engineering and Safety Center (NESC) assessment provided recommendations for the wall heat flux that would be expected from a loss of vacuum and detailed an appropriate method to use in calculating the maximum pressure that would occur in a loss of vacuum event. This method involved building a detailed supercritical helium compressible flow thermal/fluid model of the vent stack and exercising the model over the appropriate range of parameters. The experimenters designing science instruments for SOFIA are not experts in compressible supercritical flows and do not generally have access to the thermal/fluid modeling packages that are required to build detailed models of the vent stacks. Therefore, the SOFIA Program engaged the NESC to develop a simplified methodology to estimate the maximum pressure in a liquid helium dewar after the loss of vacuum insulation. The method would allow the university-based science instrument development teams to conservatively determine the cryostat's vent neck sizing during preliminary design of new SOFIA Science Instruments. This report details the development of the simplified method, the method itself, and the limits of its applicability. The simplified methodology provides an estimate of the dewar pressure after a loss of vacuum insulation that can be used for the initial design of the liquid helium dewar vent stacks. However, since it is not an exact tool, final verification of the dewar pressure vessel design requires a complete, detailed real fluid compressible flow model of the vent stack. The wall heat flux resulting from a loss of vacuum insulation increases the dewar pressure, which actuates the pressure relief mechanism and results in high-speed flow through the dewar vent stack. At high pressures, the flow can be choked at the vent stack inlet, at the exit, or at an intermediate transition or restriction. During previous SOFIA analyses, it was observed that there was generally a readily identifiable section of the vent stack that would limit the flow – e.g., a small diameter entrance or an orifice. It was also found that when the supercritical helium was approximated as an ideal gas at the dewar condition, the calculated mass flow rate based on choking at the limiting entrance or transition was less than the mass flow rate calculated using the detailed real fluid model. Using this lower mass flow rate would yield a conservative prediction of the dewar's wall heat flux capability. The simplified method of the current work was developed by building on this observation.
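The ideal-gas choked-flow approximation mentioned above can be written down directly; the sketch below evaluates the standard choked mass-flow relation with helium gas properties, and the restriction size, pressure, and temperature are placeholder numbers, not SOFIA values.

```python
import math

def choked_mass_flow(p0, t0, area, gamma=5.0 / 3.0, r_gas=2077.0, cd=1.0):
    """Ideal-gas choked mass flow rate [kg/s] through the limiting
    restriction of a vent stack, given stagnation pressure p0 [Pa],
    stagnation temperature t0 [K] and flow area [m^2]. gamma and r_gas
    default to helium; cd is a discharge coefficient."""
    return (cd * area * p0 * math.sqrt(gamma / (r_gas * t0))
            * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))))

# Illustrative numbers only: a 10 mm diameter restriction with helium
# at 0.2 MPa and 6 K (supercritical helium treated as an ideal gas).
area = math.pi * (0.010 / 2) ** 2
print(f"choked mass flow ~ {choked_mass_flow(2.0e5, 6.0, area):.4f} kg/s")
```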
Fast and accurate grid representations for atom-based docking with partner flexibility.
de Vries, Sjoerd J; Zacharias, Martin
2017-06-30
Macromolecular docking methods can broadly be divided into geometric and atom-based methods. Geometric methods use fast algorithms that operate on simplified, grid-like molecular representations, while atom-based methods are more realistic and flexible, but far less efficient. Here, a hybrid approach of grid-based and atom-based docking is presented, combining precalculated grid potentials with neighbor lists for fast and accurate calculation of atom-based intermolecular energies and forces. The grid representation is compatible with simultaneous multibody docking and can tolerate considerable protein flexibility. When implemented in our docking method ATTRACT, grid-based docking was found to be ∼35x faster. With the OPLSX forcefield instead of the ATTRACT coarse-grained forcefield, the average speed improvement was >100x. Grid-based representations may allow atom-based docking methods to explore large conformational spaces with many degrees of freedom, such as multiple macromolecules including flexibility. This increases the domain of biological problems to which docking methods can be applied. © 2017 Wiley Periodicals, Inc.
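For readers unfamiliar with grid-based energy evaluation, the sketch below shows a generic trilinear interpolation of a precomputed potential grid at atom positions; the grid contents, spacing, and potential are toy placeholders and not ATTRACT's actual implementation.

```python
import numpy as np

def trilinear(grid, origin, spacing, coords):
    """Trilinearly interpolate a precalculated potential grid at atom
    coordinates of shape (n, 3). grid is a 3D array of potential values."""
    rel = (coords - origin) / spacing
    i0 = np.floor(rel).astype(int)
    f = rel - i0
    energy = np.zeros(len(coords))
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.where(dx, f[:, 0], 1 - f[:, 0])
                     * np.where(dy, f[:, 1], 1 - f[:, 1])
                     * np.where(dz, f[:, 2], 1 - f[:, 2]))
                energy += w * grid[i0[:, 0] + dx, i0[:, 1] + dy, i0[:, 2] + dz]
    return energy.sum()

# Toy example: a 20^3 grid of a harmonic potential, queried for 3 "atoms".
origin, spacing = np.zeros(3), 1.0
axes = np.arange(20)
xx, yy, zz = np.meshgrid(axes, axes, axes, indexing="ij")
grid = 0.01 * ((xx - 10) ** 2 + (yy - 10) ** 2 + (zz - 10) ** 2)
atoms = np.array([[9.3, 10.2, 11.7], [5.5, 5.5, 5.5], [12.1, 8.4, 10.0]])
print(f"interpolated grid energy: {trilinear(grid, origin, spacing, atoms):.3f}")
```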
Adopting exergy analysis for use in aerospace
NASA Astrophysics Data System (ADS)
Hayes, David; Lone, Mudassir; Whidborne, James F.; Camberos, José; Coetzee, Etienne
2017-08-01
Thermodynamic analysis methods based on an exergy metric have been developed to improve the system efficiency of traditional heat-driven systems such as ground-based power plants and aircraft propulsion systems. In more recent years, however, interest in the topic has broadened to applying these second-law methods to the field of aerodynamics and to complete aerospace vehicles. Work to date is based on highly simplified structures, but such a method could be shown to benefit the highly conservative and risk-averse commercial aerospace sector. This review discusses how thermodynamic exergy analysis has the potential to facilitate a breakthrough in the optimization of aerospace vehicles treated as a system of energy systems, through the exergy-based multidisciplinary design of future flight vehicles.
Feature-Motivated Simplified Adaptive PCNN-Based Medical Image Fusion Algorithm in NSST Domain.
Ganasala, Padma; Kumar, Vinod
2016-02-01
Multimodality medical image fusion plays a vital role in diagnosis, treatment planning, and follow-up studies of various diseases. It provides a composite image containing critical information of source images required for better localization and definition of different organs and lesions. In the state-of-the-art image fusion methods based on nonsubsampled shearlet transform (NSST) and pulse-coupled neural network (PCNN), authors have used the normalized coefficient value to motivate the PCNN processing of both low-frequency (LF) and high-frequency (HF) sub-bands. This makes the fused image blurred and decreases its contrast. The main objective of this work is to design an image fusion method that gives a fused image with better contrast and more detail, suitable for clinical use. We propose a novel image fusion method utilizing feature-motivated adaptive PCNN in the NSST domain for fusion of anatomical images. The basic PCNN model is simplified, and adaptive linking strength is used. Different features are used to motivate the PCNN processing of LF and HF sub-bands. The proposed method is extended for fusion of a functional image with an anatomical image in the improved nonlinear intensity hue and saturation (INIHS) color model. Extensive fusion experiments have been performed on CT-MRI and SPECT-MRI datasets. Visual and quantitative analysis of experimental results proved that the proposed method provides satisfactory fusion outcome compared to other image fusion methods.
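A generic simplified PCNN of the kind commonly used in fusion work is sketched below to make the firing mechanism concrete; the linking weights, parameters, and coefficient-selection rule are textbook-style assumptions, not the feature-motivated adaptive variant proposed in the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def simplified_pcnn_fire_counts(stimulus, beta=0.2, alpha_theta=0.2,
                                v_l=1.0, v_theta=20.0, iterations=200):
    """Generic simplified PCNN: the feeding input F equals the external
    stimulus, the linking input L is a weighted sum of neighbouring
    firings, and neurons fire when the internal activity U exceeds a
    decaying dynamic threshold. Returns per-pixel firing counts, which
    fusion rules typically use to select sub-band coefficients."""
    w = np.array([[0.707, 1.0, 0.707],
                  [1.0,   0.0, 1.0],
                  [0.707, 1.0, 0.707]])
    f = stimulus.astype(float)
    y = np.zeros_like(f)
    theta = np.ones_like(f)
    counts = np.zeros_like(f)
    for _ in range(iterations):
        l = v_l * convolve(y, w, mode="constant")   # linking from neighbours
        u = f * (1.0 + beta * l)                    # internal activity
        y = (u > theta).astype(float)               # pulse output
        theta = np.exp(-alpha_theta) * theta + v_theta * y
        counts += y
    return counts

# Fusion rule sketch: keep the sub-band coefficient whose PCNN fired more often.
a, b = np.random.rand(64, 64), np.random.rand(64, 64)
fused = np.where(simplified_pcnn_fire_counts(a) >= simplified_pcnn_fire_counts(b), a, b)
```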
Spectroscopy by joint spectral and time domain optical coherence tomography
NASA Astrophysics Data System (ADS)
Szkulmowski, Maciej; Tamborski, Szymon; Wojtkowski, Maciej
2015-03-01
We present a methodology for spectroscopic examination of absorbing media that combines Spectral Optical Coherence Tomography and Fourier Transform Spectroscopy. The method is based on the joint Spectral and Time OCT computational scheme and simplifies the data analysis procedure compared with the commonly used windowing-based Spectroscopic OCT methods. The proposed experimental setup is self-calibrating in terms of wavelength-pixel assignment. The performance of the method in measuring absorption spectra was checked using a reflecting phantom filled with an absorbing agent (indocyanine green). The results show quantitative agreement with the exact results provided by the reference method.
NASA Astrophysics Data System (ADS)
de Carvalho, Fábio Romeu; Abe, Jair Minoro
2010-11-01
Two recent non-classical logics have been used for decision making: fuzzy logic and paraconsistent annotated evidential logic Eτ. In this paper we present a simplified version of the fuzzy decision method and compare it with the paraconsistent one. Paraconsistent annotated evidential logic Eτ, introduced by Da Costa, Vago and Subrahmanian (1991), is capable of handling uncertain and contradictory data without becoming trivial. It has been used in many applications, such as information technology, robotics, artificial intelligence, production engineering, and decision making. Intuitively, a formula of the logic Eτ has the form p(a, b), in which a and b belong to the real interval [0, 1] and represent, respectively, the degree of favorable evidence (or degree of belief) and the degree of contrary evidence (or degree of disbelief) in p. The set of all pairs (a, b), called annotations, when plotted, forms the Cartesian Unitary Square (CUS). This set, equipped with an order relation similar to that of the real numbers, forms a lattice, called the lattice of annotations. Fuzzy logic was introduced by Zadeh (1965). It seeks to systematize the study of knowledge, aiming mainly to study fuzzy knowledge (you do not know what it means) and to distinguish it from imprecise knowledge (you know what it means, but you do not know its exact value). This logic is similar to the paraconsistent annotated one in that it attributes a numeric value (only one, not two) to each proposition, so it can be called a one-valued logic. This number expresses the intensity (the degree) with which the proposition is true. Let X be a set and A a subset of X, identified by a function f(x). For each element x∈X, y = f(x)∈[0, 1]. The number y is called the degree of membership of x in A. Decision-making theories based on these logics have been shown to be powerful in many respects compared with more traditional methods, such as those based on statistics. In this paper we present a first study of a simplified version of decision-making theory based on fuzzy logic (SVMFD) and a comparison with the Paraconsistent Decision Method (PDM) based on paraconsistent annotated evidential logic Eτ, which is also summarized in this paper. An example illustrating the two methods is presented, as well as a comparison between them.
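To make the annotation (a, b) concrete, the sketch below computes the certainty and contradiction degrees often used with logic Eτ and applies a simple decision rule; the requirement level and the rule itself are illustrative assumptions, not the PDM as published.

```python
def para_analyser(a, b, requirement=0.6):
    """Toy decision rule in the spirit of paraconsistent annotated
    evidential logic Etau: from favorable evidence a and contrary
    evidence b (both in [0, 1]) compute the certainty degree and the
    contradiction degree, then decide. The 0.6 requirement level is an
    illustrative assumption."""
    certainty = a - b            # >0 leans true, <0 leans false
    contradiction = a + b - 1.0  # >0 inconsistent, <0 indeterminate
    if certainty >= requirement:
        decision = "feasible (accept)"
    elif certainty <= -requirement:
        decision = "unfeasible (reject)"
    else:
        decision = "inconclusive (gather more evidence)"
    return certainty, contradiction, decision

for a, b in [(0.9, 0.1), (0.2, 0.8), (0.8, 0.7)]:
    gc, gct, verdict = para_analyser(a, b)
    print(f"a={a}, b={b} -> Gc={gc:+.2f}, Gct={gct:+.2f}: {verdict}")
```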
High fidelity simulations of infrared imagery with animated characters
NASA Astrophysics Data System (ADS)
Näsström, F.; Persson, A.; Bergström, D.; Berggren, J.; Hedström, J.; Allvar, J.; Karlsson, M.
2012-06-01
High fidelity simulations of IR signatures and imagery tend to be slow and do not have effective support for animation of characters. Simplified rendering methods based on computer graphics methods can be used to overcome these limitations. This paper presents a method to combine these tools and produce simulated high fidelity thermal IR data of animated people in terrain. Infrared signatures for human characters have been calculated using RadThermIR. To handle multiple character models, these calculations use a simplified material model for the anatomy and clothing. Weather and temperature conditions match the IR-texture used in the terrain model. The calculated signatures are applied to the animated 3D characters that, together with the terrain model, are used to produce high fidelity IR imagery of people or crowds. For high level animation control and crowd simulations, HLAS (High Level Animation System) has been developed. There are tools available to create and visualize skeleton based animations, but tools that allow control of the animated characters on a higher level, e.g. for crowd simulation, are usually expensive and closed source. We need the flexibility of HLAS to add animation into an HLA enabled sensor system simulation framework.
NASA Technical Reports Server (NTRS)
Barth, Timothy; Saini, Subhash (Technical Monitor)
1999-01-01
This talk considers simplified finite element discretization techniques for first-order systems of conservation laws equipped with a convex (entropy) extension. Using newly developed techniques in entropy symmetrization theory, simplified forms of the Galerkin least-squares (GLS) and discontinuous Galerkin (DG) finite element methods have been developed and analyzed. The use of symmetrization variables yields numerical schemes which inherit the global entropy stability properties of the PDE system. Central to the development of the simplified GLS and DG methods is the Eigenvalue Scaling Theorem, which characterizes right symmetrizers of an arbitrary first-order hyperbolic system in terms of scaled eigenvectors of the corresponding flux Jacobian matrices. A constructive proof is provided for the Eigenvalue Scaling Theorem, with detailed consideration given to the Euler, Navier-Stokes, and magnetohydrodynamic (MHD) equations. Linear and nonlinear energy stability is proven for the simplified GLS and DG methods. Spatial convergence properties of the simplified GLS and DG methods are numerically evaluated via the computation of Ringleb flow on a sequence of successively refined triangulations. Finally, we consider a posteriori error estimates for the GLS and DG discretizations, assuming error functionals related to the integrated lift and drag of a body. Sample calculations in 2D are shown to validate the theory and implementation.
Feature point based 3D tracking of multiple fish from multi-view images
Qian, Zhi-Ming
2017-01-01
A feature point based method is proposed for tracking multiple fish in 3D space. First, a simplified representation of the object is realized through construction of two feature point models based on its appearance characteristics. After feature points are classified into occluded and non-occluded types, matching and association are performed, respectively. Finally, the object's motion trajectory in 3D space is obtained through integrating multi-view tracking results. Experimental results show that the proposed method can simultaneously track 3D motion trajectories for up to 10 fish accurately and robustly. PMID:28665966
Hybrid simplified spherical harmonics with diffusion equation for light propagation in tissues.
Chen, Xueli; Sun, Fangfang; Yang, Defu; Ren, Shenghan; Zhang, Qian; Liang, Jimin
2015-08-21
Aiming at the limitations of the simplified spherical harmonics approximation (SPN) and the diffusion equation (DE) in describing light propagation in tissues, a hybrid simplified spherical harmonics with diffusion equation (HSDE) based diffuse light transport model is proposed. In the HSDE model, the living body is first segmented into several major organs, and the organs are then divided into high-scattering tissues and other tissues. DE and SPN are employed to describe the light propagation in these two kinds of tissues respectively, and are finally coupled using the established boundary coupling condition. The HSDE model makes full use of the advantages of SPN and DE while avoiding their disadvantages, so that it provides a good balance between accuracy and computation time. Using the finite element method, the HSDE is solved for the light flux density map on the body surface. The accuracy and efficiency of the HSDE are validated with both regular geometries and digital mouse model based simulations. The corresponding results reveal that a comparable accuracy and much less computation time are achieved compared with the SPN model, as well as a much better accuracy compared with the DE one.
77 FR 73965 - Allocation of Costs Under the Simplified Methods; Hearing
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-12
... DEPARTMENT OF THE TREASURY Internal Revenue Service 26 CFR Part 1 [REG-126770-06] RIN 1545-BG07 Allocation of Costs Under the Simplified Methods; Hearing AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Notice of public hearing on notice proposed rulemaking. SUMMARY: This document provides notice of...
CO2 Biofixation and Growth Kinetics of Chlorella vulgaris and Nannochloropsis gaditana.
Adamczyk, Michał; Lasek, Janusz; Skawińska, Agnieszka
2016-08-01
CO2 biofixation was investigated using tubular bioreactors (15 and 1.5 l) in the presence of either the green alga Chlorella vulgaris or Nannochloropsis gaditana. The cultivation was carried out under the following conditions: temperature of 25 °C, inlet CO2 of 4 and 8 vol%, and artificial light enhancing photosynthesis. Higher biofixation was observed at 8 vol% CO2 concentration for both microalgae cultures than at 4 vol%. Characteristic process parameters such as productivity, CO2 fixation, and kinetic rate coefficient were determined and discussed. Simplified and advanced methods for determining CO2 fixation were compared. In the simplified method, it is assumed that 1 kg of produced biomass corresponds to 1.88 kg of recycled CO2. The advanced method is based on the empirical results of the present study (a formula using the carbon content of the biomass). It was observed that application of the simplified method can generate large errors, especially if the biomass contains a relatively low amount of carbon. N. gaditana is the recommended species for CO2 removal due to its high biofixation rate of more than 1.7 g/l/day. On day 10 of cultivation, the cell concentration was more than 1.7 × 10(7) cells/ml. In the case of C. vulgaris, the maximal biofixation rate and cell concentration did not exceed 1.4 g/l/day and 1.3 × 10(7) cells/ml, respectively.
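The two CO2-fixation accounting approaches compared in the abstract can be expressed in a few lines; the productivity and carbon fractions below are illustrative, not the study's measurements.

```python
def co2_fixation_simplified(biomass_g_per_l_day):
    """Simplified method: assume 1 kg of produced biomass fixes 1.88 kg
    of CO2 (equivalent to a biomass carbon content of about 51%)."""
    return 1.88 * biomass_g_per_l_day

def co2_fixation_carbon_based(biomass_g_per_l_day, carbon_fraction):
    """Carbon-content method: scale the measured carbon fraction of the
    biomass by the CO2/C molar mass ratio (44/12)."""
    return biomass_g_per_l_day * carbon_fraction * 44.0 / 12.0

# Illustrative productivity and carbon contents (not the paper's data):
p = 0.9  # g biomass / l / day
for cf in (0.40, 0.51):
    print(f"carbon fraction {cf:.2f}: simplified {co2_fixation_simplified(p):.2f} "
          f"vs carbon-based {co2_fixation_carbon_based(p, cf):.2f} g CO2/l/day")
```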
A highly sensitive and versatile virus titration assay in the 96-well microplate format.
Borisevich, V; Nistler, R; Hudman, D; Yamshchikov, G; Seregin, A; Yamshchikov, V
2008-02-01
This report describes a fast, reproducible, inexpensive and convenient assay system for virus titration in the 96-well format. The micromethod substantially increases assay throughput and improves data reproducibility. A highly simplified variant of virus quantification is based on immunohistochemical detection of virus amplification foci obtained without the use of agarose or semisolid overlays. It can be incorporated into several types of routine virological assays, successfully replacing the laborious and time-consuming conventional methods based on plaque formation under semisolid overlays. The method does not depend on the development of CPE and can be accommodated to assay viruses with substantial differences in growth properties. The use of enhanced immunohistochemical detection enabled a five- to six-fold reduction of the total assay time. The micromethod was specifically developed to take advantage of multichannel pipettor use to simplify handling of a large number of samples. The method performs well with an inexpensive low-power binocular, thus offering a routine assay system usable outside of a specialized laboratory setting, such as for testing of clinical or field samples. When used in focus reduction-neutralization tests (FRNT), the method accommodates very small volumes of immune serum, which is often a decisive factor in experiments involving small rodent models.
Propulsive efficiency of frog swimming with different feet and swimming patterns
Jizhuang, Fan; Wei, Zhang; Bowen, Yuan; Gangfeng, Liu
2017-01-01
Aquatic and terrestrial animals have different swimming performances and mechanical efficiencies based on their different swimming methods. To explore propulsion in swimming frogs, this study calculated mechanical efficiencies based on data describing aquatic and terrestrial webbed-foot shapes and swimming patterns. First, a simplified frog model and dynamic equation were established, and hydrodynamic forces on the foot were computed using computational fluid dynamics. Then, a two-link mechanism was used to stand in for the diverse and complicated hind legs found in different frog species, in order to simplify the input work calculation. Joint torques were derived based on the virtual work principle to compute the efficiency of foot propulsion. Finally, the two feet and swimming patterns were combined to compute propulsive efficiency. The aquatic frog demonstrated a propulsive efficiency (43.11%) between those of drag-based and lift-based propulsion, while the terrestrial frog's efficiency (29.58%) fell within the range of drag-based propulsion. The results illustrate that swimming pattern is the main factor determining swimming performance and efficiency. PMID:28302669
NASA Astrophysics Data System (ADS)
Yeom, Jong-Min; Han, Kyung-Soo; Kim, Jae-Jin
2012-05-01
Solar surface insolation (SSI) represents how much solar radiance reaches the Earth's surface in a specified area and is an important parameter in various fields such as surface energy research, meteorology, and climate change. This study calculates insolation using Multi-functional Transport Satellite (MTSAT-1R) data with a simplified cloud factor over Northeast Asia. For SSI retrieval from the geostationary satellite data, the physical model of Kawamura is modified to improve insolation estimation by considering various atmospheric constituents, such as Rayleigh scattering, water vapor, ozone, aerosols, and clouds. For more accurate atmospheric parameterization, satellite-based atmospheric constituents are used instead of constant values when estimating insolation. Cloud effects are a key problem in insolation estimation because of their complicated optical characteristics and high temporal and spatial variation. The accuracy of insolation data from satellites depends on how well cloud attenuation as a function of geostationary channels and angle can be inferred. This study uses a simplified cloud factor that depends on the reflectance and solar zenith angle. Empirical criteria to select reference data for fitting to the ground station data are applied to suggest simplified cloud factor methods. Insolation estimated using the cloud factor is compared with results of the unmodified physical model and with observations by ground-based pyranometers located in the Korean peninsula. The modified model results show far better agreement with ground truth data compared to estimates using the conventional method under overcast conditions.
Ye, Linqi; Zong, Qun; Tian, Bailing; Zhang, Xiuyun; Wang, Fang
2017-09-01
In this paper, the nonminimum phase problem of a flexible hypersonic vehicle is investigated. The main challenge of nonminimum phase is that it prevents the application of dynamic inversion methods to nonlinear control design. To solve this problem, we investigate the relationship between nonminimum phase and backstepping control, finding that a stable nonlinear controller can be obtained by changing the control loop on the basis of backstepping control. By extending the control loop to cover the internal dynamics, the internal states are directly controlled by the inputs and simultaneously serve as virtual control for the external states, making it possible to guarantee output tracking as well as internal stability. Then, based on the extended control loop, a simplified control-oriented model is developed to enable the applicability of the adaptive backstepping method. It simplifies the design process and relaxes some limitations caused by direct use of the non-simplified control-oriented model. Next, under proper assumptions, asymptotic stability is proved for constant commands, while bounded stability is proved for varying commands. The proposed method is compared with approximate backstepping control and dynamic surface control and is shown to have superior tracking accuracy as well as robustness in the simulation results. This paper may also provide beneficial guidance for the control design of other complex systems. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Stokes, Ashley M.; Semmineh, Natenael; Quarles, C. Chad
2015-01-01
Purpose: A combined biophysical- and pharmacokinetic-based method is proposed to separate, quantify, and correct for both T1 and T2* leakage effects using dual-echo DSC acquisitions to provide more accurate hemodynamic measures, as validated by a reference intravascular contrast agent (CA). Methods: Dual-echo DSC-MRI data were acquired in two rodent glioma models. The T1 leakage effects were removed and also quantified in order to subsequently correct for the remaining T2* leakage effects. Pharmacokinetic, biophysical, and combined biophysical and pharmacokinetic models were used to obtain corrected cerebral blood volume (CBV) and cerebral blood flow (CBF), and these were compared with CBV and CBF from an intravascular CA. Results: T1-corrected CBV was significantly overestimated compared to MION CBV, while T1+T2*-correction yielded CBV values closer to the reference values. The pharmacokinetic and simplified biophysical methods showed similar results and underestimated CBV in tumors exhibiting strong T2* leakage effects. The combined method was effective for correcting T1 and T2* leakage effects across tumor types. Conclusions: Correcting for both T1 and T2* leakage effects yielded more accurate measures of CBV. The combined correction method yields more reliable CBV measures than either correction method alone, but for certain brain tumor types (e.g., gliomas) the simplified biophysical method may provide a robust and computationally efficient alternative. PMID:26362714
Biped Robot Gait Planning Based on 3D Linear Inverted Pendulum Model
NASA Astrophysics Data System (ADS)
Yu, Guochen; Zhang, Jiapeng; Bo, Wu
2018-01-01
In order to optimize the biped robot's gait, the robot's walking motion is simplified to the 3D linear inverted pendulum motion model. The Center of Mass (CoM) locus is determined from the relationship between the CoM and the Zero Moment Point (ZMP) locus, with the ZMP locus planned in advance. Then, the forward gait and lateral gait are simplified as a connecting-rod structure, and the swing-leg trajectory is generated using B-spline interpolation. The stability of the walking process is discussed in conjunction with the ZMP equation. Finally, a system simulation is carried out under the given conditions to verify the validity of the proposed planning method.
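A minimal sketch of the 3D linear inverted pendulum relationship between ZMP and CoM is given below, using the standard analytic solution for a piecewise-constant ZMP; the CoM height, timing, and initial state are assumptions for illustration only.

```python
import math

def lipm_com_trajectory(x0, v0, zmp, z_c=0.8, g=9.81, duration=0.5, steps=10):
    """Analytic CoM evolution of the 3D linear inverted pendulum for a
    piecewise-constant ZMP: x(t) = p + (x0-p)cosh(t/Tc) + Tc*v0*sinh(t/Tc),
    with Tc = sqrt(z_c/g). Apply independently to the forward and lateral
    directions. The CoM height z_c and timing are illustrative values."""
    tc = math.sqrt(z_c / g)
    traj = []
    for k in range(steps + 1):
        t = duration * k / steps
        c, s = math.cosh(t / tc), math.sinh(t / tc)
        x = zmp + (x0 - zmp) * c + tc * v0 * s
        v = (x0 - zmp) / tc * s + v0 * c
        traj.append((t, x, v))
    return traj

# One single-support phase with the ZMP held at the stance foot (0.0 m)
# and the CoM starting 5 cm behind it, moving forward at 0.3 m/s.
for t, x, v in lipm_com_trajectory(x0=-0.05, v0=0.3, zmp=0.0):
    print(f"t={t:.2f}s  x={x:+.3f} m  v={v:+.3f} m/s")
```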
77 FR 15969 - Waybill Data Released in Three-Benchmark Rail Rate Proceedings
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-19
... confidentiality of the contract rates, as required by 49 U.S.C. 11904. Background In Simplified Standards for Rail Rate Cases (Simplified Standards), EP 646 (Sub-No. 1) (STB served Sept. 5, 2007), aff'd sub nom. CSX...\\ Under the Three-Benchmark method as revised in Simplified Standards, each party creates and proffers to...
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false List of laws inapplicable to contracts and subcontracts at or below the simplified acquisition threshold. 13.005 Section 13.005 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES...
Automated Simulation Updates based on Flight Data
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Ward, David G.
2007-01-01
A statistically-based method for using flight data to update aerodynamic data tables used in flight simulators is explained and demonstrated. A simplified wind-tunnel aerodynamic database for the F/A-18 aircraft is used as a starting point. Flight data from the NASA F-18 High Alpha Research Vehicle (HARV) is then used to update the data tables so that the resulting aerodynamic model characterizes the aerodynamics of the F-18 HARV. Prediction cases are used to show the effectiveness of the automated method, which requires no ad hoc adjustments by the analyst.
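The sketch below illustrates the general idea of a statistically based table update, fitting a low-order least-squares correction to the residual between flight-derived and table coefficients; the polynomial form and the synthetic data are assumptions, not the HARV procedure.

```python
import numpy as np

def fit_table_increment(alpha_deg, coeff_measured, coeff_table, order=2):
    """Fit a low-order polynomial correction (in angle of attack) to the
    residual between flight-derived and wind-tunnel-table coefficients,
    using ordinary least squares. A generic stand-in for the statistical
    update, not the actual HARV procedure."""
    residual = coeff_measured - coeff_table
    basis = np.vander(alpha_deg, order + 1)          # columns [alpha^2, alpha, 1]
    coeffs, *_ = np.linalg.lstsq(basis, residual, rcond=None)
    return coeffs

def corrected_table(alpha_grid, table_values, correction_coeffs):
    """Apply the fitted increment at the table breakpoints."""
    return table_values + np.polyval(correction_coeffs, alpha_grid)

# Synthetic example: the "flight" lift coefficient differs from the table
# by a small quadratic bias in alpha.
alpha = np.linspace(0.0, 30.0, 31)
table_cl = 0.06 * alpha
flight_cl = table_cl + 0.0004 * alpha**2 - 0.01 * alpha
c = fit_table_increment(alpha, flight_cl, table_cl)
print("fitted increment coefficients:", np.round(c, 5))
print("max residual after update:",
      np.max(np.abs(flight_cl - corrected_table(alpha, table_cl, c))))
```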
Simplified data reduction methods for the ECT test for mode 3 interlaminar fracture toughness
NASA Technical Reports Server (NTRS)
Li, Jian; Obrien, T. Kevin
1995-01-01
Simplified expressions for the parameter controlling the load point compliance and the strain energy release rate were obtained for the Edge Crack Torsion (ECT) specimen for mode 3 interlaminar fracture toughness. Data reduction methods for mode 3 toughness based on the present analysis are proposed. The effect of the transverse shear modulus, G(sub 23), on mode 3 interlaminar fracture toughness characterization was evaluated, and parameters influenced by the transverse shear modulus were identified. Analytical results indicate that a higher value of G(sub 23) results in a lower load point compliance and a lower mode 3 toughness estimate. The effect of G(sub 23) on the mode 3 toughness using the ECT specimen is negligible when an appropriate initial delamination length is chosen. A conservative estimate of mode 3 toughness can be obtained by assuming G(sub 23) = G(sub 12) for any initial delamination length.
Detection of grapes in natural environment using HOG features in low resolution images
NASA Astrophysics Data System (ADS)
Škrabánek, Pavel; Majerík, Filip
2017-07-01
Detection of grapes in real-life images is important in various viticulture applications. A grape detector based on an SVM classifier in combination with a HOG descriptor has proven very efficient at detecting white varieties in high-resolution images. Nevertheless, the high time complexity of this approach is not suitable for real-time applications, even when a detector with a simplified structure is used. Thus, we examined the possibility of applying the simplified version to images of lower resolution. For this purpose, we designed a method aimed at searching for the detector setting that gives the best time complexity vs. performance ratio. In order to provide precise evaluation results, we formed new extended datasets. We found that even when applied to low-resolution images, the simplified detector, with an appropriate setting of all tunable parameters, was competitive with other state-of-the-art solutions. We conclude that the detector is qualified for real-time detection of grapes in real-life images.
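A bare-bones HOG-plus-linear-SVM sliding-window detector of the kind discussed above is sketched below using scikit-image and scikit-learn; the window size, HOG parameters, and synthetic training patches are placeholders rather than the tuned settings reported in the paper.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

WIN = 32  # window size in pixels for low-resolution images (assumption)

def hog_features(patch):
    """HOG descriptor of a square grayscale patch."""
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def train_detector(positive_patches, negative_patches):
    """Linear SVM over HOG features of grape / background patches."""
    x = np.array([hog_features(p) for p in positive_patches + negative_patches])
    y = np.array([1] * len(positive_patches) + [0] * len(negative_patches))
    return LinearSVC(C=1.0).fit(x, y)

def sliding_window_detect(clf, image, step=8, threshold=0.0):
    """Score every WINxWIN window with the SVM decision function and
    return the windows above the threshold."""
    hits = []
    for r in range(0, image.shape[0] - WIN + 1, step):
        for c in range(0, image.shape[1] - WIN + 1, step):
            score = clf.decision_function([hog_features(image[r:r + WIN, c:c + WIN])])[0]
            if score > threshold:
                hits.append((r, c, score))
    return hits

# Synthetic stand-in data so the sketch runs end to end.
rng = np.random.default_rng(0)
pos = [rng.random((WIN, WIN)) * 0.5 + 0.5 for _ in range(20)]   # brighter "grapes"
neg = [rng.random((WIN, WIN)) * 0.5 for _ in range(20)]
clf = train_detector(pos, neg)
print(len(sliding_window_detect(clf, rng.random((96, 96)))), "detections")
```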
Adams, Bradley J; Aschheim, Kenneth W
2016-01-01
Comparison of antemortem and postmortem dental records is a leading method of victim identification, especially for incidents involving a large number of decedents. This process may be expedited with computer software that provides a ranked list of best possible matches. This study compares the conventional coding and sorting algorithms most commonly used in the United States (WinID3) with a simplified coding format that utilizes an optimized sorting algorithm. The simplified system consists of seven basic codes and utilizes an optimized algorithm based largely on the percentage of matches. To perform this research, a large reference database of approximately 50,000 antemortem and postmortem records was created. For most disaster scenarios, the proposed simplified codes, paired with the optimized algorithm, performed better than WinID3, which uses more complex codes. The detailed coding system does show better performance with extremely large numbers of records and/or significant body fragmentation. © 2015 American Academy of Forensic Sciences.
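The percentage-of-matches idea can be illustrated with a few lines of code; the code letters, tooth numbering, and example records below are hypothetical stand-ins, not the study's actual coding system or data.

```python
def percent_match_score(antemortem, postmortem):
    """Rank score based largely on the percentage of matching codes:
    compare the simplified code of each tooth recorded in both records
    and return the fraction that agree. The code letters used here
    (e.g. V=virgin, M=missing, R=restored, C=crown) are illustrative
    placeholders, not the study's exact seven-code set."""
    shared = [t for t in antemortem if t in postmortem]
    if not shared:
        return 0.0
    matches = sum(1 for t in shared if antemortem[t] == postmortem[t])
    return matches / len(shared)

def rank_candidates(postmortem, antemortem_db):
    """Return antemortem record IDs sorted by descending match score."""
    scores = {rid: percent_match_score(rec, postmortem)
              for rid, rec in antemortem_db.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Tiny example database keyed by tooth number.
am_db = {
    "AM-001": {8: "V", 9: "V", 14: "R", 19: "M", 30: "C"},
    "AM-002": {8: "R", 9: "V", 14: "V", 19: "M", 30: "V"},
}
pm = {8: "V", 9: "V", 14: "R", 30: "C"}
print(rank_candidates(pm, am_db))   # AM-001 should rank first
```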
NASA Astrophysics Data System (ADS)
Chen, Bai-Qiao; Guedes Soares, C.
2018-03-01
The present work investigates the compressive axial ultimate strength of fillet-welded steel-plated ship structures subjected to uniaxial compression, in which the residual stresses in the welded plates are calculated by a thermo-elasto-plastic finite element analysis that is used to fit an idealized model of residual stress distribution. The numerical results of ultimate strength based on the simplified model of residual stress show good agreement with those of various methods including the International Association of Classification Societies (IACS) Common Structural Rules (CSR), leading to the conclusion that the simplified model can be effectively used to represent the distribution of residual stresses in steel-plated structures in a wide range of engineering applications. It is concluded that the widths of the tension zones in the welded plates have a quasi-linear behavior with respect to the plate slenderness. The effect of residual stress on the axial strength of the stiffened plate is analyzed and discussed.
Dropout Prediction in E-Learning Courses through the Combination of Machine Learning Techniques
ERIC Educational Resources Information Center
Lykourentzou, Ioanna; Giannoukos, Ioannis; Nikolopoulos, Vassilis; Mpardis, George; Loumos, Vassili
2009-01-01
In this paper, a dropout prediction method for e-learning courses, based on three popular machine learning techniques and detailed student data, is proposed. The machine learning techniques used are feed-forward neural networks, support vector machines and probabilistic ensemble simplified fuzzy ARTMAP. Since a single technique may fail to…
Using Program Theory-Driven Evaluation Science to Crack the Da Vinci Code
ERIC Educational Resources Information Center
Donaldson, Stewart I.
2005-01-01
Program theory-driven evaluation science uses substantive knowledge, as opposed to method proclivities, to guide program evaluations. It aspires to update, clarify, simplify, and make more accessible the evolving theory of evaluation practice commonly referred to as theory-driven or theory-based evaluation. The evaluator in this chapter provides a…
NASA Technical Reports Server (NTRS)
Burhans, R. W.
1974-01-01
Details are presented of methods for providing OMEGA navigational information, including the receiver problem at the antenna and informational display and housekeeping systems based on 4-bit data processing concepts. Topics discussed include the problem of limiters, zero-crossing detectors, signal envelopes, internal timing circuits, phase counters, lane position displays, signal integrators, and software mapping problems.
12 CFR Appendix C to Part 325 - Risk-Based Capital for State Nonmember Banks: Market Risk
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10Standardized Measurement Method for Specific Risk Section 11Simplified Supervisory Formula Approach Section... apply: Affiliate with respect to a company means any company that controls, is controlled by, or is under common control with, the company. Backtesting means the comparison of a bank's internal estimates...
12 CFR Appendix C to Part 325 - Risk-Based Capital for State Nonmember Banks: Market Risk
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10Standardized Measurement Method for Specific Risk Section 11Simplified Supervisory Formula Approach Section... apply: Affiliate with respect to a company means any company that controls, is controlled by, or is under common control with, the company. Backtesting means the comparison of a bank's internal estimates...
Comparison of simplified models in the prediction of two phase flow in pipelines
NASA Astrophysics Data System (ADS)
Jerez-Carrizales, M.; Jaramillo, J. E.; Fuentes, D.
2014-06-01
Prediction of two-phase flow in pipelines is a common task in engineering. It is a complex phenomenon, and many models have been developed to find an approximate solution to the problem. Some older models, such as the Hagedorn & Brown (HB) model, have been highlighted by many authors as giving very good performance, and many modifications have been applied to this method to improve its predictions. In this work, two simplified models based on empiricism (HB and Mukherjee and Brill, MB) are considered. One mechanistic model (AN), which is based on the physics of the phenomenon but still needs some correlations called closure relations, is also used. Moreover, a steady-state drift-flux model that is flow-pattern dependent (the HK model) is implemented. The implementation of these methods was tested using data published in the scientific literature for vertical upward flows. Furthermore, a comparison of the predictive performance of the four models is made against a well from Campo Escuela Colorado. The differences among the four models are smaller than their differences from the experimental data from the well in Campo Escuela Colorado.
Motion video analysis using planar parallax
NASA Astrophysics Data System (ADS)
Sawhney, Harpreet S.
1994-04-01
Motion and structure analysis in video sequences can lead to efficient descriptions of objects and their motions. Interesting events in videos can be detected using such an analysis--for instance independent object motion when the camera itself is moving, figure-ground segregation based on the saliency of a structure compared to its surroundings. In this paper we present a method for 3D motion and structure analysis that uses a planar surface in the environment as a reference coordinate system to describe a video sequence. The motion in the video sequence is described as the motion of the reference plane, and the parallax motion of all the non-planar components of the scene. It is shown how this method simplifies the otherwise hard general 3D motion analysis problem. In addition, a natural coordinate system in the environment is used to describe the scene which can simplify motion based segmentation. This work is a part of an ongoing effort in our group towards video annotation and analysis for indexing and retrieval. Results from a demonstration system being developed are presented.
SPMBR: a scalable algorithm for mining sequential patterns based on bitmaps
NASA Astrophysics Data System (ADS)
Xu, Xiwei; Zhang, Changhai
2013-12-01
Some sequential pattern mining algorithms generate too many candidate sequences and increase the processing cost of support counting. We therefore present an effective and scalable algorithm called SPMBR (Sequential Patterns Mining based on Bitmap Representation) to solve the problem of mining sequential patterns in large databases. Our method differs from previous work on mining sequential patterns: the main difference is that the database of sequential patterns is represented by bitmaps, and a simplified bitmap structure is presented first. The algorithm generates candidate sequences by SE (Sequence Extension) and IE (Item Extension), and then obtains all frequent sequences by comparing the original bitmap and the extended-item bitmap. This approach simplifies the problem of mining sequential patterns and avoids the high processing cost of support counting. Both theory and experiments indicate that the performance of SPMBR is superior for large transaction databases, that much less memory is required for storing temporal data during the mining process, and that all sequential patterns can be mined feasibly.
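A minimal bitmap representation for support counting is sketched below to illustrate sequence extension (SE) and item extension (IE); it follows the general bitmap idea rather than SPMBR's exact data structures.

```python
def item_bitmaps(sequence):
    """Per-item bitmap for one customer sequence: bit k of an item's
    bitmap is set if the item occurs in the k-th transaction."""
    bitmaps = {}
    for pos, transaction in enumerate(sequence):
        for item in transaction:
            bitmaps[item] = bitmaps.get(item, 0) | (1 << pos)
    return bitmaps

def supports_s_extension(bitmaps, a, b):
    """Sequence-extension check for <a, b>: b must occur in some
    transaction strictly after the first transaction containing a."""
    ba, bb = bitmaps.get(a, 0), bitmaps.get(b, 0)
    if not ba or not bb:
        return False
    first_a = (ba & -ba).bit_length() - 1        # index of lowest set bit
    return bool(bb >> (first_a + 1))             # any occurrence of b after it

def supports_i_extension(bitmaps, a, b):
    """Item-extension check for (a b): a and b in the same transaction."""
    return bool(bitmaps.get(a, 0) & bitmaps.get(b, 0))

def support(database, check, a, b):
    return sum(check(item_bitmaps(seq), a, b) for seq in database)

# Each sequence is a list of transactions (itemsets).
db = [[{"a"}, {"b", "c"}, {"b"}],   # a then b -> SE support
      [{"a", "b"}, {"b"}],          # a,b together -> IE support; also SE
      [{"b"}, {"a"}]]               # neither
print("support of <a, b> :", support(db, supports_s_extension, "a", "b"))   # 2
print("support of (a b)  :", support(db, supports_i_extension, "a", "b"))   # 1
```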
NASA Technical Reports Server (NTRS)
York, P.; Labell, R. W.
1980-01-01
An aircraft wing weight estimating method based on a component buildup technique is described. A simplified analytically derived beam model, modified by a regression analysis, is used to estimate the wing box weight, utilizing a data base of 50 actual airplane wing weights. Factors representing materials and methods of construction were derived and incorporated into the basic wing box equations. Weight penalties to the wing box for fuel, engines, landing gear, stores and fold or pivot are also included. Methods for estimating the weight of additional items (secondary structure, control surfaces) have the option of using details available at the design stage (i.e., wing box area, flap area) or default values based on actual aircraft from the data base.
8760-Based Method for Representing Variable Generation Capacity Value in Capacity Expansion Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frew, Bethany A
Capacity expansion models (CEMs) are widely used to evaluate the least-cost portfolio of electricity generators, transmission, and storage needed to reliably serve load over many years or decades. CEMs can be computationally complex and are often forced to estimate key parameters using simplified methods to achieve acceptable solve times or for other reasons. In this paper, we discuss one of these parameters -- capacity value (CV). We first provide a high-level motivation for and overview of CV. We next describe existing modeling simplifications and an alternate approach for estimating CV that utilizes hourly '8760' data of load and VG resources. We then apply this 8760 method to an established CEM, the National Renewable Energy Laboratory's (NREL's) Regional Energy Deployment System (ReEDS) model (Eurek et al. 2016). While this alternative approach for CV is not itself novel, it contributes to the broader CEM community by (1) demonstrating how a simplified 8760 hourly method, which can be easily implemented in other power sector models when data is available, more accurately captures CV trends than a statistical method within the ReEDS CEM, and (2) providing a flexible modeling framework from which other 8760-based system elements (e.g., demand response, storage, and transmission) can be added to further capture important dynamic interactions, such as curtailment.
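A toy version of the 8760-based capacity value calculation is sketched below: the average variable-generation output over the highest-load hours of the year. The top-hour count, the use of gross rather than net load, and the synthetic profiles are simplifying assumptions.

```python
import numpy as np

def capacity_value_8760(load, vg_gen, vg_capacity, top_hours=100):
    """Approximate capacity value of a variable generator from hourly
    '8760' data: the average VG output (as a fraction of nameplate)
    during the highest-load hours of the year."""
    peak_idx = np.argsort(load)[-top_hours:]
    return float(np.mean(vg_gen[peak_idx]) / vg_capacity)

# Synthetic year of data: an afternoon-peaking load and a solar-like profile.
hours = np.arange(8760)
load = 60 + 25 * np.sin((hours % 24 - 6) / 24 * 2 * np.pi).clip(0)      # GW
solar = 10 * np.sin((hours % 24 - 6) / 12 * np.pi).clip(0)              # GW, 10 GW nameplate
print(f"solar capacity value ~ {capacity_value_8760(load, solar, 10.0):.2f}")
```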
A practical method of predicting the loudness of complex electrical stimuli
NASA Astrophysics Data System (ADS)
McKay, Colette M.; Henshall, Katherine R.; Farrell, Rebecca J.; McDermott, Hugh J.
2003-04-01
The output of speech processors for multiple-electrode cochlear implants consists of current waveforms with complex temporal and spatial patterns. The majority of existing processors output sequential biphasic current pulses. This paper describes a practical method of calculating loudness estimates for such stimuli, in addition to the relative loudness contributions from different cochlear regions. The method can be used either to manipulate the loudness or levels in existing processing strategies, or to control intensity cues in novel sound processing strategies. The method is based on a loudness model described by McKay et al. [J. Acoust. Soc. Am. 110, 1514-1524 (2001)] with the addition of the simplifying approximation that current pulses falling within a temporal integration window of several milliseconds' duration contribute independently to the overall loudness of the stimulus. Three experiments were carried out with six implantees who use the CI24M device manufactured by Cochlear Ltd. The first experiment validated the simplifying assumption, and allowed loudness growth functions to be calculated for use in the loudness prediction method. The following experiments confirmed the accuracy of the method using multiple-electrode stimuli with various patterns of electrode locations and current levels.
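The simplifying assumption described above, that pulses within a temporal integration window contribute independently, can be illustrated as follows; the window length, loudness-growth function, and stimulus below are assumptions for illustration, not the fitted functions from the experiments.

```python
import numpy as np

def loudness_estimate(pulse_times_ms, pulse_currents, growth, window_ms=7.0):
    """Loudness estimate under the simplifying assumption that pulses
    falling within a temporal integration window contribute independently:
    sum each pulse's contribution from a loudness-growth function and
    report the maximum over a sliding window."""
    times = np.asarray(pulse_times_ms, dtype=float)
    contrib = np.array([growth(c) for c in pulse_currents])
    totals = [contrib[(times >= t) & (times < t + window_ms)].sum() for t in times]
    return max(totals)

# Example growth function (power law on current in microamps) and a
# 900-pps pulse train alternating between two current levels.
growth = lambda current_ua: (current_ua / 100.0) ** 1.5
times = np.arange(0, 50, 1000.0 / 900)          # pulse onsets in ms
currents = [180 if i % 2 == 0 else 150 for i in range(len(times))]
print(f"relative loudness estimate: {loudness_estimate(times, currents, growth):.2f}")
```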
A weight modification sequential method for VSC-MTDC power system state estimation
NASA Astrophysics Data System (ADS)
Yang, Xiaonan; Zhang, Hao; Li, Qiang; Guo, Ziming; Zhao, Kun; Li, Xinpeng; Han, Feng
2017-06-01
This paper presents an effective sequential approach based on weight modification for VSC-MTDC power system state estimation, called the weight modification sequential method. The proposed approach simplifies the AC/DC system state estimation algorithm by modifying the weights of the state quantities to keep the matrix dimension constant. The weight modification sequential method also makes the VSC-MTDC system state estimation results more accurate and increases the speed of calculation. The effectiveness of the proposed method is demonstrated and validated on a modified IEEE 14-bus system.
Simplified predictive models for CO 2 sequestration performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mishra, Srikanta; Ganesh, Priya; Schuetter, Jared
CO2 sequestration in deep saline formations is increasingly being considered as a viable strategy for the mitigation of greenhouse gas emissions from anthropogenic sources. In this context, detailed numerical simulation based models are routinely used to understand key processes and parameters affecting pressure propagation and buoyant plume migration following CO2 injection into the subsurface. As these models are data and computation intensive, the development of computationally-efficient alternatives to conventional numerical simulators has become an active area of research. Such simplified models can be valuable assets during preliminary CO2 injection project screening, serve as a key element of probabilistic system assessment modeling tools, and assist regulators in quickly evaluating geological storage projects. We present three strategies for the development and validation of simplified modeling approaches for CO2 sequestration in deep saline formations: (1) simplified physics-based modeling, (2) statistical-learning based modeling, and (3) reduced-order method based modeling. In the first category, a set of full-physics compositional simulations is used to develop correlations for dimensionless injectivity as a function of the slope of the CO2 fractional-flow curve, variance of layer permeability values, and the nature of vertical permeability arrangement. The same variables, along with a modified gravity number, can be used to develop a correlation for the total storage efficiency within the CO2 plume footprint. Furthermore, the dimensionless average pressure buildup after the onset of boundary effects can be correlated to dimensionless time, CO2 plume footprint, and storativity contrast between the reservoir and caprock. In the second category, statistical “proxy models” are developed using the simulation domain described previously with two approaches: (a) classical Box-Behnken experimental design with a quadratic response surface, and (b) maximin Latin Hypercube sampling (LHS) based design with a multidimensional kriging metamodel fit. For roughly the same number of simulations, the LHS-based metamodel yields a more robust predictive model, as verified by a k-fold cross-validation approach (with data split into training and test sets) as well as by validation with an independent dataset. In the third category, a reduced-order modeling procedure is utilized that combines proper orthogonal decomposition (POD) for reducing problem dimensionality with trajectory-piecewise linearization (TPWL) in order to represent system response at new control settings from a limited number of training runs. Significant savings in computational time are observed with reasonable accuracy from the POD-TPWL reduced-order model for both vertical and horizontal well problems – which could be important in the context of history matching, uncertainty quantification and optimization problems. The simplified physics and statistical learning based models are also validated using an uncertainty analysis framework. Reference cumulative distribution functions of key model outcomes (i.e., plume radius and reservoir pressure buildup) generated using a 97-run full-physics simulation are successfully validated against the CDF from 10,000 sample probabilistic simulations using the simplified models.
The main contribution of this research project is the development and validation of a portfolio of simplified modeling approaches that will enable rapid feasibility and risk assessment for CO2 sequestration in deep saline formations.
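As an illustration of the statistical-learning strategy described above, the sketch below pairs a Latin hypercube design with a kriging (Gaussian process) metamodel and k-fold cross-validation; the toy simulator, parameter ranges, and sample size are placeholders, and a maximin-optimized design could be substituted for the plain LHS used here.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.model_selection import cross_val_score

def toy_simulator(x):
    """Placeholder for a full-physics CO2 injection run: returns a fake
    'plume radius' as a smooth function of (permeability, porosity,
    injection rate). Purely illustrative."""
    perm, poro, rate = x
    return 1000.0 * np.sqrt(rate) * perm**0.25 / (poro + 0.05)

# Latin hypercube design over three input parameters (a maximin-optimized
# design could be used instead of this plain sampler).
sampler = qmc.LatinHypercube(d=3, seed=1)
unit = sampler.random(n=60)
bounds_lo, bounds_hi = [10.0, 0.05, 0.5], [500.0, 0.35, 5.0]
design = qmc.scale(unit, bounds_lo, bounds_hi)
response = np.array([toy_simulator(x) for x in design])

# Kriging (Gaussian process) metamodel with k-fold cross-validation.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
scores = cross_val_score(gp, design, response, cv=5, scoring="r2")
print("cross-validated R^2:", np.round(scores, 3))
gp.fit(design, response)
print("prediction at new point:", gp.predict([[250.0, 0.2, 2.0]])[0])
```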
Cao, Mengqiu; Suo, Shiteng; Han, Xu; Jin, Ke; Sun, Yawen; Wang, Yao; Ding, Weina; Qu, Jianxun; Zhang, Xiaohua; Zhou, Yan
2017-01-01
Purpose: To evaluate the feasibility of a simplified method based on diffusion-weighted imaging (DWI) acquired with three b-values to measure tissue perfusion linked to microcirculation, to validate it against perfusion-related parameters derived from intravoxel incoherent motion (IVIM) and dynamic contrast-enhanced (DCE) magnetic resonance (MR) imaging, and to investigate its utility to differentiate low- from high-grade gliomas. Materials and Methods: The prospective study was approved by the local institutional review board and written informed consent was obtained from all patients. Between May 2016 and May 2017, 50 patients with confirmed glioma were assessed with multi-b-value DWI and DCE MR imaging at 3.0 T. Besides the conventional apparent diffusion coefficient (ADC0,1000) map, perfusion-related parametric maps for the IVIM-derived perfusion fraction (f) and pseudodiffusion coefficient (D*), DCE MR imaging-derived pharmacokinetic metrics, including Ktrans, ve and vp, as well as a metric named simplified perfusion fraction (SPF), were generated. Correlation between perfusion-related parameters was analyzed by using the Spearman rank correlation. All imaging parameters were compared between the low-grade (n = 19) and high-grade (n = 31) groups by using the Mann-Whitney U test. The diagnostic performance for tumor grading was evaluated with receiver operating characteristic (ROC) analysis. Results: SPF showed strong correlation with IVIM-derived f and D* (ρ = 0.732 and 0.716, respectively; both P < 0.001). Compared with f, SPF was more correlated with DCE MR imaging-derived Ktrans (ρ = 0.607; P < 0.001) and vp (ρ = 0.397; P = 0.004). Among all parameters, SPF achieved the highest accuracy for differentiating low- from high-grade gliomas, with an area under the ROC curve of 0.942, which was significantly higher than that of ADC0,1000 (P = 0.004). By using SPF as a discriminative index, the diagnostic sensitivity and specificity were 87.1% and 94.7%, respectively, at the optimal cut-off value of 19.26%. Conclusion: The simplified method to measure tissue perfusion based on DWI using three b-values may be helpful to differentiate low- from high-grade gliomas. SPF may serve as a valuable alternative for measuring tumor perfusion in gliomas in a noninvasive, convenient and efficient way.
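One common three-b-value simplification of IVIM-style perfusion estimation is sketched below for orientation; the exact definition of SPF used in the study may differ from this, and the b-values and signal intensities shown are illustrative only.

```python
import math

def three_b_value_perfusion(s0, s_b1, s_b2, b1=200.0, b2=1000.0):
    """A common three-b-value simplification (a generic illustration, not
    necessarily the study's SPF definition): estimate the tissue diffusion
    coefficient D from the two higher b-values, extrapolate the tissue
    signal back to b = 0, and take the perfusion fraction as the share of
    S0 not explained by it."""
    d_tissue = math.log(s_b1 / s_b2) / (b2 - b1)         # mm^2/s
    s_tissue_at_b0 = s_b1 * math.exp(b1 * d_tissue)      # extrapolated tissue signal
    return 1.0 - s_tissue_at_b0 / s0, d_tissue

# Illustrative signal intensities only.
f, d = three_b_value_perfusion(s0=1000.0, s_b1=780.0, s_b2=420.0)
print(f"perfusion fraction ~ {100 * f:.1f}%, D ~ {d:.2e} mm^2/s")
```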
Immersed boundary-simplified lattice Boltzmann method for incompressible viscous flows
NASA Astrophysics Data System (ADS)
Chen, Z.; Shu, C.; Tan, D.
2018-05-01
An immersed boundary-simplified lattice Boltzmann method is developed in this paper for simulations of two-dimensional incompressible viscous flows with immersed objects. Assisted by the fractional step technique, the problem is resolved in a predictor-corrector scheme. The predictor step solves the flow field without considering immersed objects, and the corrector step imposes the effect of immersed boundaries on the velocity field. Different from the previous immersed boundary-lattice Boltzmann method, which adopts the standard lattice Boltzmann method (LBM) as the flow solver in the predictor step, a recently developed simplified lattice Boltzmann method (SLBM) is applied in the present method to evaluate intermediate flow variables. Compared to the standard LBM, SLBM requires less virtual memory, facilitates the implementation of physical boundary conditions, and shows better numerical stability. The boundary condition-enforced immersed boundary method, which accurately ensures no-slip boundary conditions, is implemented as the boundary solver in the corrector step. Four typical numerical examples are presented to demonstrate the stability, flexibility, and accuracy of the present method.
Research on simplified parametric finite element model of automobile frontal crash
NASA Astrophysics Data System (ADS)
Wu, Linan; Zhang, Xin; Yang, Changhai
2018-05-01
The modeling method and key technologies of a simplified parametric finite element model for automobile frontal crash are studied in this paper. By establishing the auto body topological structure, extracting and parameterizing the stiffness properties of the substructures, and choosing appropriate material models for the substructures, the simplified parametric FE model of an M6 car is built. The comparison of the results indicates that the simplified parametric FE model can accurately calculate the automobile crash responses and the deformation of the key substructures, while the simulation time is reduced from 6 hours to 2 minutes.
A simplified parsimonious higher order multivariate Markov chain model
NASA Astrophysics Data System (ADS)
Wang, Chao; Yang, Chuan-sheng
2017-09-01
In this paper, a simplified parsimonious higher-order multivariate Markov chain model (SPHOMMCM) is presented. Moreover, a parameter estimation method for SPHOMMCM is given. Numerical experiments show the effectiveness of SPHOMMCM.
Efficient parallel resolution of the simplified transport equations in mixed-dual formulation
NASA Astrophysics Data System (ADS)
Barrault, M.; Lathuilière, B.; Ramet, P.; Roman, J.
2011-03-01
A reactivity computation consists of computing the highest eigenvalue of a generalized eigenvalue problem, for which an inverse power algorithm is commonly used. Very fine modelizations are difficult to treat with our sequential solver, based on the simplified transport equations, in terms of memory consumption and computational time. A first implementation of a Lagrangian-based domain decomposition method leads to poor parallel efficiency because of an increase in the power iterations [1]. In order to obtain high parallel efficiency, we improve the parallelization scheme by changing the location of the loop over the subdomains in the overall algorithm and by benefiting from the characteristics of the Raviart-Thomas finite element. The new parallel algorithm still allows us to locally adapt the numerical scheme (mesh, finite element order); however, it can be significantly optimized for the matching-grid case. The good behavior of the new parallelization scheme is demonstrated for the matching-grid case on several hundred nodes for computations based on a pin-by-pin discretization.
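The inverse power iteration underlying the reactivity computation can be sketched generically as below; the sparse operators stand in for the mixed-dual SPN discretization and are not the solver described in the paper.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import splu

def inverse_power(loss_op, prod_op, tol=1e-10, max_iter=500):
    """Inverse power iteration for the generalized eigenproblem
    F x = k M x (production vs. loss operators): repeatedly solve
    M x_new = F x and normalise, which converges to the eigenpair with
    the largest k -- the 'reactivity' eigenvalue in the neutronics
    analogy."""
    solve = splu(loss_op.tocsc()).solve
    x = np.ones(loss_op.shape[0])
    k_old, k = 0.0, 1.0
    for _ in range(max_iter):
        x = solve(prod_op @ x)
        x /= np.linalg.norm(x)
        k = (x @ (prod_op @ x)) / (x @ (loss_op @ x))
        if abs(k - k_old) < tol:
            break
        k_old = k
    return k, x

# Toy 1D diffusion-like loss operator M and an identity production operator F.
n = 200
m_op = diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n))
f_op = identity(n)
k_eff, flux = inverse_power(m_op, f_op)
print(f"dominant eigenvalue k ~ {k_eff:.6f}")
```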
Leo, Michael C; McMullen, Carmit; Wilfond, Benjamin S; Lynch, Frances L; Reiss, Jacob A; Gilmore, Marian J; Himes, Patricia; Kauffman, Tia L; Davis, James V; Jarvik, Gail P; Berg, Jonathan S; Harding, Cary; Kennedy, Kathleen A; Simpson, Dana Kostiner; Quigley, Denise I; Richards, C Sue; Rope, Alan F; Goddard, Katrina A B
2016-03-01
Advances in genome sequencing and gene discovery have created opportunities to efficiently assess more genetic conditions than ever before. Given the large number of conditions that can be screened, the implementation of expanded carrier screening using genome sequencing will require practical methods of simplifying decisions about the conditions for which patients want to be screened. One method to simplify decision making is to generate a taxonomy based on expert judgment. However, expert perceptions of condition attributes used to classify these conditions may differ from those used by patients. To understand whether expert and patient perceptions differ, we asked women who had received preconception genetic carrier screening in the last 3 years to fill out a survey to rate the attributes (predictability, controllability, visibility, and severity) of several autosomal recessive or X-linked genetic conditions. These conditions were classified into one of five taxonomy categories developed by subject experts (significantly shortened lifespan, serious medical problems, mild medical problems, unpredictable medical outcomes, and adult-onset conditions). A total of 193 women provided 739 usable ratings across 20 conditions. The mean ratings and correlations demonstrated that participants made distinctions across both attributes and categories. Aggregated mean attribute ratings across categories demonstrated logical consistency between the key features of each attribute and category, although participants perceived little difference between the mild and serious categories. This study provides empirical evidence for the validity of our proposed taxonomy, which will simplify patient decisions for results they would like to receive from preconception carrier screening via genome sequencing. © 2016 Wiley Periodicals, Inc.
Simplified half-life methods for the analysis of kinetic data
NASA Technical Reports Server (NTRS)
Eberhart, J. G.; Levin, E.
1988-01-01
The analysis of reaction rate data has as its goal the determination of the order and rate constant that characterize the data. Chemical reactions with one reactant are considered, and simplified methods for accomplishing this goal are presented. The approaches involve the use of half-lives or other fractional lives. These methods are particularly useful for the more elementary discussions of kinetics found in general and physical chemistry courses.
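A compact illustration of the half-life approach is given below: for an nth-order reaction with a single reactant, the order follows from the slope of log(t1/2) versus log([A]0), and the rate constant from the intercept. The data are synthetic.

```python
import numpy as np

def order_from_half_lives(conc0, half_life):
    """Half-life method: for an nth-order reaction with one reactant,
    log(t_half) is linear in log([A]0) with slope 1 - n, so the order n
    follows from a straight-line fit; the rate constant then follows from
    t_half = (2**(n-1) - 1) / ((n-1) * k * [A]0**(n-1))  (n != 1)."""
    slope, intercept = np.polyfit(np.log(conc0), np.log(half_life), 1)
    n = 1.0 - slope
    if abs(n - 1.0) > 1e-6:
        k = (2 ** (n - 1) - 1) / ((n - 1) * np.exp(intercept))
    else:
        k = np.log(2) / np.exp(intercept)     # first-order special case
    return n, k

# Synthetic second-order data (k = 0.05 L/(mol*s)): t_half = 1/(k*[A]0).
a0 = np.array([0.1, 0.2, 0.5, 1.0])
t_half = 1.0 / (0.05 * a0)
n, k = order_from_half_lives(a0, t_half)
print(f"estimated order n = {n:.2f}, rate constant k = {k:.3f}")
```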
Computer vision-based method for classification of wheat grains using artificial neural network.
Sabanci, Kadir; Kayabasi, Ahmet; Toktas, Abdurrahim
2017-06-01
A simplified computer vision-based application using artificial neural network (ANN) depending on multilayer perceptron (MLP) for accurately classifying wheat grains into bread or durum is presented. The images of 100 bread and 100 durum wheat grains are taken via a high-resolution camera and subjected to pre-processing. The main visual features of four dimensions, three colors and five textures are acquired using image-processing techniques (IPTs). A total of 21 visual features are reproduced from the 12 main features to diversify the input population for training and testing the ANN model. The data sets of visual features are considered as input parameters of the ANN model. The ANN with four different input data subsets is modelled to classify the wheat grains into bread or durum. The ANN model is trained with 180 grains and its accuracy tested with 20 grains from a total of 200 wheat grains. Seven input parameters that are most effective on the classifying results are determined using the correlation-based CfsSubsetEval algorithm to simplify the ANN model. The results of the ANN model are compared in terms of accuracy rate. The best result is achieved with a mean absolute error (MAE) of 9.8 × 10^-6 by the simplified ANN model. This shows that the proposed classifier based on computer vision can be successfully exploited to automatically classify a variety of grains. © 2016 Society of Chemical Industry.
Latimer, Nicholas R; Abrams, K R; Lambert, P C; Crowther, M J; Wailoo, A J; Morden, J P; Akehurst, R L; Campbell, M J
2017-04-01
Estimates of the overall survival benefit of new cancer treatments are often confounded by treatment switching in randomised controlled trials (RCTs) - whereby patients randomised to the control group are permitted to switch onto the experimental treatment upon disease progression. In health technology assessment, estimates of the unconfounded overall survival benefit associated with the new treatment are needed. Several switching adjustment methods have been advocated in the literature, some of which have been used in health technology assessment. However, it is unclear which methods are likely to produce least bias in realistic RCT-based scenarios. We simulated RCTs in which switching, associated with patient prognosis, was permitted. Treatment effect size and time dependency, switching proportions and disease severity were varied across scenarios. We assessed the performance of alternative adjustment methods based upon bias, coverage and mean squared error, related to the estimation of true restricted mean survival in the absence of switching in the control group. We found that when the treatment effect was not time-dependent, rank preserving structural failure time models (RPSFTM) and iterative parameter estimation methods produced low levels of bias. However, in the presence of a time-dependent treatment effect, these methods produced higher levels of bias, similar to those produced by an inverse probability of censoring weights method. The inverse probability of censoring weights and structural nested models produced high levels of bias when switching proportions exceeded 85%. A simplified two-stage Weibull method produced low bias across all scenarios and, provided the treatment switching mechanism is suitable, represents an appropriate adjustment method.
Formative Research on the Simplifying Conditions Method (SCM) for Task Analysis and Sequencing.
ERIC Educational Resources Information Center
Kim, YoungHwan; Reigluth, Charles M.
The Simplifying Conditions Method (SCM) is a set of guidelines for task analysis and sequencing of instructional content under the Elaboration Theory (ET). This article introduces the fundamentals of SCM and presents the findings from a formative research study on SCM. It was conducted in two distinct phases: design and instruction. In the first…
A Simplified Method for Tissue Engineering Skeletal Muscle Organoids in Vitro
NASA Technical Reports Server (NTRS)
Shansky, Janet; DelTatto, Michael; Chromiak, Joseph; Vandenburgh, Herman
1996-01-01
Tissue-engineered three dimensional skeletal muscle organ-like structures have been formed in vitro from primary myoblasts by several different techniques. This report describes a simplified method for generating large numbers of muscle organoids from either primary embryonic avian or neonatal rodent myoblasts, which avoids the requirements for stretching and other mechanical stimulation.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-23
...-02] RIN 0694-AE98 Simplified Network Application Processing System, On-Line Registration and Account...'') electronically via BIS's Simplified Network Application Processing (SNAP-R) system. Currently, parties must... Network Applications Processing System (SNAP-R) in October 2006. The SNAP-R system provides a Web based...
Multidisciplinary Optimization Methods for Aircraft Preliminary Design
NASA Technical Reports Server (NTRS)
Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian
1994-01-01
This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, which involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... employee pensions-IRS Form 5305-SEP. 2520.104-48 Section 2520.104-48 Labor Regulations Relating to Labor... compliance for model simplified employee pensions—IRS Form 5305-SEP. Under the authority of section 110 of... Security Act of 1974 in the case of a simplified employee pension (SEP) described in section 408(k) of the...
ERIC Educational Resources Information Center
Kwak, Meg M.; Ervin, Ruth A.; Anderson, Mary Z.; Austin, John
2004-01-01
As we begin to apply functional assessment procedures in mainstream educational settings, there is a need to explore options for identifying behavior function that are not only effective but efficient and practical for school personnel to employ. Attempts to simplify the functional assessment process are evidenced by the development of informant…
Simplified method to solve sound transmission through structures lined with elastic porous material.
Lee, J H; Kim, J
2001-11-01
An approximate analysis method is developed to calculate sound transmission through structures lined with porous material. Because the porous material has both a solid phase and a fluid phase, three wave components exist in the material, which makes the related analysis very complicated. The main idea in developing the approximate method is very simple: model the porous material using only the strongest of the three waves, which in effect idealizes the material as an equivalent fluid. The analysis procedure has to be conducted in two steps. In the first step, sound transmission through a flat double panel with a porous liner of infinite extent, which has the same cross-sectional construction as the actual structure, is solved based on the full theory and the strongest wave component is identified. In the second step, sound transmission through the actual structure is solved by modeling the porous material as an equivalent fluid while using the actual geometry of the structure. The development and validation of the method are discussed in detail. As an application example, the transmission loss through double-walled cylindrical shells with a porous core is calculated utilizing the simplified method.
NASA Astrophysics Data System (ADS)
Staszczuk, Anna
2017-03-01
The paper provides comparative results of calculations of heat exchange between the ground and typical residential buildings using simplified (quasi-stationary) and more accurate (transient, three-dimensional) methods. Such characteristics as the building's geometry, basement hollow and the construction of ground-contacting assemblies were considered, including intermittent and reduced heating modes. The calculations with simplified methods were conducted in accordance with the currently valid standard PN-EN ISO 13370:2008, Thermal performance of buildings. Heat transfer via the ground. Calculation methods. Comparative estimates concerning transient, 3-D heat flow were performed with the computer software WUFI®plus. The differences in heat exchange obtained using the more exact and the simplified methods have been quantified as a result of the analysis.
Airflow and Particle Transport Through Human Airways: A Systematic Review
NASA Astrophysics Data System (ADS)
Kharat, S. B.; Deoghare, A. B.; Pandey, K. M.
2017-08-01
This paper presents a review of the relevant literature on two-phase analysis of air and particle flow through human airways. An emphasis of the review is placed on elaborating the steps involved in two-phase analysis, namely geometric modelling methods and mathematical models. The first two parts describe various approaches that are followed for constructing an airway model upon which analyses are conducted. Two broad categories of geometric modelling, viz. simplified modelling and accurate modelling using medical scans, are discussed briefly. The ease and limitations of simplified models are considered, followed by examples of CT-based models. In the later part of the review, the different mathematical models implemented by researchers for analysis are briefly described. The mathematical models used for the air and particle phases are elaborated separately.
Simplified planar model of a car steering system with rack and pinion and McPherson suspension
NASA Astrophysics Data System (ADS)
Knapczyk, J.; Kucybała, P.
2016-09-01
The paper presents the analysis and optimization of a steering system with rack and pinion and McPherson suspension using a spatial model and an equivalent simplified planar model. The dimensions of the steering linkage that give minimum steering error can be estimated using the planar model. The steering error is defined as the difference between the actual angle made by the outer front wheel during steering manoeuvres and the calculated angle for the same wheel based on the Ackerman principle. For a given linear rack displacement, the specified angular displacements of the steering arms are determined while simultaneously ensuring the best transmission-angle characteristics, (i) without and (ii) with imposing a linear correlation between input and output. Numerical examples are used to illustrate the proposed method.
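For reference, a minimal sketch of the steering-error definition used above, based on the Ackerman (Ackermann) condition cot(delta_outer) - cot(delta_inner) = w/L; the track width, wheelbase and the assumed linkage output angle are illustrative values, not the paper's data.

```python
# Hedged sketch of the steering error: the difference between the actual
# outer-wheel angle delivered by the linkage and the ideal (Ackermann)
# outer-wheel angle for the same inner-wheel angle.
import math

def ackermann_outer_angle(delta_inner_deg, track_w, wheelbase):
    di = math.radians(delta_inner_deg)
    # Ackermann condition: cot(outer) - cot(inner) = w / L
    cot_outer = 1.0 / math.tan(di) + track_w / wheelbase
    return math.degrees(math.atan(1.0 / cot_outer))

def steering_error(delta_inner_deg, delta_outer_actual_deg, track_w=1.5, wheelbase=2.6):
    return delta_outer_actual_deg - ackermann_outer_angle(delta_inner_deg, track_w, wheelbase)

print(round(ackermann_outer_angle(30.0, 1.5, 2.6), 2))   # ideal outer angle, deg
print(round(steering_error(30.0, 27.0), 2))              # error for an assumed linkage output
```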
NASA Technical Reports Server (NTRS)
Leong, Harrison Monfook
1988-01-01
General formulae for mapping optimization problems into systems of ordinary differential equations associated with artificial neural networks are presented. A comparison is made to optimization using gradient-search methods. The performance measure is the settling time from an initial state to a target state. A simple analytical example illustrates a situation where dynamical systems representing artificial neural network methods would settle faster than those representing gradient-search. Settling time was investigated for a more complicated optimization problem using computer simulations. The problem was a simplified version of a problem in medical imaging: determining loci of cerebral activity from electromagnetic measurements at the scalp. The simulations showed that gradient based systems typically settled 50 to 100 times faster than systems based on current neural network optimization methods.
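The following is a generic sketch of how settling time from an initial state to a target can be measured and compared for two dynamical systems of this kind; the quadratic objective, the preconditioned variant and all step sizes are assumptions, unrelated to the paper's scalp-potential localization problem or its specific network dynamics.

```python
# Illustrative settling-time comparison for two ODE systems driving a state
# towards a target; forward Euler integration with assumed tolerances.
import numpy as np

def settle_time(rhs, x0, target, dt=1e-3, tol=1e-3, t_max=200.0):
    """Integrate dx/dt = rhs(x) until ||x - target|| < tol; return elapsed time."""
    x, t = np.array(x0, dtype=float), 0.0
    while np.linalg.norm(x - target) > tol and t < t_max:
        x = x + dt * rhs(x)
        t += dt
    return t

Q = np.diag([0.1, 50.0])                 # ill-conditioned quadratic: f(x) = 0.5*x@Q@x
grad = lambda x: Q @ x

gradient_flow = lambda x: -grad(x)                       # plain gradient-search dynamics
preconditioned = lambda x: -np.linalg.solve(Q, grad(x))  # a faster-settling alternative system

x0 = np.array([1.0, 1.0])
print("gradient flow settles at t =", round(settle_time(gradient_flow, x0, 0.0), 2))
print("preconditioned system settles at t =", round(settle_time(preconditioned, x0, 0.0), 2))
```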
Review of Qualitative Approaches for the Construction Industry: Designing a Risk Management Toolbox
Spee, Ton; Gillen, Matt; Lentz, Thomas J.; Garrod, Andrew; Evans, Paul; Swuste, Paul
2011-01-01
Objectives This paper presents the framework and protocol design for a construction industry risk management toolbox. The construction industry needs a comprehensive, systematic approach to assess and control occupational risks. These risks span several professional health and safety disciplines, emphasized by multiple international occupational research agenda projects including: falls, electrocution, noise, silica, welding fumes, and musculoskeletal disorders. Yet, the International Social Security Association says, "whereas progress has been made in safety and health, the construction industry is still a high risk sector." Methods Small- and medium-sized enterprises (SMEs) employ about 80% of the world's construction workers. In recent years a strategy for qualitative occupational risk management, known as Control Banding (CB) has gained international attention as a simplified approach for reducing work-related risks. CB groups hazards into stratified risk 'bands', identifying commensurate controls to reduce the level of risk and promote worker health and safety. We review these qualitative solutions-based approaches and identify strengths and weaknesses toward designing a simplified CB 'toolbox' approach for use by SMEs in construction trades. Results This toolbox design proposal includes international input on multidisciplinary approaches for performing a qualitative risk assessment determining a risk 'band' for a given project. Risk bands are used to identify the appropriate level of training to oversee construction work, leading to commensurate and appropriate control methods to perform the work safely. Conclusion The Construction Toolbox presents a review-generated format to harness multiple solutions-based national programs and publications for controlling construction-related risks with simplified approaches across the occupational safety, health and hygiene professions. PMID:22953194
NASA Astrophysics Data System (ADS)
Belyaev, Andrey K.; Yakovleva, Svetlana A.
2017-10-01
Aims: We derive a simplified model for estimating atomic data on inelastic processes in low-energy collisions of heavy-particles with hydrogen, in particular for the inelastic processes with high and moderate rate coefficients. It is known that these processes are important for non-LTE modeling of cool stellar atmospheres. Methods: Rate coefficients are evaluated using a derived method, which is a simplified version of a recently proposed approach based on the asymptotic method for electronic structure calculations and the Landau-Zener model for nonadiabatic transition probability determination. Results: The rate coefficients are found to be expressed via statistical probabilities and reduced rate coefficients. It turns out that the reduced rate coefficients for mutual neutralization and ion-pair formation processes depend on single electronic bound energies of an atom, while the reduced rate coefficients for excitation and de-excitation processes depend on two electronic bound energies. The reduced rate coefficients are calculated and tabulated as functions of electronic bound energies. The derived model is applied to potassium-hydrogen collisions. For the first time, rate coefficients are evaluated for inelastic processes in K+H and K++H- collisions for all transitions from ground states up to and including ionic states. Tables with calculated data are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/606/A147
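For orientation, a minimal sketch of the Landau-Zener single-passage transition probability that underlies such model approaches is given below; the coupling, velocity and slope-difference values are illustrative assumptions in SI units, not the reduced rate coefficients tabulated by the authors.

```python
# Landau-Zener single-passage probability: p = exp(-2*pi*H12**2 / (hbar * v * |dF|)).
# All numerical inputs are illustrative assumptions, not data from the paper.
import math

HBAR = 1.054571817e-34  # J*s

def landau_zener_probability(h12, velocity, delta_slope):
    """h12: diabatic coupling (J); velocity: radial speed at the crossing (m/s);
    delta_slope: |difference of diabatic potential slopes| at the crossing (J/m)."""
    return math.exp(-2.0 * math.pi * h12**2 / (HBAR * velocity * delta_slope))

p = landau_zener_probability(h12=1e-21, velocity=5e3, delta_slope=1e-9)
# Probability of ending on the other adiabatic state after a double passage:
print(p, 2.0 * p * (1.0 - p))
```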
Dobashi, Akira; Goda, Kenichi; Yoshimura, Noboru; Ohya, Tomohiko R; Kato, Masayuki; Sumiyama, Kazuki; Matsushima, Masato; Hirooka, Shinichi; Ikegami, Masahiro; Tajiri, Hisao
2016-01-01
AIM To simplify the diagnostic criteria for superficial esophageal squamous cell carcinoma (SESCC) on Narrow Band Imaging combined with magnifying endoscopy (NBI-ME). METHODS This study was based on the post-hoc analysis of a randomized controlled trial. We performed NBI-ME for 147 patients with present or a history of squamous cell carcinoma in the head and neck, or esophagus between January 2009 and June 2011. Two expert endoscopists detected 89 lesions that were suspicious for SESCC lesions, which had been prospectively evaluated for the following 6 NBI-ME findings in real time: “intervascular background coloration”; “proliferation of intrapapillary capillary loops (IPCL)”; and “dilation”, “tortuosity”, “change in caliber”, and “various shapes (VS)” of IPCLs (i.e., Inoue’s tetrad criteria). The histologic examination of specimens was defined as the gold standard for diagnosis. A stepwise logistic regression analysis was used to identify candidates for the simplified criteria from among the 6 NBI-ME findings for diagnosing SESCCs. We evaluated diagnostic performance of the simplified criteria compared with that of Inoue’s criteria. RESULTS Fifty-four lesions (65%) were histologically diagnosed as SESCCs and the others as low-grade intraepithelial neoplasia or inflammation. In the univariate analysis, proliferation, tortuosity, change in caliber, and VS were significantly associated with SESCC (P < 0.01). The combination of VS and proliferation was statistically extracted from the 6 NBI-ME findings by using the stepwise logistic regression model. We defined the combination of VS and proliferation as simplified dyad criteria for SESCC. The areas under the curve of the simplified dyad criteria and Inoue’s tetrad criteria were 0.70 and 0.73, respectively. No significant difference was shown between them. The sensitivity, specificity, and accuracy of diagnosis for SESCC were 77.8%, 57.1%, 69.7% and 51.9%, 80.0%, 62.9% for the simplified dyad criteria and Inoue’s tetrad criteria, respectively. CONCLUSION The combination of proliferation and VS may serve as simplified criteria for the diagnosis of SESCC using NBI-ME. PMID:27895406
Maupin, Molly A.; Senay, Gabriel B.; Kenny, Joan F.; Savoca, Mark E.
2012-01-01
Recent advances in remote-sensing technology and Simplified Surface Energy Balance (SSEB) methods can provide accurate and repeatable estimates of evapotranspiration (ET) when used with satellite observations of irrigated lands. Estimates of ET are generally considered equivalent to consumptive use (CU) because they represent the part of applied irrigation water that is evaporated, transpired, or otherwise not available for immediate reuse. The U.S. Geological Survey compared ET estimates from SSEB methods to CU data collected for 1995 using indirect methods as part of the National Water Use Information Program (NWUIP). Ten-year (2000-2009) average ET estimates from SSEB methods were derived using Moderate Resolution Imaging Spectroradiometer (MODIS) 1-kilometer satellite land surface temperature and gridded weather datasets from the Global Data Assimilation System (GDAS). County-level CU estimates for 1995 were assembled and referenced to 1-kilometer grid cells to synchronize with the SSEB ET estimates. Both datasets were seasonally and spatially weighted to represent the irrigation season (June-September) and those lands that were identified in the county as irrigated. A strong relation (R2 greater than 0.7) was determined between NWUIP CU and SSEB ET data. Regionally, the relation is stronger in arid western states than in humid eastern states, and positive and negative biases are both present at state-level comparisons. SSEB ET estimates can play a major role in monitoring and updating county-based CU estimates by providing a quick and cost-effective method to detect major year-to-year changes at county levels, as well as providing a means to disaggregate county-based ET estimates to sub-county levels. More research is needed to identify the causes for differences in state-based relations.
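A hedged sketch of the core SSEB idea follows (a generic Senay-type formulation, which may differ in detail from the operational configuration used in the study): the ET fraction of each pixel is scaled between hot and cold reference land-surface temperatures and multiplied by a reference ET; the grid values are synthetic.

```python
# Simplified Surface Energy Balance sketch: ET fraction from hot/cold references.
import numpy as np

def sseb_actual_et(lst, t_hot, t_cold, et_reference):
    """lst: land-surface temperature grid (K); t_hot/t_cold: reference temperatures (K);
    et_reference: reference ET (mm/day). Returns estimated actual ET (mm/day)."""
    et_fraction = (t_hot - lst) / (t_hot - t_cold)
    et_fraction = np.clip(et_fraction, 0.0, 1.05)   # keep fractions physically plausible
    return et_fraction * et_reference

lst = np.array([[305.0, 312.0], [318.0, 322.0]])    # synthetic MODIS-like LST grid
print(sseb_actual_et(lst, t_hot=325.0, t_cold=300.0, et_reference=6.0))
```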
A simplified method for elastic-plastic-creep structural analysis
NASA Technical Reports Server (NTRS)
Kaufman, A.
1984-01-01
A simplified inelastic analysis computer program (ANSYPM) was developed for predicting the stress-strain history at the critical location of a thermomechanically cycled structure from an elastic solution. The program uses an iterative and incremental procedure to estimate the plastic strains from the material stress-strain properties and a plasticity hardening model. Creep effects are calculated on the basis of stress relaxation at constant strain, creep at constant stress or a combination of stress relaxation and creep accumulation. The simplified method was exercised on a number of problems involving uniaxial and multiaxial loading, isothermal and nonisothermal conditions, dwell times at various points in the cycles, different materials and kinematic hardening. Good agreement was found between these analytical results and nonlinear finite element solutions for these problems. The simplified analysis program used less than 1 percent of the CPU time required for a nonlinear finite element analysis.
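As a generic illustration of one ingredient of such simplified inelastic analyses (not the ANSYPM procedure itself), the sketch below integrates stress relaxation at constant total strain with a Norton power-law creep rate; the modulus, creep constants and dwell time are assumed values.

```python
# Stress relaxation at constant total strain with Norton creep (rate = A*sigma**n):
# elastic strain rate balances creep strain rate, so d(sigma)/dt = -E*A*sigma**n.
import numpy as np

def relax_stress(sigma0, E, A, n, t_end, steps=10000):
    dt = t_end / steps
    sigma = np.empty(steps + 1)
    sigma[0] = sigma0
    for i in range(steps):
        sigma[i + 1] = sigma[i] - dt * E * A * sigma[i] ** n   # explicit Euler step
    return sigma

E = 150e3          # MPa, assumed elastic modulus at temperature
A = 1e-18          # 1/(h*MPa**n), assumed Norton coefficient
n = 5.0            # assumed creep exponent
sigma = relax_stress(sigma0=300.0, E=E, A=A, n=n, t_end=100.0)   # 100 h dwell
print("stress after dwell:", round(sigma[-1], 1), "MPa")
```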
NASA Astrophysics Data System (ADS)
Nanda, Tarun; Kumar, B. Ravi; Singh, Vishal
2017-11-01
Micromechanical modeling is used to predict a material's tensile flow curve behavior based on microstructural characteristics. This research develops a simplified micromechanical modeling approach for predicting the flow curve behavior of dual-phase steels. The existing literature reports two broad approaches for determining the tensile flow curve of these steels. The modeling approach developed in this work attempts to overcome specific limitations of the existing two approaches. This approach combines a dislocation-based strain-hardening method with the rule of mixtures. In the first step of modeling, the dislocation-based strain-hardening method was employed to predict the tensile behavior of the individual phases of ferrite and martensite. In the second step, the individual flow curves were combined using the rule of mixtures to obtain the composite dual-phase flow behavior. To check the accuracy of the proposed model, four distinct dual-phase microstructures comprising different ferrite grain sizes, martensite fractions, and carbon contents in martensite were processed by annealing experiments. The true stress-strain curves for the various microstructures were predicted with the newly developed micromechanical model. The results of the micromechanical model matched closely with those of the actual tensile tests. Thus, this micromechanical modeling approach can be used to predict and optimize the tensile flow behavior of dual-phase steels.
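A minimal sketch of the second step described above (rule of mixtures over the phase flow curves) is given below; a Hollomon-type hardening law is used as a stand-in for the dislocation-based strain-hardening description, and all constants and the martensite fraction are assumptions.

```python
# Rule of mixtures for a dual-phase flow curve:
# sigma_DP = f_m*sigma_martensite + (1 - f_m)*sigma_ferrite.
import numpy as np

def hollomon(strain, k, n):
    return k * strain ** n                      # placeholder phase flow curve (MPa)

def dual_phase_flow_curve(strain, f_martensite, ferrite=(800.0, 0.25), martensite=(2200.0, 0.10)):
    sigma_f = hollomon(strain, *ferrite)        # ferrite flow stress
    sigma_m = hollomon(strain, *martensite)     # martensite flow stress
    return f_martensite * sigma_m + (1.0 - f_martensite) * sigma_f

eps = np.linspace(0.002, 0.10, 5)
print(np.round(dual_phase_flow_curve(eps, f_martensite=0.30), 1))
```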
Salient object detection based on multi-scale contrast.
Wang, Hai; Dai, Lei; Cai, Yingfeng; Sun, Xiaoqiang; Chen, Long
2018-05-01
Due to the development of deep learning networks, salient object detection based on deep learning networks, which are used to extract the features, has made a great breakthrough compared to traditional methods. At present, salient object detection mainly relies on very deep convolutional networks for feature extraction. In deep learning networks, a dramatic increase in network depth may instead cause more training errors. In this paper, we use the residual network to increase the network depth and to mitigate the errors caused by the depth increase simultaneously. Inspired by image simplification, we use color and texture features to obtain simplified images at multiple scales by means of region assimilation on the basis of super-pixels, in order to reduce the complexity of the images and to improve the accuracy of salient target detection. We refine the features at the pixel level by the multi-scale feature correction method to avoid the feature error introduced when the image is simplified at the above-mentioned region level. The final fully connected layer not only integrates features of multiple scales and levels but also works as the classifier of salient targets. The experimental results show that the proposed model achieves better results than other salient object detection models based on original deep learning networks. Copyright © 2018 Elsevier Ltd. All rights reserved.
A model for the sustainable selection of building envelope assemblies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huedo, Patricia, E-mail: huedo@uji.es; Mulet, Elena, E-mail: emulet@uji.es; López-Mesa, Belinda, E-mail: belinda@unizar.es
2016-02-15
The aim of this article is to define an evaluation model for the environmental impacts of building envelopes to support planners in the early phases of materials selection. The model is intended to estimate environmental impacts for different combinations of building envelope assemblies based on scientifically recognised sustainability indicators. These indicators will increase the amount of information that existing catalogues show to support planners in the selection of building assemblies. To define the model, first the environmental indicators were selected based on the specific aims of the intended sustainability assessment. Then, a simplified LCA methodology was developed to estimate the impacts applicable to three types of dwellings considering different envelope assemblies, building orientations and climate zones. This methodology takes into account the manufacturing, installation, maintenance and use phases of the building. Finally, the model was validated and a matrix in Excel was created as an implementation of the model. - Highlights: • Method to assess the envelope impacts based on a simplified LCA • To be used at an earlier phase than the existing methods in a simple way. • It assigns a score by means of known sustainability indicators. • It estimates data about the embodied and operating environmental impacts. • It compares the investment costs with the costs of the consumed energy.
Innovative design method of automobile profile based on Fourier descriptor
NASA Astrophysics Data System (ADS)
Gao, Shuyong; Fu, Chaoxing; Xia, Fan; Shen, Wei
2017-10-01
Aiming at innovation in the contours of the automobile side, this paper presents an innovative design method for the vehicle side profile based on the Fourier descriptor. The design flow of this method is: pre-processing, coordinate extraction, standardization, discrete Fourier transform, simplified Fourier descriptor, descriptor exchange for innovation, and inverse Fourier transform to obtain the outline of the innovative design. Innovative concepts based on gene exchange within the same species and gene exchange among different species are presented, and the contours of the innovative designs are obtained separately. A three-dimensional model of a car is obtained by referring to the profile curve obtained by exchanging xenogeneic genes. The feasibility of the method proposed in this paper is verified in various aspects.
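A hedged sketch of the descriptor pipeline named in the abstract follows: a closed 2-D profile is encoded as complex samples, transformed with the discrete Fourier transform, truncated to a simplified descriptor, and reconstructed with the inverse transform, with the "gene exchange" step imitated by swapping low-order coefficients between two profiles; the contours are synthetic, not real car profiles.

```python
# Fourier descriptors of closed contours, with truncation and coefficient exchange.
import numpy as np

def fourier_descriptor(points, keep=8):
    """Return full and truncated (simplified) Fourier descriptors of a closed contour."""
    z = points[:, 0] + 1j * points[:, 1]          # 2-D contour as complex samples
    coeffs = np.fft.fft(z)
    simplified = np.zeros_like(coeffs)
    simplified[:keep] = coeffs[:keep]             # keep DC and low-order terms
    simplified[-keep:] = coeffs[-keep:]           # ... and their negative-frequency mirrors
    return coeffs, simplified

def reconstruct(coeffs):
    z = np.fft.ifft(coeffs)
    return np.column_stack([z.real, z.imag])

t = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
profile_a = np.column_stack([np.cos(t), 0.5 * np.sin(t)])                        # synthetic profile A
profile_b = np.column_stack([np.cos(t), 0.4 * np.sin(t) + 0.1 * np.sin(3 * t)])  # synthetic profile B

full_a, simple_a = fourier_descriptor(profile_a)
full_b, _ = fourier_descriptor(profile_b)

outline_a = reconstruct(simple_a)       # simplified (smoothed) version of profile A

# "Descriptor exchange" step: graft a few of B's low-order shape terms onto A
hybrid = full_a.copy()
hybrid[1:4] = full_b[1:4]
hybrid[-3:] = full_b[-3:]
innovative_outline = reconstruct(hybrid)
print(outline_a.shape, innovative_outline.shape)   # both (256, 2)
```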
Chen, Zhaoxue; Chen, Hao
2014-01-01
A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed as the same continuous GRBF model; thus image degradation is simplified as the convolution of two continuous Gaussian functions, and image deconvolution is converted into calculating the weighted coefficients of the two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of the restored images. In order to overcome the drawback of long computation times, graphics-processing-unit multithreading or an increased spatial interval of the control points is adopted, respectively, to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
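A quick numerical check of the property the GRBF formulation relies on (the convolution of two Gaussians is again a Gaussian whose variance is the sum of the two variances) is given below; the grid spacing and widths are arbitrary illustrative choices.

```python
# Numerical check: Gaussian(s1) * Gaussian(s2) (convolution) = Gaussian(sqrt(s1^2+s2^2)).
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)      # symmetric grid with x = 0 at the centre sample
dx = x[1] - x[0]
gauss = lambda x, s: np.exp(-x**2 / (2.0 * s**2)) / (s * np.sqrt(2.0 * np.pi))

s1, s2 = 0.8, 0.6
conv = np.convolve(gauss(x, s1), gauss(x, s2), mode="same") * dx
expected = gauss(x, np.sqrt(s1**2 + s2**2))
print("max deviation:", np.max(np.abs(conv - expected)))   # should be very small
```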
Simplified modelling and analysis of a rotating Euler-Bernoulli beam with a single cracked edge
NASA Astrophysics Data System (ADS)
Yashar, Ahmed; Ferguson, Neil; Ghandchi-Tehrani, Maryam
2018-04-01
The natural frequencies and mode shapes of the flapwise and chordwise vibrations of a rotating cracked Euler-Bernoulli beam are investigated using a simplified method. This approach is based on obtaining the lateral deflection of the cracked rotating beam by subtracting the potential energy of a rotating massless spring, which represents the crack, from the total potential energy of the intact rotating beam. With this new method, it is assumed that the admissible function which satisfies the geometric boundary conditions of an intact beam is valid even in the presence of a crack. Furthermore, the centrifugal stiffness due to rotation is considered as an additional stiffness, which is obtained from the rotational speed and the geometry of the beam. Finally, the Rayleigh-Ritz method is utilised to solve the eigenvalue problem. The validity of the results is confirmed at different rotational speeds, crack depths and locations by comparison with solid and beam finite element model simulations. Furthermore, the mode shapes are compared with those obtained from finite element models using the Modal Assurance Criterion (MAC).
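For completeness, a generic sketch of the final step described above: once the Rayleigh-Ritz procedure has produced stiffness and mass matrices (including the crack-spring and centrifugal-stiffening contributions), the natural frequencies follow from a generalized eigenvalue problem; the small matrices here are arbitrary placeholders, not beam data.

```python
# Generalized eigenvalue problem K*q = omega^2 * M*q solved with scipy.
import numpy as np
from scipy.linalg import eigh

K = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  3.0, -1.0],
              [ 0.0, -1.0,  2.0]])      # assumed assembled stiffness matrix
M = np.diag([2.0, 1.5, 1.0])            # assumed assembled mass matrix

eigvals, eigvecs = eigh(K, M)           # eigenvalues are omega^2, ascending
natural_freqs_hz = np.sqrt(eigvals) / (2.0 * np.pi)
print(np.round(natural_freqs_hz, 4))    # mode shapes are the columns of eigvecs
```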
Measurement of luminance and color uniformity of displays using the large-format scanner
NASA Astrophysics Data System (ADS)
Mazikowski, Adam
2017-08-01
Uniformity of display luminance and color is important for comfort and good perception of the information presented on the display. Although display technology has developed and improved a lot over the past years, different types of displays still present a challenge in selected applications, e.g. in medical use or in the case of multi-screen installations. A simplified 9-point method of determining uniformity does not always produce satisfactory results, so a different solution is proposed in the paper. The developed system consists of the large-format X-Y-Z ISEL scanner (isel Germany AG), a Konica Minolta high-sensitivity spot photometer-colorimeter (e.g. CS-200, Konica Minolta, Inc.) and a PC computer. Dedicated software in the LabVIEW environment for controlling the scanner, transferring the measured data to the computer, and visualizing the measurement results was also prepared. Based on the developed setup, measurements of a plasma display and an LCD-LED display were performed. A heavily worn-out plasma TV unit with several visible artifacts was selected. These tests show the advantages and drawbacks of the described scanning method in comparison with the simplified 9-point uniformity method.
The limitations of simple gene set enrichment analysis assuming gene independence.
Tamayo, Pablo; Steinhardt, George; Liberzon, Arthur; Mesirov, Jill P
2016-02-01
Since its first publication in 2003, the Gene Set Enrichment Analysis method, based on the Kolmogorov-Smirnov statistic, has been heavily used, modified, and also questioned. Recently a simplified approach using a one-sample t-test score to assess enrichment and ignoring gene-gene correlations was proposed by Irizarry et al. 2009 as a serious contender. The argument criticizes Gene Set Enrichment Analysis's nonparametric nature and its use of an empirical null distribution as unnecessary and hard to compute. We refute these claims by careful consideration of the assumptions of the simplified method and its results, including a comparison with Gene Set Enrichment Analysis on a large benchmark set of 50 datasets. Our results provide strong empirical evidence that gene-gene correlations cannot be ignored due to the significant variance inflation they produce on the enrichment scores and should be taken into account when estimating gene set enrichment significance. In addition, we discuss the challenges that the complex correlation structure and multi-modality of gene sets pose more generally for gene set enrichment methods. © The Author(s) 2012.
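The variance-inflation point can be illustrated with a short Monte-Carlo check: for a gene set of size n with average pairwise correlation rho, the variance of the mean statistic is inflated by roughly 1 + (n-1)*rho relative to the independence assumption behind a simple one-sample t-test score; the numbers below are synthetic, not the benchmark datasets.

```python
# Monte-Carlo check of variance inflation for the mean of correlated gene scores.
import numpy as np

rng = np.random.default_rng(1)
n_genes, rho, n_sim = 50, 0.2, 20000
cov = np.full((n_genes, n_genes), rho) + (1.0 - rho) * np.eye(n_genes)

scores = rng.multivariate_normal(np.zeros(n_genes), cov, size=n_sim)
set_means = scores.mean(axis=1)

empirical = set_means.var() * n_genes        # variance of the mean, scaled by n
print("empirical inflation:", round(empirical, 2))
print("theoretical 1+(n-1)*rho:", 1 + (n_genes - 1) * rho)
```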
Wang, Ning; Chen, Jiajun; Zhang, Kun; Chen, Mingming; Jia, Hongzhi
2017-11-21
As thermoelectric coolers (TECs) have become highly integrated in high-heat-flux chips and high-power devices, the parasitic effect between component layers has become increasingly obvious. In this paper, a cyclic correction method for the TEC model is proposed using the equivalent parameters of the proposed simplified model, which were refined from the intrinsic parameters and the parasitic thermal conductance. The results show that the simplified model agrees well with the data of a commercial TEC under different heat loads. Furthermore, the temperature difference of the simplified model is closer to the experimental data than the conventional model and the model containing parasitic thermal conductance at large heat loads. The average errors in the temperature difference between the proposed simplified model and the experimental data are no more than 1.6 K, and the error is only 0.13 K when the absorbed heat power Qc is equal to 80% of the maximum achievable absorbed heat power Qmax. The proposed method and model provide a more accurate solution for integrated TECs that are small in size.
Simplified Least Squares Shadowing sensitivity analysis for chaotic ODEs and PDEs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chater, Mario, E-mail: chaterm@mit.edu; Ni, Angxiu, E-mail: niangxiu@mit.edu; Wang, Qiqi, E-mail: qiqi@mit.edu
This paper develops a variant of the Least Squares Shadowing (LSS) method, which has successfully computed the derivative for several chaotic ODEs and PDEs. The development in this paper aims to simplify Least Squares Shadowing method by improving how time dilation is treated. Instead of adding an explicit time dilation term as in the original method, the new variant uses windowing, which can be more efficient and simpler to implement, especially for PDEs.
Robot Control Based On Spatial-Operator Algebra
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo; Kreutz, Kenneth K.; Jain, Abhinandan
1992-01-01
Method for mathematical modeling and control of robotic manipulators based on spatial-operator algebra providing concise representation and simple, high-level theoretical frame-work for solution of kinematical and dynamical problems involving complicated temporal and spatial relationships. Recursive algorithms derived immediately from abstract spatial-operator expressions by inspection. Transition from abstract formulation through abstract solution to detailed implementation of specific algorithms to compute solution greatly simplified. Complicated dynamical problems like two cooperating robot arms solved more easily.
Photographic and drafting techniques simplify method of producing engineering drawings
NASA Technical Reports Server (NTRS)
Provisor, H.
1968-01-01
Combination of photographic and drafting techniques has been developed to simplify the preparation of three dimensional and dimetric engineering drawings. Conventional photographs can be converted to line drawings by making copy negatives on high contrast film.
Algorithms for the automatic generation of 2-D structured multi-block grids
NASA Technical Reports Server (NTRS)
Schoenfeld, Thilo; Weinerfelt, Per; Jenssen, Carl B.
1995-01-01
Two different approaches to the fully automatic generation of structured multi-block grids in two dimensions are presented. The work aims to simplify the user interactivity necessary for the definition of a multiple block grid topology. The first approach is based on an advancing front method commonly used for the generation of unstructured grids. The original algorithm has been modified toward the generation of large quadrilateral elements. The second method is based on the divide-and-conquer paradigm with the global domain recursively partitioned into sub-domains. For either method each of the resulting blocks is then meshed using transfinite interpolation and elliptic smoothing. The applicability of these methods to practical problems is demonstrated for typical geometries of fluid dynamics.
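A hedged sketch of the per-block meshing step mentioned above (transfinite, Coons-type interpolation of interior grid points from the four boundary curves of a block) follows; the boundary curves are simple synthetic functions and elliptic smoothing is not included.

```python
# Transfinite (Coons) interpolation of a structured grid inside one block.
import numpy as np

def transfinite_interpolation(bottom, top, left, right):
    """bottom/top: arrays (ni, 2); left/right: arrays (nj, 2); corners must match."""
    ni, nj = bottom.shape[0], left.shape[0]
    u = np.linspace(0.0, 1.0, ni)[:, None, None]
    v = np.linspace(0.0, 1.0, nj)[None, :, None]
    grid = ((1 - v) * bottom[:, None, :] + v * top[:, None, :]
            + (1 - u) * left[None, :, :] + u * right[None, :, :]
            - (1 - u) * (1 - v) * bottom[0] - u * v * top[-1]
            - u * (1 - v) * bottom[-1] - (1 - u) * v * top[0])
    return grid                                   # shape (ni, nj, 2)

s = np.linspace(0.0, 1.0, 21)
bottom = np.column_stack([s, 0.1 * np.sin(np.pi * s)])   # curved lower boundary
top = np.column_stack([s, np.ones_like(s)])
left = np.column_stack([np.zeros_like(s), s])
right = np.column_stack([np.ones_like(s), s])
print(transfinite_interpolation(bottom, top, left, right).shape)   # (21, 21, 2)
```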
Moitessier, N; Englebienne, P; Lee, D; Lawandi, J; Corbeil, C R
2008-01-01
Accelerating the drug discovery process requires predictive computational protocols capable of reducing or simplifying the synthetic and/or combinatorial challenge. Docking-based virtual screening methods have been developed and successfully applied to a number of pharmaceutical targets. In this review, we first present the current status of docking and scoring methods, with exhaustive lists of these. We next discuss reported comparative studies, outlining criteria for their interpretation. In the final section, we describe some of the remaining developments that would potentially lead to a universally applicable docking/scoring method. PMID:18037925
A simplified computer solution for the flexibility matrix of contacting teeth for spiral bevel gears
NASA Technical Reports Server (NTRS)
Hsu, C. Y.; Cheng, H. S.
1987-01-01
A computer code, FLEXM, was developed to calculate the flexibility matrices of contacting teeth for spiral bevel gears using a simplified analysis based on the elementary beam theory for the deformation of the gear and shaft. The simplified theory requires computer time at least one order of magnitude less than that needed for the complete finite element method analysis reported earlier by H. Chao, and it is much easier to apply for different gear and shaft geometries. Results were obtained for a set of spiral bevel gears. The tooth deflections due to torsion, bending moment, shearing strain and axial force were found to be on the order of 10^-5, 10^-6, 10^-7, and 10^-8, respectively. Thus, the torsional deformation was the most predominant factor. In the analysis of dynamic load, response frequencies were found to be larger when the mass or moment of inertia was smaller or the stiffness was larger. The change in damping coefficient had little influence on the resonance frequency, but had a marked influence on the dynamic load at the resonant frequencies.
Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization.
Kedzierski, Michal; Delis, Paulina
2016-06-23
The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery, in which the external orientation for the first image was typical for aerial photogrammetry whereas the external orientation of the second was typical for terrestrial photogrammetry. Starting from the collinearity equations, assuming that the camera tilt angle is equal to 90°, a simplified mathematical model is proposed. The proposed method can be used to determine the X, Y, Z coordinates of points based on a set of collinearity equations of a pair of images. The use of simplified collinearity equations can considerably shorten the processing time of image data from Unmanned Aerial Vehicles (UAVs), especially in low-cost systems. The conducted experiments have shown that it is possible to carry out a complete photogrammetric project of an architectural structure using a camera tilted 85°-90° (φ or ω) and simplified collinearity equations. It is also concluded that there is a correlation between the speed of the UAV and the discrepancy between the established and actual camera tilt angles.
Liu, Anlin; Li, Xingmin; He, Yanbo; Deng, Fengdong
2004-02-01
Based on the principle of energy balance, the method for calculating latent evaporation was simplified, and hence the construction of the drought remote-sensing monitoring model of the crop water shortage index was also simplified. Since the modified model involved fewer parameters and reduced computing times, it was more suitable for operational running in routine services. After collecting the relevant meteorological elements and the NOAA/AVHRR image data, the new model was applied to monitor the spring drought in Guanzhong, Shanxi Province. The results showed that the monitoring results from the new model, which also took greater account of the effects of the ground coverage conditions and meteorological elements such as wind speed and water pressure, were much better than the results from the model of the vegetation water supply index. From the viewpoint of computing time, service effectiveness and monitoring results, the simplified crop water shortage index model was more suitable for practical use. In addition, the reasons for the abnormal results of CWSI > 1 in some regions in the case studies were also discussed in this paper.
Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization
Kedzierski, Michal; Delis, Paulina
2016-01-01
The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery, in which the external orientation for the first image was typical for aerial photogrammetry whereas the external orientation of the second was typical for terrestrial photogrammetry. Starting from the collinearity equations, assuming that the camera tilt angle is equal to 90°, a simplified mathematical model is proposed. The proposed method can be used to determine the X, Y, Z coordinates of points based on a set of collinearity equations of a pair of images. The use of simplified collinearity equations can considerably shorten the processing time of image data from Unmanned Aerial Vehicles (UAVs), especially in low-cost systems. The conducted experiments have shown that it is possible to carry out a complete photogrammetric project of an architectural structure using a camera tilted 85°–90° (φ or ω) and simplified collinearity equations. It is also concluded that there is a correlation between the speed of the UAV and the discrepancy between the established and actual camera tilt angles. PMID:27347954
Mining dynamic noteworthy functions in software execution sequences.
Zhang, Bing; Huang, Guoyan; Wang, Yuqian; He, Haitao; Ren, Jiadong
2017-01-01
As the quality of crucial entities can directly affect that of software, their identification and protection become an important premise for effective software development, management, maintenance and testing, which thus contribute to improving the software quality and its attack-defending ability. Most analyses and evaluations of important entities, such as code-based static structure analysis, do not consider the actual running of the software. In this paper, from the perspective of the software execution process, we proposed an approach to mine dynamic noteworthy functions (DNFM) in software execution sequences. First, by decompiling the software and tracking stack changes, execution traces composed of a series of function addresses were acquired. Then these traces were modeled as execution sequences and simplified so as to get simplified sequences (SFS), followed by the extraction of patterns through a pattern extraction (PE) algorithm from the SFS. After that, the evaluation indicators inner-importance and inter-importance were designed to measure the noteworthiness of functions in the DNFM algorithm. Finally, these functions were sorted by their noteworthiness. Comparison and contrast were conducted on the experimental results from two traditional complex network-based node mining methods, namely PageRank and DegreeRank. The results show that the DNFM method can mine noteworthy functions in software effectively and precisely.
SF-FDTD analysis of a predictive physical model for parallel aligned liquid crystal devices
NASA Astrophysics Data System (ADS)
Márquez, Andrés.; Francés, Jorge; Martínez, Francisco J.; Gallego, Sergi; Alvarez, Mariela L.; Calzado, Eva M.; Pascual, Inmaculada; Beléndez, Augusto
2017-08-01
Recently we demonstrated a novel and simplified model enabling calculation of the voltage-dependent retardance provided by parallel-aligned liquid crystal devices (PA-LCoS) for a very wide range of incidence angles and any wavelength in the visible. To our knowledge it represents the most simplified approach still showing predictive capability. Deeper insight into the physics behind the simplified model is necessary to understand whether the parameters in the model are physically meaningful. Since the PA-LCoS is a black box for which we do not have information about the physical parameters of the device, we cannot perform this kind of analysis using the experimental retardance measurements. In this work we develop realistic simulations of the non-linear tilt of the liquid crystal director across the thickness of the liquid crystal layer in PA devices. We consider these profiles to have a sine-like shape, which is a good approximation for typical ranges of applied voltage in commercial PA-LCoS microdisplays. For these simulations we develop a rigorous method based on the split-field finite-difference time-domain (SF-FDTD) technique, which provides realistic retardance values. These values are used as the experimental measurements to which the simplified model is fitted. From this analysis we learn that the simplified model is very robust, providing unambiguous solutions when fitting its parameters. We also learn that two of the parameters in the model are physically meaningful, providing a useful reverse-engineering approach, with predictive capability, to probe into the internal characteristics of the PA-LCoS device.
NASA Astrophysics Data System (ADS)
Gómez, C. D.; González, C. M.; Osses, M.; Aristizábal, B. H.
2018-04-01
Emission data is an essential tool for understanding environmental problems associated with sources and dynamics of air pollutants in urban environments, especially those emitted from vehicular sources. There is a lack of knowledge about the estimation of air pollutant emissions and particularly its spatial and temporal distribution in South America, mainly in medium-sized cities with population less than one million inhabitants. This work performed the spatial and temporal disaggregation of the on-road vehicle emission inventory (EI) in the medium-sized Andean city of Manizales, Colombia, with a spatial resolution of 1 km × 1 km and a temporal resolution of 1 h. A reported top-down methodology, based on the analysis of traffic flow levels and road network distribution, was applied. Results obtained allowed the identification of several hotspots of emission at the downtown zone and the residential and commercial area of Manizales. Downtown exhibited the highest percentage contribution of emissions normalized by its total area, with values equal to 6% and 5% of total CO and PM10 emissions per km2 respectively. These indexes were higher than those obtained in residential-commercial area with values of 2%/km2 for both pollutants. Temporal distribution showed strong relationship with driving patterns at rush hours, as well as an important influence of passenger cars and motorcycles in emissions of CO both at downtown and residential-commercial areas, and the impact of public transport in PM10 emissions in the residential-commercial zone. Considering that detailed information about traffic counts and road network distribution is not always available in medium-sized cities, this work compares other simplified top-down methods for spatially assessing the on-road vehicle EI. Results suggested that simplified methods could underestimate the spatial allocation of downtown emissions, a zone dominated by high traffic of vehicles. The comparison between simplified methods based on total traffic counts and road density distribution suggested that the use of total traffic counts in a simplified form could enhance higher uncertainties in the spatial disaggregation of emissions. Results obtained could add new information that help to improve the air pollution management system in the city and contribute to local public policy decisions. Additionally, this work provides appropriate resolution emission fluxes for ongoing research in atmospheric modeling in the city, with the aim to improve the understanding of transport, transformation and impacts of pollutant emissions in urban air quality.
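As a simplified illustration of the top-down disaggregation idea discussed above, the sketch below distributes a city-wide emission total onto grid cells using surrogate weights built from road length and a relative traffic-flow level per cell; the cell values are made up, and the study's actual surrogates and 1 km × 1 km hourly resolution are richer.

```python
# Top-down spatial disaggregation of a total emission onto grid cells via surrogate weights.
import numpy as np

total_co_emission = 1000.0                      # t/yr for the whole city (assumed)
road_km = np.array([[0.5, 2.0, 1.0],
                    [3.0, 6.0, 2.5],
                    [0.0, 1.5, 0.5]])           # road length per cell (assumed)
traffic_level = np.array([[0.2, 0.6, 0.4],
                          [0.8, 1.0, 0.7],
                          [0.0, 0.5, 0.3]])     # relative traffic flow per cell (assumed)

weights = road_km * traffic_level
weights = weights / weights.sum()               # normalize surrogate weights
cell_emissions = total_co_emission * weights
print(np.round(cell_emissions, 1))              # gridded CO emissions, t/yr
```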
Evolutionary image simplification for lung nodule classification with convolutional neural networks.
Lückehe, Daniel; von Voigt, Gabriele
2018-05-29
Understanding decisions of deep learning techniques is important. Especially in the medical field, the reasons for a decision in a classification task are as crucial as the pure classification results. In this article, we propose a new approach to compute relevant parts of a medical image. Knowing the relevant parts makes it easier to understand decisions. In our approach, a convolutional neural network is employed to learn structures of images of lung nodules. Then, an evolutionary algorithm is applied to compute a simplified version of an unknown image based on the learned structures by the convolutional neural network. In the simplified version, irrelevant parts are removed from the original image. In the results, we show simplified images which allow the observer to focus on the relevant parts. In these images, more than 50% of the pixels are simplified. The simplified pixels do not change the meaning of the images based on the learned structures by the convolutional neural network. An experimental analysis shows the potential of the approach. Besides the examples of simplified images, we analyze the run time development. Simplified images make it easier to focus on relevant parts and to find reasons for a decision. The combination of an evolutionary algorithm employing a learned convolutional neural network is well suited for the simplification task. From a research perspective, it is interesting which areas of the images are simplified and which parts are taken as relevant.
NASA Technical Reports Server (NTRS)
1979-01-01
The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.
Configurable memory system and method for providing atomic counting operations in a memory device
Bellofatto, Ralph E.; Gara, Alan G.; Giampapa, Mark E.; Ohmacht, Martin
2010-09-14
A memory system and method for providing atomic memory-based counter operations to operating systems and applications that make most efficient use of counter-backing memory and virtual and physical address space, while simplifying operating system memory management, and enabling the counter-backing memory to be used for purposes other than counter-backing storage when desired. The encoding and address decoding enabled by the invention provides all this functionality through a combination of software and hardware.
Observer-based monitoring of heat exchangers.
Astorga-Zaragoza, Carlos-Manuel; Alvarado-Martínez, Víctor-Manuel; Zavala-Río, Arturo; Méndez-Ocaña, Rafael-Maxim; Guerrero-Ramírez, Gerardo-Vicente
2008-01-01
The goal of this work is to provide a method for monitoring performance degradation in counter-flow double-pipe heat exchangers. The overall heat transfer coefficient is estimated by an adaptive observer and monitored in order to infer when the heat exchanger needs preventive or corrective maintenance. A simplified mathematical model is used to synthesize the adaptive observer and a more complex model is used for simulation. The reliability of the proposed method was demonstrated via numerical simulations and laboratory experiments with a bench-scale pilot plant.
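A conceptual sketch (not the authors' observer or their heat-exchanger model) of how an overall heat transfer coefficient can be estimated with a gradient-type adaptive observer on a lumped single-cell model of one fluid stream is shown below; all plant parameters, gains and the "true" coefficient are assumed values.

```python
# Adaptive observer estimating an overall heat transfer coefficient U from a
# measured outlet temperature, for the lumped stream model
#   M*c*dT/dt = mdot*c*(T_in - T) + U*A*(T_other - T).

M, c, A = 5.0, 4180.0, 1.2               # kg, J/(kg*K), m^2 (assumed)
mdot, T_in, T_other = 0.3, 290.0, 360.0  # kg/s, K, K (assumed)
U_true = 850.0                           # W/(m^2*K): value the observer should recover

dt, t_end = 0.05, 600.0
L_gain, gamma = 2.0, 0.5                 # observer and adaptation gains (hand-tuned)

T, T_hat, U_hat = 300.0, 300.0, 300.0    # plant state, observer state, initial U estimate
for _ in range(int(t_end / dt)):
    # Plant simulation, standing in for the measured outlet temperature
    T += dt / (M * c) * (mdot * c * (T_in - T) + U_true * A * (T_other - T))
    # Adaptive observer driven by the measurement T
    err = T - T_hat
    T_hat += dt * ((mdot * c * (T_in - T_hat) + U_hat * A * (T_other - T_hat)) / (M * c)
                   + L_gain * err)
    U_hat += dt * gamma * err * (T_other - T_hat)    # gradient-type adaptation law

print("estimated U:", round(U_hat, 1), "W/(m^2*K)")  # expect a value close to U_true
```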
Schmitz, Guy; Kolar-Anić, Ljiljana Z; Anić, Slobodan R; Cupić, Zeljko D
2008-12-25
The stoichiometric network analysis (SNA) introduced by B. L. Clarke is applied to a simplified model of the complex oscillating Bray-Liebhafsky reaction under batch conditions, which was not examined by this method earlier. This powerful method for the analysis of steady-states stability is also used to transform the classical differential equations into dimensionless equations. This transformation is easy and leads to a form of the equations combining the advantages of classical dimensionless equations with the advantages of the SNA. The used dimensionless parameters have orders of magnitude given by the experimental information about concentrations and currents. This simplifies greatly the study of the slow manifold and shows which parameters are essential for controlling its shape and consequently have an important influence on the trajectories. The effectiveness of these equations is illustrated on two examples: the study of the bifurcations points and a simple sensitivity analysis, different from the classical one, more based on the chemistry of the studied system.
Dascălu, Cristina Gena; Antohe, Magda Ecaterina
2009-01-01
Based on the eigenvalues and eigenvectors analysis, principal component analysis has the purpose of identifying the subspace of the main components from a set of parameters, which are enough to characterize the whole set of parameters. Interpreting the data for analysis as a cloud of points, we find through geometrical transformations the directions where the cloud's dispersion is maximal - the lines that pass through the cloud's center of weight and have a maximal density of points around them (obtained by defining an appropriate criterion function and minimizing it). This method can be successfully used to simplify the statistical analysis of questionnaires, because it helps us to select from a set of items only the most relevant ones, which cover the variation of the whole set of data. For instance, in the presented sample we started from a questionnaire with 28 items and, applying principal component analysis, we identified 7 principal components - or main items - a fact that simplifies the further statistical analysis of the data significantly.
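A generic sketch of the principal-component step described above follows: centre the item scores, take the eigen-decomposition of the covariance matrix, and keep the components explaining most of the variance; the questionnaire responses below are random placeholders (with real, correlated data far fewer components, such as the 7 reported above, would typically suffice).

```python
# Principal component analysis via eigen-decomposition of the covariance matrix.
import numpy as np

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(120, 28)).astype(float)   # 120 respondents, 28 items (synthetic)

X = responses - responses.mean(axis=0)            # centre each item
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)            # ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = np.cumsum(eigvals) / eigvals.sum()
n_components = int(np.searchsorted(explained, 0.70) + 1)   # e.g. keep 70% of the variance
scores = X @ eigvecs[:, :n_components]            # projected respondent scores
print(n_components, scores.shape)
```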
Towards a Visual Quality Metric for Digital Video
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1998-01-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.
Automated Assessment of Visual Quality of Digital Video
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ellis, Stephen R. (Technical Monitor)
1997-01-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images[1-4]. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.
Jo, Ayami; Kanazawa, Manabu; Sato, Yusuke; Iwaki, Maiko; Akiba, Norihisa; Minakuchi, Shunsuke
2015-08-01
To compare the effect of conventional complete dentures (CDs) fabricated using two different impression methods on patient-reported outcomes in a randomized controlled trial (RCT). A cross-over RCT was performed with edentulous patients who required maxillomandibular CDs. Mandibular CDs were fabricated using two different methods. The conventional method used a custom tray border moulded with impression compound and a silicone impression material; the simplified method used a stock tray and alginate. Participants were randomly divided into two groups: the C-S group received the conventional method first, followed by the simplified method, and the S-C group received them in the reverse order. Adjustment was performed four times, and a washout period of 1 month was set. The primary outcome was general patient satisfaction, measured using visual analogue scales, and the secondary outcome was oral health-related quality of life, measured using the Japanese version of the Oral Health Impact Profile for edentulous patients (OHIP-EDENT-J). Twenty-four participants completed the trial. With regard to general patient satisfaction, the conventional method was rated significantly more acceptable than the simplified method. No significant differences were observed between the two methods in the OHIP-EDENT-J scores. This study showed that CDs fabricated with the conventional method were rated significantly higher for general patient satisfaction than those fabricated with the simplified method. CDs fabricated with the conventional method, which included a preliminary impression made using alginate in a stock tray and a final impression made using silicone in a border-moulded custom tray, resulted in higher general patient satisfaction. UMIN000009875. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio
The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and numerical analyses based on the finite-difference time-domain (FDTD) method. We consider anatomical European human phantoms and plane-wave exposure in the 2 GHz mobile phone frequency band. The measured whole-body averaged SAR results obtained by the proposed method are compared with the results of the FDTD analyses.
NASA Technical Reports Server (NTRS)
Taylor, R. E.; Sennott, J. W. (Inventor)
1984-01-01
In a global positioning system (GPS), such as the NAVSTAR/GPS system, wherein the position coordinates of user terminals are obtained by processing multiple signals transmitted by a constellation of orbiting satellites, an acquisition-aiding signal generated by an earth-based control station is relayed to user terminals via a geostationary satellite to simplify user equipment. The aiding signal is FSK modulated on a reference channel slightly offset from the standard GPS channel. The aiding signal identifies the satellites in view having the best geometry and includes Doppler prediction data as well as GPS satellite coordinates and identification data associated with user terminals within the area being served by the control station and relay satellite. The aiding signal significantly reduces user equipment requirements by simplifying spread spectrum signal demodulation and reducing the data processing functions previously carried out at the user terminals.
Probabilistic Structures Analysis Methods (PSAM) for select space propulsion system components
NASA Technical Reports Server (NTRS)
1991-01-01
The basic formulation for probabilistic finite element analysis is described and demonstrated on a few sample problems. This formulation is based on iterative perturbation that uses the factorized stiffness of the unperturbed system as the iteration preconditioner for obtaining the solution to the perturbed problem. This approach eliminates the need to compute, store, and manipulate explicit partial derivatives of the element matrices and force vector, which not only reduces memory usage considerably but also greatly simplifies the coding and validation tasks. All aspects of the proposed formulation were combined in a demonstration problem using a simplified model of a curved turbine blade discretized with 48 shell elements, and having random pressure and temperature fields with partial correlation, random uniform thickness, and random stiffness at the root.
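To make the iterative-perturbation idea concrete, the sketch below (a toy 4-degree-of-freedom system, not the PSAM code) reuses the factorized unperturbed stiffness K0 as the solver for the perturbed system (K0 + dK) u = f, so no explicit derivative matrices are ever formed; the matrix sizes and perturbation magnitude are illustrative assumptions:

```python
# Minimal sketch of iterative perturbation with the unperturbed factorization as preconditioner.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def perturbed_solve(K0, dK, f, tol=1e-10, max_iter=100):
    lu = lu_factor(K0)                       # factorize the unperturbed stiffness once
    u = lu_solve(lu, f)                      # unperturbed solution as starting point
    for _ in range(max_iter):
        u_new = lu_solve(lu, f - dK @ u)     # fixed-point iteration using only K0 factors
        if np.linalg.norm(u_new - u) < tol * np.linalg.norm(u_new):
            return u_new
        u = u_new
    return u

# Toy example with a small symmetric perturbation of the stiffness
rng = np.random.default_rng(2)
K0 = np.diag([4.0, 3.0, 5.0, 2.0]) + 0.1 * np.ones((4, 4))
dK = 0.05 * rng.standard_normal((4, 4))
dK = 0.5 * (dK + dK.T)
f = np.array([1.0, 0.0, 0.0, 1.0])
print(perturbed_solve(K0, dK, f))
```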
NASA Astrophysics Data System (ADS)
Tveito, Knut Omdal; Pakanati, Akash; M'Hamdi, Mohammed; Combeau, Hervé; Založnik, Miha
2018-04-01
Macrosegregation is a result of the interplay of various transport mechanisms, including natural convection, solidification shrinkage, and grain motion. Experimental observations also indicate the impact of grain morphology, ranging from dendritic to globular, on macrosegregation formation. To avoid the complexity arising due to modeling of an equiaxed dendritic grain, we present the development of a simplified three-phase, multiscale equiaxed dendritic solidification model based on the volume-averaging method, which accounts for the above-mentioned transport phenomena. The validity of the model is assessed by comparing it with the full three-phase model without simplifications. It is then applied to qualitatively analyze the impact of grain morphology on macrosegregation formation in an industrial scale direct chill cast aluminum alloy ingot.
Yang, Defu; Wang, Lin; Chen, Dongmei; Yan, Chenggang; He, Xiaowei; Liang, Jimin; Chen, Xueli
2018-05-17
The reconstruction of bioluminescence tomography (BLT) is severely ill-posed due to insufficient measurements and the diffuse nature of light propagation. Predefining a permissible source region (PSR) combined with regularization terms is one common strategy to reduce this ill-posedness. However, the PSR is usually hard to determine and can easily be affected by subjective judgment. Hence, we theoretically developed a filtered maximum likelihood expectation maximization (fMLEM) method for BLT. Our method avoids predefining the PSR and provides a robust and accurate result for global reconstruction. In the method, the simplified spherical harmonics approximation (SPN) was applied to characterize diffuse light propagation in the medium, and the statistical estimation-based MLEM algorithm combined with a filter function was used to solve the inverse problem. We systematically demonstrated the performance of our method through regular geometry- and digital mouse-based simulations and a liver cancer-based in vivo experiment. Graphical abstract: The filtered MLEM-based global reconstruction method for BLT.
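The statistical core of such an approach can be sketched with a generic filtered MLEM iteration; the sketch below uses a random toy linear system rather than the SPN-based BLT forward model, and a 1-D Gaussian filter stands in for the paper's filter function:

```python
# Minimal sketch of a filtered MLEM iteration for a generic Poisson model y ~ Poisson(A x).
import numpy as np
from scipy.ndimage import gaussian_filter1d

def filtered_mlem(A, y, n_iter=200, sigma=1.0, eps=1e-12):
    m, n = A.shape
    x = np.ones(n)                        # non-negative initial guess
    sens = A.T @ np.ones(m) + eps         # sensitivity (column sums)
    for _ in range(n_iter):
        ratio = y / (A @ x + eps)         # measured data / forward projection
        x *= (A.T @ ratio) / sens         # multiplicative MLEM update
        x = gaussian_filter1d(x, sigma)   # filtering step of the fMLEM scheme
    return x

# Toy example: recover a compact 1-D source from a random nonnegative system
rng = np.random.default_rng(0)
A = rng.random((80, 40))
x_true = np.zeros(40)
x_true[18:22] = 5.0
y = rng.poisson(A @ x_true).astype(float)
print(filtered_mlem(A, y)[15:25].round(2))
```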
48 CFR 713.000 - Scope of part.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Scope of part. 713.000 Section 713.000 Federal Acquisition Regulations System AGENCY FOR INTERNATIONAL DEVELOPMENT CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES 713.000 Scope of part. The simplified...
48 CFR 713.000 - Scope of part.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Scope of part. 713.000 Section 713.000 Federal Acquisition Regulations System AGENCY FOR INTERNATIONAL DEVELOPMENT CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES 713.000 Scope of part. The simplified...
Quantitative accuracy of the simplified strong ion equation to predict serum pH in dogs.
Cave, N J; Koo, S T
2015-01-01
An electrochemical approach to the assessment of acid-base status should provide a better mechanistic explanation of the metabolic component than methods that consider only pH and carbon dioxide. The hypothesis was that the simplified strong ion equation (SSIE), using published dog-specific values, would predict the measured serum pH of diseased dogs. Ten dogs, hospitalized for various reasons, were included in a prospective study of a convenience sample of a consecutive series of dogs admitted to the Massey University Veterinary Teaching Hospital (MUVTH), from which serum biochemistry and blood gas analyses were performed at the same time. Serum pH was calculated ([H+]cal) using the SSIE and published values for the concentration and dissociation constant of the nonvolatile weak acids (Atot and Ka), and [H+]cal was subsequently compared with each dog's measured pH ([H+]measured). To determine the source of discordance between [H+]cal and [H+]measured, the calculations were repeated using a series of substituted values for Atot and Ka. [H+]cal did not approximate [H+]measured for any dog (P = 0.499, r2 = 0.068) and was consistently more basic. Substituting values for Atot and Ka did not significantly improve the accuracy (r2 = 0.169 to <0.001). Substituting the effective SID (Atot - [HCO3-]) produced a strong association between [H+]cal and [H+]measured (r2 = 0.977). Using the simplified strong ion equation with the published values for Atot and Ka does not appear to provide a quantitative explanation of the acid-base status of dogs. The efficacy of substituting the effective SID in the simplified strong ion equation suggests that the error lies in calculating the SID. Copyright © 2015 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.
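For orientation, a minimal numerical sketch of a simplified strong ion calculation is given below: serum pH is obtained from the charge balance SID = [HCO3-] + [A-], with [HCO3-] from the CO2 system and [A-] from Atot and Ka. The constants and example inputs are illustrative placeholders, not the published dog-specific values used in the study:

```python
# Minimal sketch: solve the simplified strong ion charge balance numerically for [H+].
import math
from scipy.optimize import brentq

S = 0.0307      # CO2 solubility, mmol/L per mmHg (assumed)
K1p = 10**-6.1  # apparent first dissociation constant of carbonic acid (assumed)

def simplified_strong_ion_pH(sid_mmol, pco2_mmHg, atot_mmol, ka):
    """Return the pH predicted by SID = [HCO3-] + [A-] (all concentrations in mol/L)."""
    sid, atot = sid_mmol / 1000.0, atot_mmol / 1000.0
    dissolved_co2 = S * pco2_mmHg / 1000.0
    def charge_balance(h):
        hco3 = K1p * dissolved_co2 / h          # carbonic acid system
        a_minus = ka * atot / (ka + h)          # nonvolatile weak acids
        return sid - hco3 - a_minus
    h = brentq(charge_balance, 1e-9, 1e-6)      # bracket roughly pH 6 to pH 9
    return -math.log10(h)

# Hypothetical inputs: SID = 38 mmol/L, pCO2 = 37 mmHg, Atot = 17 mmol/L, Ka = 0.8e-7
print(simplified_strong_ion_pH(38.0, 37.0, 17.0, 0.8e-7))
```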
A topological hierarchy for functions on triangulated surfaces.
Bremer, Peer-Timo; Edelsbrunner, Herbert; Hamann, Bernd; Pascucci, Valerio
2004-01-01
We combine topological and geometric methods to construct a multiresolution representation for a function over a two-dimensional domain. In a preprocessing stage, we create the Morse-Smale complex of the function and progressively simplify its topology by cancelling pairs of critical points. Based on a simple notion of dependency among these cancellations, we construct a hierarchical data structure supporting traversal and reconstruction operations similarly to traditional geometry-based representations. We use this data structure to extract topologically valid approximations that satisfy error bounds provided at runtime.
A case study by life cycle assessment
NASA Astrophysics Data System (ADS)
Li, Shuyun
2017-05-01
This article aims to assess the potential environmental impact of an electrical grinder during its life cycle. The Life Cycle Inventory (LCI) analysis was conducted based on the Simplified Life Cycle Assessment (SLCA) drivers calculated from the Valuation of Social Cost and Simplified Life Cycle Assessment Model (VSSM); the detailed LCI results can be found in Appendix II. The Life Cycle Impact Assessment was performed based on the Eco-indicator 99 method. The analysis indicated that the major contributor to the environmental impact accounts for over 60% of the overall SLCA output; within this contribution, 60% of the emissions resulted from the logistics required for maintenance activities. This was determined by conducting a hotspot analysis. A sensitivity analysis showed that changing the fuel type results in a significant decrease in the environmental footprint. An environmental benefit can also be seen in the negative output values of the recycling activities. By conducting the Life Cycle Assessment, the potential environmental impact of the electrical grinder was investigated.
NASA Astrophysics Data System (ADS)
Lee, Byungjin; Lee, Young Jae; Sung, Sangkyung
2018-05-01
A novel attitude determination method is investigated that is computationally efficient and implementable on low-cost sensors and embedded platforms. A recent result on attitude reference system design is adapted to further develop a three-dimensional attitude determination algorithm based on relative velocity incremental measurements. For this, velocity incremental vectors, computed respectively from the INS and GPS at different update rates, are compared to generate the filter measurement for attitude estimation. In the quaternion-based Kalman filter configuration, an Euler-like attitude perturbation angle is introduced to reduce the filter states and simplify the propagation processes. Furthermore, assuming a small-angle approximation between attitude update periods, it is shown that the reduced-order filter greatly simplifies the propagation processes. For performance verification, both simulation and experimental studies are completed. A low-cost MEMS IMU and GPS receiver are employed for system integration, and comparison with the true trajectory or a high-grade navigation system demonstrates the performance of the proposed algorithm.
Multi-phase CFD modeling of solid sorbent carbon capture system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryan, E. M.; DeCroix, D.; Breault, R.
2013-07-01
Computational fluid dynamics (CFD) simulations are used to investigate a low temperature post-combustion carbon capture reactor. The CFD models are based on a small scale solid sorbent carbon capture reactor design from ADA-ES and Southern Company. The reactor is a fluidized bed design based on a silica-supported amine sorbent. CFD models using both Eulerian–Eulerian and Eulerian–Lagrangian multi-phase modeling methods are developed to investigate the hydrodynamics and adsorption of carbon dioxide in the reactor. Models developed in both FLUENT® and BARRACUDA are presented to explore the strengths and weaknesses of state of the art CFD codes for modeling multi-phase carbon capture reactors. The results of the simulations show that the FLUENT® Eulerian–Lagrangian simulations (DDPM) are unstable for the given reactor design, while the BARRACUDA Eulerian–Lagrangian model is able to simulate the system given appropriate simplifying assumptions. FLUENT® Eulerian–Eulerian simulations also provide a stable solution for the carbon capture reactor given the appropriate simplifying assumptions.
Multi-Phase CFD Modeling of Solid Sorbent Carbon Capture System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryan, Emily M.; DeCroix, David; Breault, Ronald W.
2013-07-30
Computational fluid dynamics (CFD) simulations are used to investigate a low temperature post-combustion carbon capture reactor. The CFD models are based on a small scale solid sorbent carbon capture reactor design from ADA-ES and Southern Company. The reactor is a fluidized bed design based on a silica-supported amine sorbent. CFD models using both Eulerian-Eulerian and Eulerian-Lagrangian multi-phase modeling methods are developed to investigate the hydrodynamics and adsorption of carbon dioxide in the reactor. Models developed in both FLUENT® and BARRACUDA are presented to explore the strengths and weaknesses of state of the art CFD codes for modeling multi-phase carbon capture reactors. The results of the simulations show that the FLUENT® Eulerian-Lagrangian simulations (DDPM) are unstable for the given reactor design, while the BARRACUDA Eulerian-Lagrangian model is able to simulate the system given appropriate simplifying assumptions. FLUENT® Eulerian-Eulerian simulations also provide a stable solution for the carbon capture reactor given the appropriate simplifying assumptions.
ALC: automated reduction of rule-based models
Koschorreck, Markus; Gilles, Ernst Dieter
2008-01-01
Background Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction since the association of a few proteins can give rise to an enormous amount of feasible protein complexes. The layer-based approach is an approximative, but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results ALC (Automated Layer Construction) is a computer program that highly simplifies the building of reduced modular models, according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files. PMID:18973705
TH-C-12A-04: Dosimetric Evaluation of a Modulated Arc Technique for Total Body Irradiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsiamas, P; Czerminska, M; Makrigiorgos, G
2014-06-15
Purpose: A simplified Total Body Irradiation (TBI) technique was developed to work with minimal requirements in a compact linac room without a custom motorized TBI couch. Results were compared to our existing fixed-gantry double 4 MV linac TBI system with the patient prone and simultaneous AP/PA irradiation. Methods: A modulated arc irradiates the patient positioned prone/supine along the craniocaudal axis. A simplified inverse planning method was developed to optimize dose rate as a function of gantry angle for various patient sizes without the need for a graphical 3D treatment planning system; this method can be easily adapted and used with minimal resources. A fixed maximum field size (40×40 cm2) is used to decrease radiation delivery time. The dose rate as a function of gantry angle is optimized to give a uniform dose inside rectangular phantoms of various sizes, and custom VMAT DICOM plans were generated using a DICOM editor tool. Monte Carlo simulations, film and ionization chamber dosimetry for various setups were used to derive and test an extended-SSD beam model based on PDD/OAR profiles for Varian 6EX/TX. Measurements were obtained using solid water phantoms. The dose rate modulation function was determined for various patient sizes (100 cm to 200 cm); depending on the size of the patient, the arc range varied from 100° to 120°. Results: A PDD/OAR-based beam model for modulated arc TBI therapy was developed. The lateral dose profiles produced were similar to the profiles of our existing TBI facility. The calculated delivery time and full arc depended on the size of the patient (approximately 8 min for 100° to 10 min for 120°, at 100 cGy). Dose heterogeneity varied by about ±5% to ±10% depending on the patient size and the distance to the surface (buildup region). Conclusion: TBI using a simplified modulated arc along the craniocaudal axis of different size patients positioned on the floor can be achieved without graphical/inverse 3D planning.
NASA Astrophysics Data System (ADS)
Wang, Chao; Yang, Chuan-sheng
2017-09-01
In this paper, we present a simplified parsimonious higher-order multivariate Markov chain model with a new convergence condition (TPHOMMCM-NCC). Moreover, an estimation method for the parameters in TPHOMMCM-NCC is given. Numerical experiments illustrate the effectiveness of TPHOMMCM-NCC.
Reduced-Volume Fracture Toughness Characterization for Transparent Polymers
2015-03-21
Caruthers et al. (2004) developed a thermodynamically consistent, nonlinear viscoelastic bulk constitutive model based on a potential energy clock (PEC) ... except that relaxation times change. Because of its formulation, the PEC model predicts mechanical yield as a natural consequence of relaxation ... softening type of behavior, but hysteresis effects are not naturally accounted for. Adolf et al. (2009) developed a method of simplifying the PEC model
Bioassay of Surface Quality/Chesapeake Bay, Maryland
1995-02-01
... macroinvertebrate bioassay (Lenat, 1988; Resh, 1984). A simplified method, based on macroinvertebrates, has been developed for use by volunteer groups ...
Simplified Rotation In Acoustic Levitation
NASA Technical Reports Server (NTRS)
Barmatz, M. B.; Gaspar, M. S.; Trinh, E. H.
1989-01-01
New technique based on old discovery used to control orientation of object levitated acoustically in axisymmetric chamber. Method does not require expensive equipment like additional acoustic drivers of precisely adjustable amplitude, phase, and frequency. Reflecting object acts as second source of sound. If reflecting object large enough, close enough to levitated object, or focuses reflected sound sufficiently, Rayleigh torque exerted on levitated object by reflected sound controls orientation of object.
Unimolecular decomposition reactions at low-pressure: A comparison of competitive methods
NASA Technical Reports Server (NTRS)
Adams, G. F.
1980-01-01
The lack of a simple rate coefficient expression to describe the pressure and temperature dependence hampers chemical modeling of flame systems. Recently developed simplified models to describe unimolecular processes include the calculation of rate constants for thermal unimolecular reactions and recombinations at the low pressure limit, at the high pressure limit and in the intermediate fall-off region. Comparison between two different applications of Troe's simplified model and a comparison between the simplified model and the classic RRKM theory are described.
NASA Astrophysics Data System (ADS)
Shi, Zhaoyao; Song, Huixu; Chen, Hongfang; Sun, Yanqiang
2018-02-01
This paper presents a novel experimental approach for confirming that the spherical mirror of a laser tracking system can reduce the influence of rotation errors of the gimbal mount axes on the measurement accuracy. By simplifying the optical system model of a laser tracking system based on a spherical mirror, we can easily extract the laser ranging measurement error caused by rotation errors of the gimbal mount axes from the positions of the spherical mirror, biconvex lens, cat's eye reflector, and measuring beam. The motions of the polarization beam splitter and biconvex lens along the optical axis and perpendicular to the optical axis are driven by error motions of the gimbal mount axes. In order to simplify the experimental process, the motion of the biconvex lens is substituted by the motion of the spherical mirror according to the principle of relative motion. The laser ranging measurement error caused by the rotation errors of the gimbal mount axes is recorded in the readings of the laser interferometer. The experimental results showed that the laser ranging measurement error caused by rotation errors was less than 0.1 μm if the radial and axial error motions were within ±10 μm. The experimental method simplified the experimental procedure, and the spherical mirror can reduce the influence of rotation errors of the gimbal mount axes on the measurement accuracy of the laser tracking system.
Development of a measure of informed choice suitable for use in low literacy populations.
Dormandy, Elizabeth; Tsui, Elaine Y L; Marteau, Theresa M
2007-06-01
To assess the reliability and validity of a simplified questionnaire-based measure of informed choice in populations with low literacy. The measure comprises (a) knowledge about the test and (b) attitudes towards undergoing the test; responses to (a) and (b), together with information on test uptake, are used to classify choices as informed or uninformed. In a cross-sectional study, 79 pregnant women (46 with higher and 33 with lower education levels) completed a simplified questionnaire, a standardised questionnaire and a semi-structured interview about antenatal sickle cell and thalassaemia (SCT) screening. The measures used were (a) informed choice, based on knowledge about the test, attitudes towards undergoing the test, and uptake of the test, and (b) ease of completion. The simplified measures of knowledge and attitudes were able to distinguish between women classified according to interview responses as having good or poor knowledge (knowledge scores 6.8 versus 3.2, p<0.001), and positive or negative attitudes towards undergoing the test (attitude scores 20.6 versus 16.2, p=0.023). There was no difference in the rates of informed choice derived from the simplified or standardised measures (54% versus 51%, 95% CI of difference -11 to 19). Women with lower levels of education found the simplified questionnaire easier to complete than the standardised version (11.0 versus 9.6, p=0.009); those with higher levels of education found no difference in ease of completion between the two versions (11.8 versus 11.6, p=0.54). A simplified questionnaire-based measure of informed choice in antenatal SCT screening is as reliable and valid as a more complex standardised version and, for those with less education, easier to complete. The simplified questionnaire-based measure of informed choice is suitable for use in populations with low and high levels of education.
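A minimal sketch of the classification rule implied above, assuming the common multidimensional definition of informed choice (adequate knowledge plus attitude-consistent uptake); the cutoff scores and argument names are hypothetical and not taken from the questionnaire itself:

```python
# Minimal sketch: classify a screening choice as informed or uninformed.
def classify_choice(knowledge_score, attitude_score, took_test,
                    knowledge_cutoff=5, attitude_cutoff=18):
    good_knowledge = knowledge_score >= knowledge_cutoff
    positive_attitude = attitude_score >= attitude_cutoff
    # Value-consistent behaviour: positive attitude with uptake, or negative attitude without uptake
    value_consistent = (positive_attitude and took_test) or \
                       (not positive_attitude and not took_test)
    return "informed" if (good_knowledge and value_consistent) else "uninformed"

# Example: good knowledge, positive attitude, accepted screening -> informed
print(classify_choice(knowledge_score=7, attitude_score=20, took_test=True))
```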
NASA Astrophysics Data System (ADS)
Behroozi-Toosi, A. B.; Booker, H. G.
1980-12-01
The simplified theory of ELF wave propagation in the earth-ionosphere transmission lines developed by Booker (1980) is applied to a simplified worldwide model of the ionosphere. The theory, which involves the comparison of the local vertical refractive index gradient with the local wavelength in order to classify the altitude into regions of low and high gradient, is used for a model of electron and negative ion profiles in the D and E regions below 150 km. Attention is given to the frequency dependence of ELF propagation at a middle latitude under daytime conditions, the daytime latitude dependence of ELF propagation at the equinox, the effects of sunspot, seasonal and diurnal variations on propagation, nighttime propagation neglecting and including propagation above 100 km, and the effect on daytime ELF propagation of a sudden ionospheric disturbance. The numerical values obtained by the method for the propagation velocity and attenuation rate are shown to be in general agreement with the analytic Naval Ocean Systems Center computer program. It is concluded that the method employed gives more physical insights into propagation processes than any other method, while requiring less effort and providing maximal accuracy.
A simplified method for extracting androgens from avian egg yolks
Kozlowski, C.P.; Bauman, J.E.; Hahn, D.C.
2009-01-01
Female birds deposit significant amounts of steroid hormones into the yolks of their eggs. Studies have demonstrated that these hormones, particularly androgens, affect nestling growth and development. In order to measure androgen concentrations in avian egg yolks, most authors follow the extraction methods outlined by Schwabl (1993. Proc. Nat. Acad. Sci. USA 90:11446-11450). We describe a simplified method for extracting androgens from avian egg yolks. Our method, which has been validated through recovery and linearity experiments, consists of a single ethanol precipitation that produces substantially higher recoveries than those reported by Schwabl.
Simplified Modeling of Oxidation of Hydrocarbons
NASA Technical Reports Server (NTRS)
Bellan, Josette; Harstad, Kenneth
2008-01-01
A method of simplified computational modeling of oxidation of hydrocarbons is undergoing development. This is one of several developments needed to enable accurate computational simulation of turbulent, chemically reacting flows. At present, accurate computational simulation of such flows is difficult or impossible in most cases because (1) the numbers of grid points needed for adequate spatial resolution of turbulent flows in realistically complex geometries are beyond the capabilities of typical supercomputers now in use and (2) the combustion of typical hydrocarbons proceeds through decomposition into hundreds of molecular species interacting through thousands of reactions. Hence, the combination of detailed reaction- rate models with the fundamental flow equations yields flow models that are computationally prohibitive. Hence, further, a reduction of at least an order of magnitude in the dimension of reaction kinetics is one of the prerequisites for feasibility of computational simulation of turbulent, chemically reacting flows. In the present method of simplified modeling, all molecular species involved in the oxidation of hydrocarbons are classified as either light or heavy; heavy molecules are those having 3 or more carbon atoms. The light molecules are not subject to meaningful decomposition, and the heavy molecules are considered to decompose into only 13 specified constituent radicals, a few of which are listed in the table. One constructs a reduced-order model, suitable for use in estimating the release of heat and the evolution of temperature in combustion, from a base comprising the 13 constituent radicals plus a total of 26 other species that include the light molecules and related light free radicals. Then rather than following all possible species through their reaction coordinates, one follows only the reduced set of reaction coordinates of the base. The behavior of the base was examined in test computational simulations of the combustion of heptane in a stirred reactor at various initial pressures ranging from 0.1 to 6 MPa. Most of the simulations were performed for stoichiometric mixtures; some were performed for fuel/oxygen mole ratios of 1/2 and 2.
Method for providing a compliant cantilevered micromold
Morales, Alfredo M.; Domeier, Linda A.; Gonzales, Marcela G.; Keifer, Patrick N.; Garino, Terry J.
2008-12-16
A compliant cantilevered three-dimensional micromold is provided. The compliant cantilevered micromold is suitable for use in the replication of cantilevered microparts and greatly simplifies the replication of such cantilevered parts. The compliant cantilevered micromold may be used to fabricate microparts using casting or electroforming techniques. When the compliant micromold is used to fabricate electroformed cantilevered parts, the micromold will also comprise an electrically conducting base formed by a porous metal substrate that is embedded within the compliant cantilevered micromold. Methods for fabricating the compliant cantilevered micromold as well as methods of replicating cantilevered microparts using the compliant cantilevered micromold are also provided.
An approximate methods approach to probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.
1989-01-01
A probabilistic structural analysis method (PSAM) is described which makes an approximate calculation of the structural response of a system, including the associated probabilistic distributions, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The method employs the fast probability integration (FPI) algorithm of Wu and Wirsching. Typical solution strategies are illustrated by formulations for a representative critical component chosen from the Space Shuttle Main Engine (SSME) as part of a major NASA-sponsored program on PSAM. Typical results are presented to demonstrate the role of the methodology in engineering design and analysis.
The financial viability of an SOFC cogeneration system in single-family dwellings
NASA Astrophysics Data System (ADS)
Alanne, Kari; Saari, Arto; Ugursal, V. Ismet; Good, Joel
In the near future, fuel cell-based residential micro-CHP systems will compete with traditional methods of energy supply. A micro-CHP system may be considered viable if its incremental capital cost compared to its competitors equals the savings accumulated during a given period of time. A simplified model is developed in this study to estimate the operation of a residential solid oxide fuel cell (SOFC) system. A comparative assessment of the SOFC system vis-à-vis heating systems based on gas, oil and electricity is conducted using the simplified model for a single-family house located in Ottawa and in Vancouver. The energy consumption of the house is estimated using the HOT2000 building simulation program. A financial analysis is carried out to evaluate the sensitivity of the maximum allowable capital cost with respect to system sizing, acceptable payback period, energy price and the electricity buyback strategy of the energy utility. Based on the financial analysis, small (1-2 kWe) SOFC systems appear to be feasible in the considered case. The present study also shows that an SOFC system is a particularly attractive alternative to heating systems based on oil and electrical furnaces.
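The viability criterion stated above can be sketched as follows; the annual savings, payback period and discount rate used in the example are hypothetical inputs, not results from the paper:

```python
# Minimal sketch: maximum allowable incremental capital cost from accumulated savings.
def max_allowable_incremental_cost(annual_savings, payback_years, discount_rate=0.0):
    """Present value of the savings accumulated over the acceptable payback period."""
    if discount_rate == 0.0:
        return annual_savings * payback_years
    # Annuity present-value factor when future savings are discounted
    return annual_savings * (1 - (1 + discount_rate) ** -payback_years) / discount_rate

# Hypothetical case: 400 $/yr operating-cost savings, 10-year acceptable payback, 5% discount rate
print(max_allowable_incremental_cost(400.0, 10, 0.05))
```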
Electric Power Distribution System Model Simplification Using Segment Substitution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat
Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). In contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.
Research on carrying capacity of hydrostatic slideway on heavy-duty gantry CNC machine
NASA Astrophysics Data System (ADS)
Cui, Chao; Guo, Tieneng; Wang, Yijie; Dai, Qin
2017-05-01
Hydrostatic slideways are a key part of the heavy-duty gantry CNC machine: they support the total weight of the gantry and move smoothly along the table. Therefore, the oil film between the sliding rails plays an important role in the carrying capacity and precision of the machine. In this paper, the frictionless oil film is simulated with three-dimensional CFD. The carrying capacity of the heavy hydrostatic slideway and the pressure and velocity characteristics of the flow field are analyzed. The simulation result is verified by comparison with experimental data obtained from the heavy-duty gantry machine. To meet engineering requirements, the oil film carrying capacity is also analyzed with a simplified theoretical method; the precision of the simplified method is evaluated and its effectiveness is verified with the experimental data. The simplified calculation method is provided for designing oil pads on the hydrostatic slideways of heavy-duty gantry CNC machines.
A Simplified Method for Implementing Run-Time Polymorphism in Fortran95
Decyk, Viktor K.; Norton, Charles D.
2004-01-01
This paper discusses a simplified technique for software emulation of inheritance and run-time polymorphism in Fortran95. This technique involves retaining the same type throughout an inheritance hierarchy, so that only functions which are modified in a derived class need to be implemented.
PARTIAL RESTRAINING FORCE INTRODUCTION METHOD FOR DESIGNING CONSTRUCTION COUNTERMESURE ON ΔB METHOD
NASA Astrophysics Data System (ADS)
Nishiyama, Taku; Imanishi, Hajime; Chiba, Noriyuki; Ito, Takao
Landslide or slope failure is a three-dimensional movement phenomenon, thus a three-dimensional treatment makes it easier to understand stability. The ΔB method (a simplified three-dimensional slope stability analysis method) is based on the limit equilibrium method and amounts to an approximate three-dimensional slope stability analysis that extends two-dimensional cross-section stability analysis results to assess stability. This analysis can be conducted using conventional spreadsheets or two-dimensional slope stability software. This paper describes the concept of the partial restraining force introduction method for designing construction countermeasures using the distribution of the restraining force found along survey lines, which is based on the distribution of survey-line safety factors derived from the above-stated analysis. This paper also presents the transverse distributive method of restraining force used for planning ground stabilization on the basis of the example analysis.
Fuel Burn Estimation Using Real Track Data
NASA Technical Reports Server (NTRS)
Chatterji, Gano B.
2011-01-01
A procedure for estimating fuel burned based on actual flight track data, and drag and fuel-flow models is described. The procedure consists of estimating aircraft and wind states, lift, drag and thrust. Fuel-flow for jet aircraft is determined in terms of thrust, true airspeed and altitude as prescribed by the Base of Aircraft Data fuel-flow model. This paper provides a theoretical foundation for computing fuel-flow with most of the information derived from actual flight data. The procedure does not require an explicit model of thrust and calibrated airspeed/Mach profile which are typically needed for trajectory synthesis. To validate the fuel computation method, flight test data provided by the Federal Aviation Administration were processed. Results from this method show that fuel consumed can be estimated within 1% of the actual fuel consumed in the flight test. Next, fuel consumption was estimated with simplified lift and thrust models. Results show negligible difference with respect to the full model without simplifications. An iterative takeoff weight estimation procedure is described for estimating fuel consumption, when takeoff weight is unavailable, and for establishing fuel consumption uncertainty bounds. Finally, the suitability of using radar-based position information for fuel estimation is examined. It is shown that fuel usage could be estimated within 5.4% of the actual value using positions reported in the Airline Situation Display to Industry data with simplified models and iterative takeoff weight computation.
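A minimal sketch of the overall estimation chain, under stated assumptions: a point-mass model recovers thrust along the recorded track and a constant thrust-specific fuel consumption converts it to fuel flow. The aerodynamic constants and TSFC below are illustrative placeholders rather than BADA coefficients, and the crude exponential atmosphere stands in for a standard-atmosphere model:

```python
# Minimal sketch: integrate fuel flow along a recorded track using a point-mass model.
import numpy as np

g = 9.81  # m/s^2

def estimate_fuel_burn(t, h, tas, mass0, S=122.0, CD0=0.025, k=0.045, tsfc=1.7e-5):
    """t [s], h [m], tas (true airspeed) [m/s]; mass0 initial mass [kg]; tsfc [kg/(N*s)]."""
    rho = 1.225 * np.exp(-h / 8500.0)          # crude exponential atmosphere (assumption)
    vdot = np.gradient(tas, t)                  # along-track acceleration
    hdot = np.gradient(h, t)                    # climb rate
    mass, fuel = mass0, 0.0
    for i in range(len(t) - 1):
        q = 0.5 * rho[i] * tas[i] ** 2 * S
        CL = mass * g / q                       # quasi-level lift assumption
        drag = q * (CD0 + k * CL ** 2)          # parabolic drag polar (assumption)
        thrust = drag + mass * vdot[i] + mass * g * hdot[i] / tas[i]
        ff = tsfc * max(thrust, 0.0)            # fuel flow, kg/s
        dt = t[i + 1] - t[i]
        fuel += ff * dt
        mass -= ff * dt                         # deplete mass as fuel burns
    return fuel

# Hypothetical cruise segment: 10 minutes at 11 km altitude and 230 m/s
t = np.arange(0.0, 600.0, 10.0)
print(estimate_fuel_burn(t, np.full_like(t, 11000.0), np.full_like(t, 230.0), 60000.0))
```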
Durand, Jean-Baptiste; Allard, Alix; Guitton, Baptiste; van de Weg, Eric; Bink, Marco C A M; Costes, Evelyne
2017-01-01
Irregular flowering over years is commonly observed in fruit trees. The early prediction of tree behavior is highly desirable in breeding programmes. This study aims at performing such predictions by combining simplified phenotyping and statistical methods. Sequences of vegetative vs. floral annual shoots (AS) were observed along axes in trees belonging to five apple related full-sib families. Sequences were analyzed using Markovian and linear mixed models including year and site effects. Indices of flowering irregularity, periodicity and synchronicity were estimated at tree and axis scales. They were used to predict tree behavior and detect QTL with a Bayesian pedigree-based analysis, using an integrated genetic map containing 6,849 SNPs. The combination of a Biennial Bearing Index (BBI) with an autoregressive coefficient (γg) efficiently predicted and classified the genotype behaviors, despite a few misclassifications. Four QTLs common to BBIs and γg and one for synchronicity were highlighted and revealed the complex genetic architecture of the traits. Irregularity resulted from high AS synchronism, whereas regularity resulted from either asynchronous locally alternating or continual regular AS flowering. A relevant and time-saving method, based on a posteriori sampling of axes and statistical indices, is proposed, which is efficient for evaluating tree breeding values for flowering regularity and could be transferred to other species.
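Two of the indices mentioned above can be sketched as follows; the BBI form used here is the classical averaged |y_t - y_(t-1)| / (y_t + y_(t-1)) and the autoregressive coefficient is taken as the lag-1 autocorrelation, which may differ in detail from the study's exact definitions. The yearly counts are hypothetical:

```python
# Minimal sketch: biennial bearing index and lag-1 autoregressive coefficient of a flowering series.
import numpy as np

def biennial_bearing_index(y):
    y = np.asarray(y, dtype=float)
    num = np.abs(np.diff(y))          # year-to-year changes
    den = y[1:] + y[:-1]              # year-to-year totals
    return np.mean(num / den)

def lag1_autocorrelation(y):
    y = np.asarray(y, dtype=float)
    y = y - y.mean()
    return np.dot(y[1:], y[:-1]) / np.dot(y, y)

# Hypothetical yearly floral annual-shoot counts for one tree (strongly alternating)
counts = [120, 15, 110, 20, 95, 30]
print(biennial_bearing_index(counts), lag1_autocorrelation(counts))
```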
Kumar, P V; Sharma, S K; Rishi, N; Ghosh, D K; Baranwal, V K
Management of viral diseases relies on definitive and sensitive detection methods. Citrus yellow mosaic virus (CYMV), a double-stranded DNA virus of the genus Badnavirus, causes yellow mosaic disease in citrus plants. CYMV is transmitted through budwood and requires a robust and simplified indexing protocol for the budwood certification programme. The present study reports the development and standardization of an isothermal recombinase polymerase amplification (RPA) assay as a sensitive, rapid, easy, and cost-effective method for detection and diagnosis of CYMV. Two different oligonucleotide primer sets were designed from the ORF III (coding for the polyprotein) and ORF II (coding for the virion-associated protein) regions of CYMV to perform amplification assays. Comparative evaluation of RPA, PCR and immuno-capture recombinase polymerase amplification (IC-RPA) based assays was done using purified DNA and plant crude sap. CYMV infection was efficiently detected from the crude sap in RPA and IC-RPA assays. The primer set used in RPA was specific and did not show any cross-amplification with banana streak MY virus (BSMYV), another Badnavirus species. The results from the present study indicate that the RPA assay can be used easily in routine indexing of citrus planting material. To the best of our knowledge, this is the first report on the development of a rapid and simplified isothermal detection assay for CYMV, and it can be utilized as an effective technique in quarantine and budwood certification processes.
Rotor design for maneuver performance
NASA Technical Reports Server (NTRS)
Berry, John D.; Schrage, Daniel
1986-01-01
A method of determining the sensitivity of helicopter maneuver performance to changes in basic rotor design parameters is developed. Maneuver performance is measured by the time required, based on a simplified rotor/helicopter performance model, to perform a series of specified maneuvers. This method identifies parameter values which result in minimum time quickly because of the inherent simplicity of the rotor performance model used. For the specific case studied, this method predicts that the minimum time required is obtained with a low disk loading and a relatively high rotor solidity. The method was developed as part of the winning design effort for the American Helicopter Society student design competition for 1984/1985.
Monitoring inter-channel nonlinearity based on differential pilot
NASA Astrophysics Data System (ADS)
Wang, Wanli; Yang, Aiying; Guo, Peng; Lu, Yueming; Qiao, Yaojun
2018-06-01
We modify and simplify the inter-channel nonlinearity (NL) estimation method by using a differential pilot. Compared to previous works, the proposed inter-channel NL estimation method has much lower complexity and does not require modification of the transmitter. The performance of inter-channel NL monitoring with different launch powers is tested. For both QPSK and 16QAM systems with 9 channels, the estimation error of the inter-channel NL is lower than 1 dB when the total launch power is greater than 12 dBm after 1000 km of optical transmission. Finally, we compare our inter-channel NL estimation method with other methods.
Efficient solution of the simplified P N equations
Hamilton, Steven P.; Evans, Thomas M.
2014-12-23
We show new solver strategies for the multigroup SPN equations for nuclear reactor analysis. By forming the complete matrix over space, moments, and energy, a robust set of solution strategies may be applied. Moreover, power iteration, shifted power iteration, Rayleigh quotient iteration, Arnoldi's method, and a generalized Davidson method, each using algebraic and physics-based multigrid preconditioners, have been compared on the C5G7 MOX test problem as well as an operational PWR model. These results show that the most efficient approach is the generalized Davidson method, which is 30-40 times faster than traditional power iteration and 6-10 times faster than Arnoldi's method.
Czakó, Gábor; Szalay, Viktor; Császár, Attila G
2006-01-07
The currently most efficient finite basis representation (FBR) method [Corey et al., in Numerical Grid Methods and Their Applications to Schrodinger Equation, NATO ASI Series C, edited by C. Cerjan (Kluwer Academic, New York, 1993), Vol. 412, p. 1; Bramley et al., J. Chem. Phys. 100, 6175 (1994)], designed specifically to deal with nondirect product bases of structures $\phi_n^{(l)}(s)f_l(u)$, $\chi_m^{(l)}(t)\phi_n^{(l)}(s)f_l(u)$, etc., employs very special $l$-independent grids and results in a symmetric FBR. While highly efficient, this method is not general enough. For instance, it cannot deal with nondirect product bases of the above structure efficiently if the functions $\phi_n^{(l)}(s)$ [and/or $\chi_m^{(l)}(t)$] are discrete variable representation (DVR) functions of the infinite type. The optimal-generalized FBR(DVR) method [V. Szalay, J. Chem. Phys. 105, 6940 (1996)] is designed to deal with general, i.e., direct and/or nondirect product, bases and grids. This robust method, however, is too general, and its direct application can result in inefficient computer codes [Czako et al., J. Chem. Phys. 122, 024101 (2005)]. It is shown here how the optimal-generalized FBR method can be simplified in the case of nondirect product bases of structures $\phi_n^{(l)}(s)f_l(u)$, $\chi_m^{(l)}(t)\phi_n^{(l)}(s)f_l(u)$, etc. As a result, the commonly used symmetric FBR is recovered and simplified nonsymmetric FBRs utilizing very special $l$-dependent grids are obtained. The nonsymmetric FBRs are more general than the symmetric FBR in that they can be employed efficiently even when the functions $\phi_n^{(l)}(s)$ [and/or $\chi_m^{(l)}(t)$] are DVR functions of the infinite type. Arithmetic operation counts and a simple numerical example show unambiguously that setting up the Hamiltonian matrix requires significantly less computer time when using one of the proposed nonsymmetric FBRs than in the symmetric FBR. Therefore, application of a nonsymmetric FBR is more efficient than that of the symmetric FBR when one wants to diagonalize the Hamiltonian matrix either directly or via a basis-set contraction method. An enormous decrease of computer time can be achieved, with respect to a direct application of the optimal-generalized FBR, by employing one of the simplified nonsymmetric FBRs, as is demonstrated in noniterative calculations of the low-lying vibrational energy levels of the $\mathrm{H}_3^+$ molecular ion. The arithmetic operation counts of the Hamiltonian matrix-vector products and the properties of a recently developed diagonalization method [Andreozzi et al., J. Phys. A: Math. Gen. 35, L61 (2002)] suggest that the nonsymmetric FBR applied along with this particular diagonalization method is suitable for large-scale iterative calculations. Whether or not the nonsymmetric FBR is competitive with the symmetric FBR in large-scale iterative calculations still has to be investigated numerically.
NASA Technical Reports Server (NTRS)
Hill, Geoffrey A.; Olson, Erik D.
2004-01-01
Due to the growing problem of noise in today's air transportation system, the need has arisen to incorporate noise considerations in the conceptual design of revolutionary aircraft. Through the use of response surfaces, complex noise models may be converted into polynomial equations for rapid and simplified evaluation. This conversion allows many of the commonly used response surface-based trade space exploration methods to be applied to noise analysis. The methodology is demonstrated using a noise model of a notional 300-passenger Blended-Wing-Body (BWB) transport. Response surfaces are created relating source noise levels of the BWB vehicle to its corresponding FAR-36 certification noise levels, and the resulting trade space is explored. The methods demonstrated include single-point analysis, parametric study, an optimization technique for inverse analysis, sensitivity studies, and probabilistic analysis. Extended applications of response surface-based methods in noise analysis are also discussed.
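The response-surface conversion itself can be sketched generically: sample the design variables, evaluate the (expensive) noise model, fit a second-order polynomial by least squares, and reuse the cheap surrogate for rapid trade studies. The two-variable stand-in model below is hypothetical, not the BWB noise analysis:

```python
# Minimal sketch: fit and evaluate a quadratic response surface for an expensive model.
import numpy as np
from itertools import combinations_with_replacement

def quadratic_features(X):
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

def expensive_noise_model(x):                 # placeholder for the real noise analysis
    return 95.0 + 3.0 * x[:, 0] - 2.0 * x[:, 1] + 0.5 * x[:, 0] * x[:, 1]

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(60, 2))          # normalized source-noise design variables
y = expensive_noise_model(X)
coeffs, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)

X_new = np.array([[0.2, -0.4]])
print(quadratic_features(X_new) @ coeffs)     # fast surrogate evaluation of the noise level
```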
A Laplacian based image filtering using switching noise detector.
Ranjbaran, Ali; Hassan, Anwar Hasni Abu; Jafarpour, Mahboobe; Ranjbaran, Bahar
2015-01-01
This paper presents a Laplacian-based image filtering method. Using a local noise estimator function in an energy-functional minimization scheme, we show that the Laplacian, which is well known as an edge detection operator, can also be used for noise removal. The algorithm can be implemented on a 3x3 window and is easily tuned by the number of iterations. Image denoising is reduced to decreasing each pixel value by its Laplacian weighted by the local noise estimator; the only parameter controlling smoothness is the number of iterations. The noise reduction quality of the introduced method is evaluated and compared with classic algorithms such as Wiener and Total Variation based filters for Gaussian noise, and the method is also compared with the state-of-the-art BM3D method on several images. The algorithm appears to be simple, fast and comparable with many classic denoising algorithms for Gaussian noise.
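A minimal sketch of the iterative scheme described above: each pass reduces a pixel by its 3x3 Laplacian weighted by a local estimator. The estimator used here is an edge-stopping function of the local gradient, which is an assumption and not necessarily the paper's noise estimator function:

```python
# Minimal sketch: iterative Laplacian-based denoising with a gradient-based local weight.
import numpy as np
from scipy.ndimage import laplace, sobel

def laplacian_denoise(img, n_iter=10, step=0.2, k=100.0):
    u = img.astype(float)
    for _ in range(n_iter):
        gx, gy = sobel(u, axis=0), sobel(u, axis=1)
        grad_mag = np.hypot(gx, gy)
        weight = 1.0 / (1.0 + (grad_mag / k) ** 2)   # smooth flat (noisy) areas, preserve edges
        u += step * weight * laplace(u)              # i.e. reduce each pixel by its positive-centre 3x3 Laplacian
    return u

# Toy example: a smooth ramp corrupted by Gaussian noise
rng = np.random.default_rng(4)
clean = np.tile(np.linspace(0, 255, 64), (64, 1))
noisy = clean + 15.0 * rng.standard_normal((64, 64))
print(np.abs(laplacian_denoise(noisy) - clean).mean().round(2),
      np.abs(noisy - clean).mean().round(2))
```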
Fast reconstruction of off-axis digital holograms based on digital spatial multiplexing.
Sha, Bei; Liu, Xuan; Ge, Xiao-Lu; Guo, Cheng-Shan
2014-09-22
A method for fast reconstruction of off-axis digital holograms based on a digital multiplexing algorithm is proposed. Instead of the existing angular multiplexing (AM), the new method utilizes a spatial multiplexing (SM) algorithm, in which four off-axis holograms recorded in sequence are synthesized into one SM function by multiplying each hologram with a tilted plane wave and then adding them up. In comparison with conventional methods, the SM algorithm reduces the two-dimensional (2-D) Fourier transforms (FTs) of four N*N arrays to a 1.25-D FT of one N*N array. Experimental results demonstrate that, using the SM algorithm, the computational efficiency can be improved while the reconstructed wavefronts keep the same quality as those retrieved with the existing AM method. This algorithm may be useful in the design of fast preview systems for dynamic wavefront imaging in digital holography.
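The synthesis step can be sketched directly: each of four sequentially recorded off-axis holograms is multiplied by its own tilted plane wave and the results are summed, so a single Fourier transform separates their spectra. The hologram model, carrier frequencies and spectral shifts below are illustrative assumptions:

```python
# Minimal sketch: spatial multiplexing of four off-axis holograms into one function.
import numpy as np

N = 256
y, x = np.mgrid[0:N, 0:N]

def off_axis_hologram(obj_phase, fx=0.15, fy=0.15):
    """Intensity of an object wave interfering with a tilted reference wave."""
    obj = np.exp(1j * obj_phase)
    ref = np.exp(2j * np.pi * (fx * x + fy * y))
    return np.abs(obj + ref) ** 2

# Four holograms recorded in sequence (here: simple quadratic phase objects)
holograms = [off_axis_hologram(0.0005 * k * ((x - N / 2) ** 2 + (y - N / 2) ** 2))
             for k in range(1, 5)]

# Spatial multiplexing: shift each hologram to its own spectral region and add them up
shifts = [(0.25, 0.25), (-0.25, 0.25), (0.25, -0.25), (-0.25, -0.25)]
sm = sum(h * np.exp(2j * np.pi * (sx * x + sy * y)) for h, (sx, sy) in zip(holograms, shifts))

spectrum = np.fft.fftshift(np.fft.fft2(sm))   # one transform now carries all four holograms
print(sm.shape, spectrum.dtype)
```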
NASA Astrophysics Data System (ADS)
Wang, Qi; Dong, Xufeng; Li, Luyu; Ou, Jinping
2018-06-01
As constitutive models are too complicated and existing mechanical models lack universality, such models remain unsatisfactory for magnetorheological elastomer (MRE) devices. In this article, a novel universal method is proposed to build concise mechanical models. A constitutive model and electromagnetic analysis were applied in this method to ensure universality, while a series of derivations and simplifications were carried out to obtain a concise formulation. To illustrate the proposed modeling method, a conical MRE isolator was introduced. Its basic mechanical equations were built based on equilibrium, deformation compatibility, constitutive equations and electromagnetic analysis. An iteration model and a highly efficient differential equation editor based model were then derived to solve the basic mechanical equations. The final simplified mechanical equations were obtained by re-fitting the simulations with a novel optimization algorithm. In the end, a verification test of the isolator proved the accuracy of the derived mechanical model and the modeling method.
Generalized vegetation map of north Merritt Island based on a simplified multispectral analysis
NASA Technical Reports Server (NTRS)
Poonai, P.; Floyd, W. J.; Rahmani, M. A.
1977-01-01
A simplified system for classification of multispectral data was used for making a generalized map of ground features of North Merritt Island. Subclassification of vegetation within broad categories yielded promising results, which led to a completely automatic method and to the production of satisfactory detailed maps. Changes in an area north of Happy Hammocks are evidently related to the water relations of the soil and are not associated with the last winter's freeze damage, which affected mainly the mangrove species, which are likely to reestablish themselves by natural processes. A supplementary investigation involving reflectance studies in the laboratory showed that the reflectance of detached citrus leaves, at wavelengths between 400 and 700 nanometers, exhibited some variation over a period of seven days during which the leaves were kept in a laboratory atmosphere.
Algorithm of reducing the false positives in IDS based on correlation Analysis
NASA Astrophysics Data System (ADS)
Liu, Jianyi; Li, Sida; Zhang, Ru
2018-03-01
This paper proposes an algorithm for reducing false positives in IDS based on correlation analysis. Firstly, the algorithm analyzes the distinguishing characteristics of false positives and real alarms and preliminarily screens out the false positives; it then applies attribute-similarity clustering to the alarms to further reduce their number; finally, according to the characteristics of multi-step attacks, it associates alarms by their causal relationships. The paper also proposes a reverse causation algorithm, based on previously proposed attack association methods, that turns alarm information into a complete attack path. Experiments show that the algorithm reduces the number of alarms, improves the efficiency of alarm processing, and contributes to the identification of attack purposes and to improved alarm accuracy.
Psychometric Evaluation of the Simplified Chinese Version of Flourishing Scale
ERIC Educational Resources Information Center
Tang, Xiaoqing; Duan, Wenjie; Wang, Zhizhang; Liu, Tianyuan
2016-01-01
Objectives: The Flourishing Scale (FS) was developed to measure psychological well-being from the eudaimonic perspective, highlighting the flourishing of human functioning. This article evaluated the psychometric characteristics of the simplified Chinese version of FS among a Chinese community population. Method: A total of 433 participants from…
NASA Astrophysics Data System (ADS)
Ding, Liang; Gao, Haibo; Liu, Zhen; Deng, Zongquan; Liu, Guangjun
2015-12-01
Identifying the mechanical property parameters of planetary soil based on terramechanics models, using in-situ data obtained from autonomous planetary exploration rovers, is both an important scientific goal and essential for control strategy optimization and high-fidelity simulation of rovers. However, identifying all the terrain parameters is a challenging task because of the nonlinear and coupled nature of the involved functions. Three parameter identification methods are presented in this paper to serve different purposes, based on an improved terramechanics model that takes into account the effects of slip, wheel lugs, etc. Parameter sensitivity and the coupling of the equations are analyzed, and the parameters are grouped according to their sensitivity to the normal force, resistance moment and drawbar pull. An iterative identification method using the original integral model is developed first. In order to realize real-time identification, the model is then simplified by linearizing the normal and shearing stresses to derive decoupled closed-form analytical equations. Each equation contains one or two groups of soil parameters, making step-by-step identification of all the unknowns feasible. Experiments were performed using six different types of single wheels as well as a four-wheeled rover moving on planetary soil simulant. All the unknown model parameters were identified using the measured data and compared with the values obtained by conventional experiments. It is verified that the proposed iterative identification method provides improved accuracy, making it suitable for scientific studies of soil properties, whereas the step-by-step identification methods based on simplified models require less calculation time, making them more suitable for real-time applications. The models have less than a 10% margin of error compared with the measured results when predicting the interaction forces and moments using the corresponding identified parameters.
NASA Astrophysics Data System (ADS)
Won, Jun Yeon; Ko, Guen Bae; Lee, Jae Sung
2016-10-01
In this paper, we propose a fully time-based multiplexing and readout method that uses the principle of the global positioning system. Time-based multiplexing greatly simplifies the multiplexing circuits: only the innate traces that connect the signal pins of the silicon photomultiplier (SiPM) channels to the readout channels are used as the multiplexing circuit. Every SiPM channel is connected to a delay grid that consists of traces on a printed circuit board, and the inherent transit times from each SiPM channel to the readout channels encode the position information uniquely. Thus, the position of each SiPM can be identified using time difference of arrival (TDOA) measurements. The proposed multiplexing also allows simplification of the readout circuit using a time-to-digital converter (TDC) implemented in a field-programmable gate array (FPGA), where the time-over-threshold (ToT) is used to extract the energy information after multiplexing. In order to verify the proposed multiplexing method, we built a positron emission tomography (PET) detector that consisted of an array of 4 × 4 LGSO crystals, each with dimensions of 3 × 3 × 20 mm3, and one-to-one coupled SiPM channels. We first employed a waveform sampler as an initial study, and then replaced the waveform sampler with an FPGA-TDC to further simplify the readout circuits. The 16 crystals were clearly resolved using only the time information obtained from the four readout channels. The coincidence resolving times (CRTs) were 382 and 406 ps FWHM when using the waveform sampler and the FPGA-TDC, respectively. The proposed simple multiplexing and readout methods can be useful for time-of-flight (TOF) PET scanners.
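The position-decoding idea can be sketched as a nearest-neighbour match between the measured arrival-time differences and a calibrated delay table; the 16-channel delay table, jitter level and timing values below are synthetic assumptions, not the detector's actual delay grid:

```python
# Minimal sketch: assign an event to a SiPM channel from TDOA measurements at four readouts.
import numpy as np

rng = np.random.default_rng(5)
n_channels, n_readouts = 16, 4
delay_table = rng.uniform(0.0, 5.0, size=(n_channels, n_readouts))  # ns, from calibration

def decode_channel(arrival_times, table):
    """arrival_times: measured times at the readout channels for one event (ns)."""
    meas_tdoa = arrival_times - arrival_times[0]   # differences with respect to readout 0
    table_tdoa = table - table[:, :1]              # same convention for every channel
    return int(np.argmin(np.sum((table_tdoa - meas_tdoa) ** 2, axis=1)))

# Simulate an event on channel 9 with 50 ps timing jitter and an unknown common offset
true_ch = 9
event_times = 12.3 + delay_table[true_ch] + rng.normal(0.0, 0.05, n_readouts)
print(decode_channel(event_times, delay_table))
```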
On consistent inter-view synthesis for autostereoscopic displays
NASA Astrophysics Data System (ADS)
Tran, Lam C.; Bal, Can; Pal, Christopher J.; Nguyen, Truong Q.
2012-03-01
In this paper we present a novel stereo view synthesis algorithm that is highly accurate with respect to inter-view consistency, thus enabling stereo content to be viewed on autostereoscopic displays. The algorithm finds identical occluded regions within each virtual view and aligns them together to extract a surrounding background layer. The background layer for each occluded region is then used with an exemplar based inpainting method to synthesize all virtual views simultaneously. Our algorithm requires the alignment and extraction of background layers for each occluded region; however, these two steps are done efficiently with lower computational complexity in comparison to previous approaches using the exemplar based inpainting algorithms. Thus, it is more efficient than existing algorithms that synthesize one virtual view at a time. This paper also describes the implementation of a simplified GPU accelerated version of the approach and its implementation in CUDA. Our CUDA method has sublinear complexity in terms of the number of views that need to be generated, which makes it especially useful for generating content for autostereoscopic displays that require many views to operate. An objective of our work is to allow the user to change depth and viewing perspective on the fly. Therefore, to further accelerate the CUDA variant of our approach, we present a modified version of our method to warp the background pixels from reference views to a middle view to recover background pixels. We then use an exemplar based inpainting method to fill in the occluded regions. We use warping of the foreground from the reference images and background from the filled regions to synthesize new virtual views on the fly. Our experimental results indicate that the simplified CUDA implementation decreases running time by orders of magnitude with negligible loss in quality.
Dynamic modeling method for infrared smoke based on enhanced discrete phase model
NASA Astrophysics Data System (ADS)
Zhang, Zhendong; Yang, Chunling; Zhang, Yan; Zhu, Hongbo
2018-03-01
The dynamic modeling of infrared (IR) smoke plays an important role in IR scene simulation systems, and its accuracy directly influences the system veracity. However, current IR smoke models cannot provide high veracity, because certain physical characteristics are frequently ignored in the fluid simulation: the discrete phase is simplified as a continuous phase, and the spinning of the IR decoy missile body is ignored. To address this defect, this paper proposes a dynamic modeling method for IR smoke, based on an enhanced discrete phase model (DPM). A mathematical simulation model based on an enhanced DPM is built and a dynamic computing fluid mesh is generated. The dynamic model of IR smoke is then established using an extended equivalent-blackbody-molecule model. Experiments demonstrate that this model realizes a dynamic method for modeling IR smoke with higher veracity.
An improved pulse coupled neural network with spectral residual for infrared pedestrian segmentation
NASA Astrophysics Data System (ADS)
He, Fuliang; Guo, Yongcai; Gao, Chao
2017-12-01
Pulse coupled neural network (PCNN) has become a significant tool for infrared pedestrian segmentation, and a variety of relevant methods have been developed at present. However, these existing models commonly suffer from poor adaptability to infrared noise, inaccurate segmentation results, and fairly complex parameter determination. This paper presents an improved PCNN model that integrates a simplified framework and spectral residual to alleviate the above problems. In this model, firstly, the weight matrix of the feeding input field is designed using anisotropic Gaussian kernels (ANGKs), in order to suppress the infrared noise effectively. Secondly, the normalized spectral residual saliency is introduced as the linking coefficient to markedly enhance the edges and structural characteristics of segmented pedestrians. Finally, an improved dynamic threshold based on the average gray values of the iterative segmentation is employed to simplify the original PCNN model. Experiments on the IEEE OTCBVS benchmark and the infrared pedestrian image database built by our laboratory demonstrate the superiority of our model, in both subjective visual effects and objective quantitative evaluations of information differences and segmentation errors, compared with other classic segmentation methods.
Software Certification for Temporal Properties With Affordable Tool Qualification
NASA Technical Reports Server (NTRS)
Xia, Songtao; DiVito, Benedetto L.
2005-01-01
It has been recognized that a framework based on proof-carrying code (also called semantic-based software certification in its community) could be used as a candidate software certification process for the avionics industry. To meet this goal, tools in the "trust base" of a proof-carrying code system must be qualified by regulatory authorities. A family of semantic-based software certification approaches is described, each different in expressive power, level of automation and trust base. Of particular interest is the so-called abstraction-carrying code, which can certify temporal properties. When a pure abstraction-carrying code method is used in the context of industrial software certification, the fact that the trust base includes a model checker would incur a high qualification cost. This position paper proposes a hybrid of abstraction-based and proof-based certification methods so that the model checker used by a client can be significantly simplified, thereby leading to lower cost in tool qualification.
NASA Astrophysics Data System (ADS)
Nøtthellen, Jacob; Konst, Bente; Abildgaard, Andreas
2014-08-01
Purpose: to present a new and simplified method for pixel-wise determination of the signal-to-noise ratio improvement factor K_SNR of an antiscatter grid, when used with a digital imaging system. The method was based on approximations of published formulas. The simplified estimate of K_SNR² may be used as a decision tool for whether or not to use an antiscatter grid. Methods: the primary transmission of the grid Tp was determined with and without a phantom present using a pattern of beam stops. The Bucky factor B was measured with and without a phantom present. Hence K_SNR² maps were created based on Tp and B. A formula was developed to calculate K_SNR² from the measured Bs without using the measured Tp. The formula was applied to two exposures of anthropomorphic phantoms, adult legs and baby chest, and to two homogeneous poly(methyl methacrylate) (PMMA) phantoms, 5 cm and 10 cm thick. The results from the anthropomorphic phantoms were compared to those based on the beam stop method. The results for the PMMA phantoms were compared to a study that used a contrast-detail phantom. Results: 2D maps of K_SNR² over the entire adult legs and baby chest phantoms were created. The maps indicate that it is advantageous to use the antiscatter grid for imaging of the adult legs. For baby chest imaging the antiscatter grid is not recommended if only the lung regions are of interest. The K_SNR² maps based on the new method correspond to those from the beam stop method, and the K_SNR² values for the homogeneous phantoms obtained with the two different approaches also agreed well with each other. Conclusion: a method to measure 2D K_SNR² associated with grid use in a digital radiography system was developed and validated. The proposed method requires four exposures and use of a simple formula. It is fast and provides adequate estimates of K_SNR².
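A minimal sketch of the pixel-wise estimate, assuming the commonly cited approximation K_SNR² ≈ Tp²·B (the paper's exact formula, which also avoids the measured Tp, is not reproduced here); the map values are placeholders.

import numpy as np

def k2_snr_map(tp_map, bucky_map):
    """Pixel-wise K_SNR^2 using the common approximation K_SNR^2 ≈ Tp^2 * B."""
    return np.square(tp_map) * bucky_map

# Illustrative 2D maps (e.g. interpolated from beam-stop measurements).
tp = np.full((4, 4), 0.65)       # assumed primary transmission of the grid
b = np.full((4, 4), 3.0)         # assumed Bucky factor
k2 = k2_snr_map(tp, b)
print("grid beneficial where K_SNR^2 > 1:", bool((k2 > 1.0).all()))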
Structural analysis for preliminary design of High Speed Civil Transport (HSCT)
NASA Technical Reports Server (NTRS)
Bhatia, Kumar G.
1992-01-01
In the preliminary design environment, there is a need for quick evaluation of configuration and material concepts. The simplified beam representations used for subsonic, high aspect ratio wing planforms are not applicable to the low aspect ratio configurations typical of supersonic transports. There is a requirement to develop methods for efficient generation of structural arrangements and finite element representations to support multidisciplinary analysis and optimization. In addition, the empirical databases required to validate prediction methods need to be improved for high speed civil transport (HSCT) type configurations.
NASA Technical Reports Server (NTRS)
Barranger, John P.
1990-01-01
A novel optical method of measuring 2-D surface strain is proposed. Two linear strains along orthogonal axes and the shear strain between those axes are determined by a variation of Yamaguchi's laser-speckle strain gage technique. It offers the advantages of shorter data acquisition times, less stringent alignment requirements, and reduced decorrelation effects when compared to a previously implemented optical strain rosette technique. The method automatically cancels the translational and rotational components of rigid body motion while simplifying the optical system and improving the speed of response.
Simplified dichromated gelatin hologram recording process
NASA Technical Reports Server (NTRS)
Georgekutty, Tharayil G.; Liu, Hua-Kuang
1987-01-01
A simplified method for making dichromated gelatin (DCG) holographic optical elements (HOE) has been discovered. The method is much less tedious and it requires a period of processing time comparable with that for processing a silver halide hologram. HOE characteristics including diffraction efficiency (DE), linearity, and spectral sensitivity have been quantitatively investigated. The quality of the holographic grating is very high. Ninety percent or higher diffraction efficiency has been achieved in simple plane gratings made by this process.
A simplified model for glass formation
NASA Technical Reports Server (NTRS)
Uhlmann, D. R.; Onorato, P. I. K.; Scherer, G. W.
1979-01-01
A simplified model of glass formation based on the formal theory of transformation kinetics is presented, which describes the critical cooling rates implied by the occurrence of glassy or partly crystalline bodies. In addition, an approach based on the nose of the time-temperature-transformation (TTT) curve as an extremum in temperature and time has provided a relatively simple relation between the activation energy for viscous flow in the undercooled region and the temperature of the nose of the TTT curve. Using this relation together with the simplified model, it now seems possible to predict cooling rates using only the liquidus temperature, glass transition temperature, and heat of fusion.
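For illustration only, the nose-based estimate of the critical cooling rate is often written as R_c ≈ (T_L − T_N)/t_N, with T_L the liquidus temperature and (T_N, t_N) the nose of the TTT curve; the sketch below uses invented numbers rather than data from the paper.

def critical_cooling_rate(t_liquidus, t_nose, time_nose):
    """Rough estimate R_c ≈ (T_L - T_N) / t_N from the nose of the TTT curve."""
    return (t_liquidus - t_nose) / time_nose

# Illustrative values only: temperatures in K, nose time in seconds.
print(critical_cooling_rate(1250.0, 950.0, 2.0), "K/s")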
Hong, Jun; Chen, Dongchu; Peng, Zhiqiang; Li, Zulin; Liu, Haibo; Guo, Jian
2018-05-01
A new method for measuring the alternating current (AC) half-wave voltage of a Mach-Zehnder modulator is proposed and verified by experiment in this paper. Based on opto-electronic self-oscillation technology, the physical relationship between the saturation output power of the oscillating signal and the AC half-wave voltage is revealed, and the value of the AC half-wave voltage is obtained by measuring the saturation output power of the oscillating signal. The experimental results show that the data measured with this new method agree with those of a traditional method, and neither an external microwave signal source nor calibration for different measurement frequencies is needed in our new method. The measuring process is simplified with this new method while the accuracy of the measurement is maintained, and it has good practical value.
Research study on stabilization and control: Modern sampled data control theory
NASA Technical Reports Server (NTRS)
Kuo, B. C.; Singh, G.; Yackel, R. A.
1973-01-01
A numerical analysis of spacecraft stability parameters was conducted. The analysis is based on a digital approximation by point-by-point state comparison. The technique used is that of approximating a continuous-data system by a sampled-data model through comparison of the states of the two systems. Application of the method to the digital redesign of the simplified one-axis dynamics of the Skylab is presented.
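A minimal sketch of the sampled-data approximation step, assuming a zero-order hold and a generic double-integrator plant in place of the Skylab dynamics; the matrices and sampling period are illustrative only.

import numpy as np
from scipy.signal import cont2discrete

# Illustrative continuous one-axis double-integrator dynamics (not Skylab data).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
T = 0.1  # sampling period, s

# Zero-order-hold discretization: x[k+1] = Ad x[k] + Bd u[k]
Ad, Bd, *_ = cont2discrete((A, B, np.eye(2), np.zeros((2, 1))), dt=T, method='zoh')
print(Ad)
print(Bd)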
Mining dynamic noteworthy functions in software execution sequences
Huang, Guoyan; Wang, Yuqian; He, Haitao; Ren, Jiadong
2017-01-01
As the quality of crucial entities can directly affect that of the software, their identification and protection become an important premise for effective software development, management, maintenance and testing, which thus contribute to improving software quality and its attack-defending ability. Most analyses and evaluations of important entities, such as code-based static structure analysis, are detached from the actual running of the software. In this paper, from the perspective of the software execution process, we propose an approach to mine dynamic noteworthy functions (DNFM) in software execution sequences. First, through software decompiling and tracking of stack changes, execution traces composed of a series of function addresses were acquired. Then these traces were modeled as execution sequences and simplified so as to obtain simplified sequences (SFS), followed by the extraction of patterns from the SFS through a pattern extraction (PE) algorithm. After that, the evaluation indicators inner-importance and inter-importance were designed to measure the noteworthiness of functions in the DNFM algorithm. Finally, these functions were sorted by their noteworthiness. Comparison and contrast were conducted on the experimental results from two traditional complex network-based node mining methods, namely PageRank and DegreeRank. The results show that the DNFM method can mine noteworthy functions in software effectively and precisely. PMID:28278276
Concept for a fast analysis method of the energy dissipation at mechanical joints
NASA Astrophysics Data System (ADS)
Wolf, Alexander; Brosius, Alexander
2017-10-01
When designing hybrid parts and structures one major challenge is the design, production and quality assessment of the joining points. While the polymeric composites themselves have excellent material properties, the necessary joints are often the weak link in assembled structures. This paper presents a method of measuring and analysing the energy dissipation at mechanical joining points of hybrid parts. A simplified model is applied based on the characteristic response to different excitation frequencies and amplitudes. The dissipation from damage is the result of relative movements between the joining partners and damaged fibres within the composite, whereas the visco-elastic material behaviour causes the intrinsic dissipation. The ambition is to transfer these research findings to the characterisation of mechanical joints in order to quickly assess the general quality of the joint with this non-destructive testing method. The inherent challenge for realising this method is the correct interpretation of the measured energy dissipation and its attribution to either a bad joining point or intrinsic material properties. In this paper the authors present the concept for energy dissipation measurements at different joining points. By inverse analysis a simplified fast semi-analytical model will be developed that allows for a quick basic quality assessment of a given joining point.
Design and optimization of a modal-independent linear ultrasonic motor.
Zhou, Shengli; Yao, Zhiyuan
2014-03-01
To simplify the design of the linear ultrasonic motor (LUSM) and improve its output performance, a method of modal decoupling for LUSMs is proposed in this paper. The specific embodiment of this method is decoupling of the traditional LUSM stator's complex vibration into two simple vibrations, with each vibration implemented by one vibrator. Because the two vibrators are designed independently, their frequencies can be tuned independently and frequency consistency is easy to achieve. Thus, the method can simplify the design of the LUSM. Based on this method, a prototype modal-independent LUSM is designed and fabricated. The motor reaches its maximum thrust force of 47 N, maximum unloaded speed of 0.43 m/s, and maximum power of 7.85 W at an applied voltage of 200 Vpp. The motor's structure is then optimized by controlling the difference between the two vibrators' resonance frequencies to reach larger output speed, thrust, and power. The optimized results show that when the frequency difference is 73 Hz, the output force, speed, and power reach their maximum values. At the input voltage of 200 Vpp, the motor reaches its maximum thrust force of 64.2 N, maximum unloaded speed of 0.76 m/s, maximum power of 17.4 W, maximum thrust-weight ratio of 23.7, and maximum efficiency of 39.6%.
Modal kinematics for multisection continuum arms.
Godage, Isuru S; Medrano-Cerda, Gustavo A; Branson, David T; Guglielmino, Emanuele; Caldwell, Darwin G
2015-05-13
This paper presents a novel spatial kinematic model for multisection continuum arms based on mode shape functions (MSF). Modal methods have been used in many disciplines from finite element methods to structural analysis to approximate complex and nonlinear parametric variations with simple mathematical functions. Given certain constraints and required accuracy, this helps to simplify complex phenomena with numerically efficient implementations leading to fast computations. A successful application of the modal approximation techniques to develop a new modal kinematic model for general variable length multisection continuum arms is discussed. The proposed method solves the limitations associated with previous models and introduces a new approach for readily deriving exact, singularity-free and unique MSF's that simplifies the approach and avoids mode switching. The model is able to simulate spatial bending as well as straight arm motions (i.e., pure elongation/contraction), and introduces inverse position and orientation kinematics for multisection continuum arms. A kinematic decoupling feature, splitting position and orientation inverse kinematics is introduced. This type of decoupling has not been presented for these types of robotic arms before. The model also carefully accounts for physical constraints in the joint space to provide enhanced insight into practical mechanics and impose actuator mechanical limitations onto the kinematics thus generating fully realizable results. The proposed method is easily applicable to a broad spectrum of continuum arm designs.
Kang, Dong Young; Kim, Won-Suk; Heo, In Sook; Park, Young Hun; Lee, Seungho
2010-11-01
Hyaluronic acid (HA) was extracted on a relatively large scale from rooster comb using a method similar to that reported previously. The extraction method was modified to simplify it and to reduce time and cost in order to accommodate a large-scale extraction. Five hundred grams of frozen rooster combs yielded about 500 mg of dried HA. Extracted HA was characterized using asymmetrical flow field-flow fractionation (AsFlFFF) coupled online to a multiangle light scattering detector and a refractive index detector to determine the molecular size, molecular weight (MW) distribution, and molecular conformation of HA. For characterization of HA, AsFlFFF was operated by a simplified two-step procedure, instead of the conventional three-step procedure, where the first two steps (sample loading and focusing) were combined into one to avoid the adsorption of viscous HA onto the channel membrane. The simplified two-step AsFlFFF yielded reasonably good separations of HA molecules based on their MWs. The weight average MW (Mw) and the average root-mean-square (RMS) radius of HA extracted from rooster comb were 1.20×10^6 and 94.7 nm, respectively. When the sample solution was filtered through a 0.45 μm disposable syringe filter, they were reduced to 3.8×10^5 and 50.1 nm, respectively. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Rubble masonry response under cyclic actions: The experience of L’Aquila city (Italy)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fonti, Roberta, E-mail: roberta.fonti@tum.de; Barthel, Rainer, E-mail: r.barthel@lrz.tu-muenchen.de; Formisano, Antonio, E-mail: antoform@unina.it
2015-12-31
Several methods of analysis are available in engineering practice to study old masonry constructions. Two commonly used approaches in the field of seismic engineering are global and local analyses. Despite several years of research in this field, the various methodologies suffer from a lack of comprehensive experimental validation. This is mainly due to the difficulty in simulating the many different kinds of masonry and, accordingly, the non-linear response under horizontal actions. This issue can be addressed by examining the local response of isolated panels under monotonic and/or alternate actions. Different testing methodologies are commonly used to identify the local response of old masonry. These range from simplified pull-out tests to sophisticated in-plane monotonic tests. However, there is a lack of both knowledge and critical comparison between experimental validations and numerical simulations. This is mainly due to the difficulties in implementing irregular settings within both simplified and advanced numerical analyses. Similarly, the simulation of degradation effects within laboratory tests is difficult with respect to old masonry in-situ boundary conditions. Numerical models, particularly on rubble masonry, are commonly simplified. They are mainly based on a kinematic chain of rigid blocks able to perform different “modes of damage” of structures subjected to horizontal actions. This paper presents an innovative methodology for testing; its aim is to identify a simplified model for out-of-plane response of rubbleworks with respect to the experimental evidence. The case study of L’Aquila district is discussed.
Design of Linear Control System for Wind Turbine Blade Fatigue Testing
NASA Astrophysics Data System (ADS)
Toft, Anders; Roe-Poulsen, Bjarke; Christiansen, Rasmus; Knudsen, Torben
2016-09-01
This paper proposes a linear method for wind turbine blade fatigue testing at Siemens Wind Power. The setup consists of a blade, an actuator (motor and load mass) that acts on the blade with a sinusoidal moment, and a distribution of strain gauges to measure the blade flexure. Based on the frequency of the sinusoidal input, the blade will start oscillating with a given gain; hence the objective of the fatigue test is to make the blade oscillate with a controlled amplitude. The system currently in use is based on frequency control, which involves some non-linearities that make the system difficult to control. To make a linear controller, a different approach has been chosen, namely making a controller which regulates not the input frequency but the input amplitude. A non-linear mechanical model for the blade and the motor has been constructed. This model has been simplified based on the desired output, namely the amplitude of the blade. Furthermore, the model has been linearised to make it suitable for linear analysis and control design methods. The controller is designed based on the simplified and linearised model, and its gain parameter is determined using pole placement. The model variants have been simulated in the MATLAB toolbox Simulink, which shows that the controller design based on the simple model performs adequately with the non-linear model. Moreover, the developed controller solves the robustness issue found in the existing solution and also reduces the energy needed for actuation, as it always operates at the blade eigenfrequency.
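As a hedged sketch of the pole-placement step on a linearised model, the code below uses a generic single-mode mass-spring-damper in place of the actual blade model; the modal mass, damping, stiffness and chosen pole locations are assumptions for illustration, not Siemens data.

import numpy as np
from scipy.signal import place_poles

# Illustrative linearised blade model (single-mode mass-spring-damper).
m, c, k = 500.0, 50.0, 2.0e5           # assumed modal mass, damping, stiffness
A = np.array([[0.0, 1.0], [-k / m, -c / m]])
B = np.array([[0.0], [1.0 / m]])

# Choose closed-loop poles that add damping while keeping the eigenfrequency.
wn = np.sqrt(k / m)
poles = [-0.1 * wn + 1j * wn, -0.1 * wn - 1j * wn]
K = place_poles(A, B, poles).gain_matrix
print("state-feedback gains:", K)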
Sun, Hao; Guo, Jianbin; Wu, Shubiao; Liu, Fang; Dong, Renjie
2017-09-01
The volatile fatty acids (VFAs) concentration has been considered as one of the most sensitive process performance indicators in anaerobic digestion (AD) process. However, the accurate determination of VFAs concentration in AD processes normally requires advanced equipment and complex pretreatment procedures. A simplified method with fewer sample pretreatment procedures and improved accuracy is greatly needed, particularly for on-site application. This report outlines improvements to the Nordmann method, one of the most popular titrations used for VFA monitoring. The influence of ion and solid interfering subsystems in titrated samples on results accuracy was discussed. The total solid content in titrated samples was the main factor affecting accuracy in VFA monitoring. Moreover, a high linear correlation was established between the total solids contents and VFA measurement differences between the traditional Nordmann equation and gas chromatography (GC). Accordingly, a simplified titration method was developed and validated using a semi-continuous experiment of chicken manure anaerobic digestion with various organic loading rates. The good fitting of the results obtained by this method in comparison with GC results strongly supported the potential application of this method to VFA monitoring. Copyright © 2017. Published by Elsevier Ltd.
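A minimal sketch of the kind of total-solids (TS) based correction implied by the reported linear correlation; the calibration points, slope and intercept here are invented placeholders, not the regression from the study.

import numpy as np

# Hypothetical calibration: difference (GC - Nordmann) vs. total solids (TS, %).
ts = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
diff = np.array([0.12, 0.31, 0.48, 0.70, 0.88])   # g/L, illustrative only

slope, intercept = np.polyfit(ts, diff, 1)

def vfa_corrected(vfa_nordmann, total_solids):
    """Correct a Nordmann titration result using the TS-based linear model."""
    return vfa_nordmann + slope * total_solids + intercept

print(round(vfa_corrected(2.5, 3.5), 2), "g/L")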
Development of Generation System of Simplified Digital Maps
NASA Astrophysics Data System (ADS)
Uchimura, Keiichi; Kawano, Masato; Tokitsu, Hiroki; Hu, Zhencheng
In recent years, digital maps have been used in a variety of scenarios, including car navigation systems and map information services over the Internet. These digital maps are formed by multiple layers of maps of different scales; the map data most suitable for the specific situation are used. Currently, the production of map data of different scales is done by hand due to constraints related to processing time and accuracy. We conducted research concerning technologies for automatic generation of simplified map data from detailed map data. In the present paper, the authors propose the following: (1) a method to transform data related to streets, rivers, etc. containing widths into line data, (2) a method to eliminate the component points of the data, and (3) a method to eliminate data that lie below a certain threshold. In addition, in order to evaluate the proposed method, a user survey was conducted; in this survey we compared maps generated using the proposed method with the commercially available maps. From the viewpoint of the amount of data reduction and processing time, and on the basis of the results of the survey, we confirmed the effectiveness of the automatic generation of simplified maps using the proposed methods.
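The abstract does not give the exact point-elimination rule; as one plausible illustration, the classic Douglas-Peucker simplification below drops component points that lie within a tolerance of the chord, which is a common way to thin line data derived from streets or rivers.

def douglas_peucker(points, tol):
    """Classic line simplification: drop points closer than tol to the chord."""
    pts = [tuple(map(float, p)) for p in points]
    if len(pts) < 3:
        return pts
    (x1, y1), (x2, y2) = pts[0], pts[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = (dx * dx + dy * dy) ** 0.5 or 1e-12
    # Perpendicular distance of each interior point to the start-end chord.
    dists = [abs(dx * (y - y1) - dy * (x - x1)) / norm for x, y in pts[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__)
    if dists[i] > tol:
        left = douglas_peucker(pts[:i + 2], tol)
        right = douglas_peucker(pts[i + 1:], tol)
        return left[:-1] + right
    return [pts[0], pts[-1]]

street = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)]
print(douglas_peucker(street, tol=0.5))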
Zero cylinder coordinate system approach to image reconstruction in fan beam ICT
NASA Astrophysics Data System (ADS)
Yan, Yan-Chun; Xian, Wu; Hall, Ernest L.
1992-11-01
State-of-the-art transform algorithms produce excellent and efficient reconstructed images in most applications, especially in medical and industrial CT. Based on the Zero Cylinder Coordinate (ZCC) system presented in this paper, a new transform algorithm for image reconstruction in fan beam industrial CT is suggested. It greatly reduces the amount of computation in the backprojection, which requires only two INC instructions to calculate the weighting factor and the subcoordinate. A new backprojector is designed, which simplifies its assembly-line mechanism based on the ZCC method. Finally, simulation results on a microcomputer are given, which show that this method is effective and practical.
Formation Control for Water-Jet USV Based on Bio-Inspired Method
NASA Astrophysics Data System (ADS)
Fu, Ming-yu; Wang, Duan-song; Wang, Cheng-long
2018-03-01
The formation control problem for underactuated unmanned surface vehicles (USVs) is addressed by a distributed strategy based on a virtual leader strategy. The control system takes account of disturbances induced by the external environment. With the coordinate transformation, the advantage of the proposed scheme is that the control point can be any point of the ship instead of the center of gravity. By introducing a bio-inspired model, the formation control problem is addressed with the backstepping method. This avoids complicated computation, simplifies the control law, and smooths the input signals. The uniform ultimate boundedness of the system is proven by Lyapunov stability theory with Young's inequality. Simulation results are presented to verify the effectiveness and robustness of the proposed controller.
Qiu, Ling; Guo, Xiuzhi; Zhu, Yan; Shou, Weilin; Gong, Mengchun; Zhang, Lin; Han, Huijuan; Quan, Guoqiang; Xu, Tao; Li, Hang; Li, Xuewang
2013-01-01
To investigate the impact of serum creatinine measurement on the applicability of glomerular filtration rate (GFR) evaluation equations. 99mTc-DTPA plasma clearance rate was used as the GFR reference (rGFR) in patients with chronic kidney disease (CKD). Serum creatinine was measured using enzymatic or picric acid creatinine reagent. The GFR of the patients was estimated using the Cockcroft-Gault equation corrected for body surface area, the simplified Modification of Diet in Renal Disease (MDRD) equation, the simplified MDRD equation corrected to isotope dilution mass spectrometry, the CKD epidemiology collaborative research equation, and two Chinese simplified MDRD equations. Significant differences in the eGFR results estimated through enzymatic and picric acid methods were observed for the same evaluation equation. The intraclass correlation coefficient (ICC) of eGFR when the creatinine was measured by the picric acid method was significantly lower than that of the enzymatic method. The assessment accuracy of every equation using the enzymatic method to measure creatinine was significantly higher than that using the picric acid method when rGFR was ≥ 60 mL/min/1.73 m². A significant difference was demonstrated in the same GFR evaluation equation using the picric acid and enzymatic methods. The enzymatic creatinine method was better than the picric acid method.
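For reference, two of the equations mentioned above have widely published closed forms; the sketch below implements the simplified (four-variable) MDRD equation and the Cockcroft-Gault formula with creatinine in mg/dL, omitting the race coefficient, the body-surface-area correction and the Chinese-modified coefficients, so it illustrates the calculation rather than reproducing the exact equations used in the study.

def mdrd_simplified(scr_mg_dl, age, female, idms=True):
    """Simplified MDRD eGFR in mL/min/1.73 m^2 (175 for IDMS-traceable creatinine)."""
    k = 175.0 if idms else 186.0
    egfr = k * scr_mg_dl ** -1.154 * age ** -0.203
    return egfr * 0.742 if female else egfr

def cockcroft_gault(scr_mg_dl, age, weight_kg, female):
    """Cockcroft-Gault creatinine clearance in mL/min (uncorrected for BSA)."""
    crcl = (140.0 - age) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

print(round(mdrd_simplified(1.2, 60, female=False), 1))
print(round(cockcroft_gault(1.2, 60, 70.0, female=False), 1))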
Simplified method for the transverse bending analysis of twin celled concrete box girder bridges
NASA Astrophysics Data System (ADS)
Chithra, J.; Nagarajan, Praveen; S, Sajith A.
2018-03-01
Box girder bridges are one of the best options for bridges with spans of more than 25 m. For the study of these bridges, three-dimensional finite element analysis is the best suited method. However, performing three-dimensional analysis for routine design is difficult as well as time consuming. Also, the software used for three-dimensional analysis is very expensive. Hence designers resort to simplified analysis for predicting longitudinal and transverse bending moments. Among the many analytical methods used to find the transverse bending moments, simplified frame analysis (SFA) is the simplest and the most widely used in design offices. Results from SFA can be used for the preliminary analysis of concrete box girder bridges. From the review of the literature, it is found that the majority of the work done using SFA is restricted to the analysis of single cell box girder bridges. Not much work has been done on the analysis of multi-cell concrete box girder bridges. In the present study, a double cell concrete box girder bridge is chosen. The bridge is modelled using three-dimensional finite element software and the results are then compared with the simplified frame analysis. The study mainly focuses on establishing correction factors for the transverse bending moment values obtained from SFA.
Report on FY17 testing in support of integrated EPP-SMT design methods development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yanli .; Jetter, Robert I.; Sham, T. -L.
The goal of the proposed integrated Elastic Perfectly-Plastic (EPP) and Simplified Model Test (SMT) methodology is to incorporate a SMT data-based approach for creep-fatigue damage evaluation into the EPP methodology to avoid the separate evaluation of creep and fatigue damage and eliminate the requirement for stress classification in current methods, thus greatly simplifying the evaluation of elevated temperature cyclic service. The purpose of this methodology is to minimize over-conservatism while properly accounting for localized defects and stress risers. To support the implementation of the proposed methodology and to verify the applicability of the code rules, thermomechanical tests continued in FY17. This report presents the recent test results for Type 1 SMT specimens on Alloy 617 with long hold times, pressurization SMT on Alloy 617, and two-bar thermal ratcheting test results on SS316H over the temperature range of 405 °C to 705 °C. Preliminary EPP strain range analyses of the two-bar tests are critically evaluated and compared with the experimental results.
Erosion estimation of guide vane end clearance in hydraulic turbines with sediment water flow
NASA Astrophysics Data System (ADS)
Han, Wei; Kang, Jingbo; Wang, Jie; Peng, Guoyi; Li, Lianyuan; Su, Min
2018-04-01
The end surface of the guide vane or head cover is one of the parts most seriously affected by sediment erosion in high-head hydraulic turbines. In order to investigate the relationship between the erosion depth of the wall surface and the characteristic parameter of erosion, an estimation method including a simplified flow model and a modified erosion calculation function is proposed in this paper. The flow between the end surfaces of the guide vane and head cover is simplified as a clearance flow around a circular cylinder with a backward facing step. The erosion characteristic parameter c_s·w_s^3 is calculated with the mixture model for multiphase flow and the renormalization group (RNG) k-ε turbulence model under the actual working conditions, based on which the erosion depths of the guide vane and head cover end surfaces are estimated with a modified erosion coefficient K. The estimation results agree well with the actual situation. It is shown that the estimation method is reasonable for erosion prediction of the guide vane and can provide a significant reference for determining the optimal maintenance cycle of hydraulic turbines in the future.
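Assuming the erosion characteristic parameter has the form c_s·w_s³ (sediment concentration times the cube of the near-wall velocity), a deliberately simplified depth estimate of the kind described, with a purely illustrative coefficient K, operating time and units, might look like the sketch below.

def erosion_depth(k_coeff, c_s, w_s, hours):
    """Rough erosion-depth estimate of the form depth = K * c_s * w_s**3 * t."""
    return k_coeff * c_s * w_s ** 3 * hours

# Illustrative values only: K (empirical), sediment concentration (kg/m^3),
# near-wall velocity (m/s) and operating time (h); units are placeholders.
print(erosion_depth(k_coeff=1.0e-7, c_s=2.5, w_s=12.0, hours=4000.0), "mm")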
Oak Ridge Spallation Neutron Source (ORSNS) target station design integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
McManamy, T.; Booth, R.; Cleaves, J.
1996-06-01
The conceptual design for a 1- to 3-MW short pulse spallation source with a liquid mercury target has been started recently. The design tools and methods being developed to define requirements, integrate the work, and provide early cost guidance will be presented with a summary of the current target station design status. The initial design point was selected with performance and cost estimate projections by a systems code. This code was developed recently using cost estimates from the Brookhaven Pulsed Spallation Neutron Source study and experience from the Advanced Neutron Source Project's conceptual design. It will be updated and improved as the design develops. Performance was characterized by a simplified figure of merit based on a ratio of neutron production to costs. A work breakdown structure was developed, with simplified systems diagrams used to define interfaces and system responsibilities. A risk assessment method was used to identify potential problems, to identify required research and development (R&D), and to aid contingency development. Preliminary 3-D models of the target station are being used to develop remote maintenance concepts and to estimate costs.
Memory functions reveal structural properties of gene regulatory networks
Perez-Carrasco, Ruben
2018-01-01
Gene regulatory networks (GRNs) control cellular function and decision making during tissue development and homeostasis. Mathematical tools based on dynamical systems theory are often used to model these networks, but the size and complexity of these models mean that their behaviour is not always intuitive and the underlying mechanisms can be difficult to decipher. For this reason, methods that simplify and aid exploration of complex networks are necessary. To this end we develop a broadly applicable form of the Zwanzig-Mori projection. By first converting a thermodynamic state ensemble model of gene regulation into mass action reactions we derive a general method that produces a set of time evolution equations for a subset of components of a network. The influence of the rest of the network, the bulk, is captured by memory functions that describe how the subnetwork reacts to its own past state via components in the bulk. These memory functions provide probes of near-steady state dynamics, revealing information not easily accessible otherwise. We illustrate the method on a simple cross-repressive transcriptional motif to show that memory functions not only simplify the analysis of the subnetwork but also have a natural interpretation. We then apply the approach to a GRN from the vertebrate neural tube, a well characterised developmental transcriptional network composed of four interacting transcription factors. The memory functions reveal the function of specific links within the neural tube network and identify features of the regulatory structure that specifically increase the robustness of the network to initial conditions. Taken together, the study provides evidence that Zwanzig-Mori projections offer powerful and effective tools for simplifying and exploring the behaviour of GRNs. PMID:29470492
Efficient model checking of network authentication protocol based on SPIN
NASA Astrophysics Data System (ADS)
Tan, Zhi-hua; Zhang, Da-fang; Miao, Li; Zhao, Dan
2013-03-01
Model checking is a very useful technique for verifying network authentication protocols. In order to improve the efficiency of modeling and verification of such protocols with model checking technology, this paper first proposes a universal formal description method for the protocol. Combined with the model checker SPIN, the method can conveniently verify the properties of the protocol. By applying several model simplification strategies, this paper models several protocols efficiently and reduces the state space of the model. Compared with the previous literature, this paper achieves a higher degree of automation and better verification efficiency. Finally, based on the method described in the paper, we model and verify the Privacy and Key Management (PKM) authentication protocol. The experimental results show that the model checking method is effective, and it is useful for other authentication protocols as well.
NASA Astrophysics Data System (ADS)
Klein, Andreas; Gerlach, Gerald
1998-09-01
This paper deals with the simulation of the fluid-structure interaction phenomena in micropumps. The proposed solution approach is based on external coupling of two different solvers, which are considered here as `black boxes'. Therefore, no specific intervention into the program code is necessary, and the solvers can be exchanged arbitrarily. For the realization of the external iteration loop, two algorithms are considered: the relaxation-based Gauss-Seidel method and the computationally more extensive Newton method. It is demonstrated with a simplified test case that for rather weak coupling, the Gauss-Seidel method is sufficient. However, by simply changing the considered fluid from air to water, the two physical domains become strongly coupled, and the Gauss-Seidel method fails to converge in this case. The Newton iteration scheme must be used instead.
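A minimal sketch of the relaxation-based (Gauss-Seidel) coupling loop between two black-box solvers; the two stand-in solver functions, the relaxation factor and the tolerance are invented for illustration and carry none of the micropump physics.

def fluid_solver(displacement):
    # Stand-in for the external fluid solver: pressure responds to the wall shape.
    return 100.0 - 40.0 * displacement

def structure_solver(pressure):
    # Stand-in for the structural solver: membrane deflection under pressure.
    return pressure / 200.0

def gauss_seidel_coupling(relaxation=0.5, tol=1e-8, max_iter=100):
    d = 0.0
    for i in range(max_iter):
        p = fluid_solver(d)
        d_new = structure_solver(p)
        if abs(d_new - d) < tol:
            return d_new, i
        d = (1.0 - relaxation) * d + relaxation * d_new  # under-relaxation
    raise RuntimeError("coupling iteration did not converge")

print(gauss_seidel_coupling())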
NASA Astrophysics Data System (ADS)
Doha, E. H.; Abd-Elhameed, W. M.
2005-09-01
We present double ultraspherical spectral methods that allow the efficient approximate solution of parabolic partial differential equations in a square subject to the most general inhomogeneous mixed boundary conditions. The differential equations with their boundary and initial conditions are reduced to systems of ordinary differential equations for the time-dependent expansion coefficients. These systems are greatly simplified by using tensor matrix algebra, and are solved by using the step-by-step method. Numerical applications of how to use these methods are described. Numerical results obtained compare favorably with those of the analytical solutions. Accurate double ultraspherical spectral approximations for Poisson's and Helmholtz's equations are also noted. Numerical experiments show that spectral approximation based on Chebyshev polynomials of the first kind is not always better than others based on ultraspherical polynomials.
IMPLICIT DUAL CONTROL BASED ON PARTICLE FILTERING AND FORWARD DYNAMIC PROGRAMMING.
Bayard, David S; Schumitzky, Alan
2010-03-01
This paper develops a sampling-based approach to implicit dual control. Implicit dual control methods synthesize stochastic control policies by systematically approximating the stochastic dynamic programming equations of Bellman, in contrast to explicit dual control methods that artificially induce probing into the control law by modifying the cost function to include a term that rewards learning. The proposed implicit dual control approach is novel in that it combines a particle filter with a policy-iteration method for forward dynamic programming. The integration of the two methods provides a complete sampling-based approach to the problem. Implementation of the approach is simplified by making use of a specific architecture denoted as an H-block. Practical suggestions are given for reducing computational loads within the H-block for real-time applications. As an example, the method is applied to the control of a stochastic pendulum model having unknown mass, length, initial position and velocity, and unknown sign of its dc gain. Simulation results indicate that active controllers based on the described method can systematically improve closed-loop performance with respect to other more common stochastic control approaches.
Simplified method for detecting tritium contamination in plants and soil
Andraski, Brian J.; Sandstrom, M.W.; Michel, R.L.; Radyk, J.C.; Stonestrom, David A.; Johnson, M.J.; Mayers, C.J.
2003-01-01
Cost-effective methods are needed to identify the presence and distribution of tritium near radioactive waste disposal and other contaminated sites. The objectives of this study were to (i) develop a simplified sample preparation method for determining tritium contamination in plants and (ii) determine if plant data could be used as an indicator of soil contamination. The method entailed collection and solar distillation of plant water from foliage, followed by filtration and adsorption of scintillation-interfering constituents on a graphite-based solid phase extraction (SPE) column. The method was evaluated using samples of creosote bush [Larrea tridentata (Sessé & Moc. ex DC.) Coville], an evergreen shrub, near a radioactive disposal area in the Mojave Desert. Laboratory tests showed that a 2-g SPE column was necessary and sufficient for accurate determination of known tritium concentrations in plant water. Comparisons of tritium concentrations in plant water determined with the solar distillation–SPE method and the standard (and more laborious) toluene-extraction method showed no significant difference between methods. Tritium concentrations in plant water and in water vapor of root-zone soil also showed no significant difference between methods. Thus, the solar distillation–SPE method provides a simple and cost-effective way to identify plant and soil contamination. The method is of sufficient accuracy to facilitate collection of plume-scale data and optimize placement of more sophisticated (and costly) monitoring equipment at contaminated sites. Although work to date has focused on one desert plant, the approach may be transferable to other species and environments after site-specific experiments.
Fang, Cheng; Butler, David Lee
2013-05-01
In this paper, an innovative method for CMM (Coordinate Measuring Machine) self-calibration is proposed. In contrast to conventional CMM calibration, which relies heavily on a high precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact which is fabricated with commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. In mathematical terms, the number of samples can be minimized by eliminating the redundant equations among those configured by the experimental data array. The section lengths of the artefact are measured at arranged positions, with which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short of one equation, which can be supplemented by either measuring the total length of the artefact with a higher-precision CMM or calibrating the single point error at the extreme position with a laser interferometer. In this paper, the latter is selected. With spline interpolation, the error compensation curve can be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that, with the error compensation curve, the uncertainty of the measurement can be reduced to 50%.
An improved adaptive weighting function method for State Estimation in Power Systems with VSC-MTDC
NASA Astrophysics Data System (ADS)
Zhao, Kun; Yang, Xiaonan; Lang, Yansheng; Song, Xuri; Wang, Minkun; Luo, Yadi; Wu, Lingyun; Liu, Peng
2017-04-01
This paper presents an effective approach for state estimation in power systems that include multi-terminal voltage source converter based high voltage direct current (VSC-MTDC) links, called the improved adaptive weighting function method. The proposed approach is simplified in that the VSC-MTDC system is solved first, followed by the AC system, because the new state estimation method only changes the weights and keeps the matrix dimension unchanged. Accurate and fast convergence of the AC/DC system can be realized by the adaptive weighting function method. This method also provides technical support for the simulation analysis and accurate regulation of AC/DC systems. Both theoretical analysis and numerical tests verify the practicability, validity and convergence of the new method.
Study on photon transport problem based on the platform of molecular optical simulation environment.
Peng, Kuan; Gao, Xinbo; Liang, Jimin; Qu, Xiaochao; Ren, Nunu; Chen, Xueli; Ma, Bin; Tian, Jie
2010-01-01
As an important molecular imaging modality, optical imaging has attracted increasing attention in the recent years. Since the physical experiment is usually complicated and expensive, research methods based on simulation platforms have obtained extensive attention. We developed a simulation platform named Molecular Optical Simulation Environment (MOSE) to simulate photon transport in both biological tissues and free space for optical imaging based on noncontact measurement. In this platform, Monte Carlo (MC) method and the hybrid radiosity-radiance theorem are used to simulate photon transport in biological tissues and free space, respectively, so both contact and noncontact measurement modes of optical imaging can be simulated properly. In addition, a parallelization strategy for MC method is employed to improve the computational efficiency. In this paper, we study the photon transport problems in both biological tissues and free space using MOSE. The results are compared with Tracepro, simplified spherical harmonics method (SP(n)), and physical measurement to verify the performance of our study method on both accuracy and efficiency.
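As a hedged, heavily reduced illustration of the Monte Carlo photon-transport step (a free path length sampled from the attenuation coefficient, followed by absorption or isotropic re-scattering), the toy 1-D slab model below uses invented optical properties and is not the MOSE implementation.

import numpy as np

rng = np.random.default_rng(0)

def mc_photon_slab(n_photons, mu_a=0.1, mu_s=10.0, slab=1.0):
    """Toy MC: isotropic scattering in a 1-D slab, returns fraction absorbed.
    mu_a, mu_s in 1/mm, slab thickness in mm (illustrative values only)."""
    mu_t = mu_a + mu_s
    absorbed = 0
    for _ in range(n_photons):
        z, cos_t = 0.0, 1.0
        while 0.0 <= z <= slab:
            step = -np.log(rng.random()) / mu_t       # sampled free path length
            z += step * cos_t
            if not 0.0 <= z <= slab:
                break                                 # photon escapes the slab
            if rng.random() < mu_a / mu_t:            # absorption event
                absorbed += 1
                break
            cos_t = 2.0 * rng.random() - 1.0          # isotropic re-direction
    return absorbed / n_photons

print(mc_photon_slab(2000))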
The Seepage Simulation of Single Hole and Composite Gas Drainage Based on LB Method
NASA Astrophysics Data System (ADS)
Chen, Yanhao; Zhong, Qiu; Gong, Zhenzhao
2018-01-01
Gas drainage is the most effective method to prevent and control coal mine gas dynamic disasters. It is therefore very important to study the seepage law of gas in fissured coal. The lattice Boltzmann (LB) method is a simplified micro-scale computational model that is especially suitable for studying seepage problems. Based on a mathematical model of fracture seepage during single-hole gas drainage, numerical simulations of the gas flow during drainage are carried out with the LB method. Gas pressure contours, flow path diagrams and flow velocity vector diagrams are mapped for single-hole drainage and for combined drainage with symmetric and asymmetric slots of different widths, and the influence of the various working conditions on the gas seepage field is analysed. An effective drainage arrangement with a central hole and slots on both sides is discussed, and a preliminary exploration of combined gas drainage is also carried out.
New method for designing serial resonant power converters
NASA Astrophysics Data System (ADS)
Hinov, Nikolay
2017-12-01
In the current work, a comprehensive method for the design of serial resonant energy converters is presented. The method is based on a new simplified approach to the analysis of this kind of power electronic device. It rests on assuming resonant operation when finding the relation between input and output voltage, regardless of the actual operating mode (controlling frequency below or above the resonant frequency). This approach is named the `quasiresonant method of analysis', because it is based on assuming that all operational modes are `sort of' resonant modes. The error introduced by this hypothesis is estimated and compared with the classic analysis. The quasiresonant method of analysis offers two main advantages: speed and ease in designing the presented power circuits. Hence it is very useful in practice and in teaching Power Electronics. Its applicability is proven with mathematical modelling and computer simulation.
Probabilistic finite elements for transient analysis in nonlinear continua
NASA Technical Reports Server (NTRS)
Liu, W. K.; Belytschko, T.; Mani, A.
1985-01-01
The probabilistic finite element method (PFEM), which is a combination of finite element methods and second-moment analysis, is formulated for linear and nonlinear continua with inhomogeneous random fields. Analogous to the discretization of the displacement field in finite element methods, the random field is also discretized. The formulation is simplified by transforming the correlated variables to a set of uncorrelated variables through an eigenvalue orthogonalization. Furthermore, it is shown that a reduced set of the uncorrelated variables is sufficient for the second-moment analysis. Based on the linear formulation of the PFEM, the method is then extended to transient analysis in nonlinear continua. The accuracy and efficiency of the method are demonstrated by application to a one-dimensional, elastic/plastic wave propagation problem. The moments calculated compare favorably with those obtained by Monte Carlo simulation. Also, the procedure is amenable to implementation in deterministic FEM-based computer programs.
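A small sketch of the eigenvalue orthogonalization step: diagonalising an assumed covariance matrix turns correlated random variables into uncorrelated ones, after which only the largest-eigenvalue modes need be kept for the second-moment analysis. The covariance values are placeholders, not data from the paper.

import numpy as np

# Illustrative covariance of three correlated random field values.
cov = np.array([[1.0, 0.6, 0.3],
                [0.6, 1.0, 0.6],
                [0.3, 0.6, 1.0]])

# Eigenvalue orthogonalization: y = Phi^T x has a diagonal covariance Lambda.
lam, phi = np.linalg.eigh(cov)
cov_y = phi.T @ cov @ phi
print(np.round(cov_y, 10))                 # diagonal, entries are the eigenvalues
print("retain the largest modes first:", lam[::-1])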
Novel optical interconnect devices and coupling methods applying self-written waveguide technology
NASA Astrophysics Data System (ADS)
Nakama, Kenichi; Mikami, Osamu
2011-05-01
For use in cost-effective optical interconnection of opto-electronic printed wiring boards (OE-PWBs), we have developed novel optical interconnect devices and coupling methods that simplify board-to-board optical interconnects. All of these are based on self-written waveguide (SWW) technology using the mask-transfer method with light-curable resin. This method enables fabrication of arrayed M × N optical channels with one shot of UV light. Very precise patterns, for example optical rods with diameters of 50 μm to 500 μm, can be easily fabricated. The length of the fabricated patterns, typically up to about 1000 μm, can be controlled by a spacer placed between the photomask and the substrate. Using these technologies, several new optical interfaces have been demonstrated. These are a chip VCSEL with an optical output rod and new coupling methods of "plug-in" alignment and "optical socket" based on SWW.
GOMA: functional enrichment analysis tool based on GO modules
Huang, Qiang; Wu, Ling-Yun; Wang, Yong; Zhang, Xiang-Sun
2013-01-01
Analyzing the function of gene sets is a critical step in interpreting the results of high-throughput experiments in systems biology. A variety of enrichment analysis tools have been developed in recent years, but most output a long list of significantly enriched terms that are often redundant, making it difficult to extract the most meaningful functions. In this paper, we present GOMA, a novel enrichment analysis method based on the new concept of enriched functional Gene Ontology (GO) modules. With this method, we systematically revealed functional GO modules, i.e., groups of functionally similar GO terms, via an optimization model and then ranked them by enrichment scores. Our new method simplifies enrichment analysis results by reducing redundancy, thereby preventing inconsistent enrichment results among functionally similar terms and providing more biologically meaningful results. PMID:23237213
Weather data for simplified energy calculation methods. Volume II. Middle United States: TRY data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olsen, A.R.; Moreno, S.; Deringer, J.
1984-08-01
The objective of this report is to provide a source of weather data for direct use with a number of simplified energy calculation methods available today. Complete weather data for a number of cities in the United States are provided for use in the following methods: degree hour, modified degree hour, bin, modified bin, and variable degree day. This report contains sets of weather data for 22 cities in the continental United States using Test Reference Year (TRY) source weather data. The weather data at each city has been summarized in a number of ways to provide differing levels of detail necessary for alternative simplified energy calculation methods. Weather variables summarized include dry bulb and wet bulb temperature, percent relative humidity, humidity ratio, wind speed, percent possible sunshine, percent diffuse solar radiation, total solar radiation on horizontal and vertical surfaces, and solar heat gain through standard DSA glass. Monthly and annual summaries, in some cases by time of day, are available. These summaries are produced in a series of nine computer generated tables.
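For illustration of how such summaries feed the simplified methods, the sketch below computes a variable-base heating degree-day value and a dry-bulb temperature bin count from an invented day of hourly data; the base temperature and bin width are common choices, not values taken from the report.

import numpy as np

hourly_db = 10.0 + 8.0 * np.sin(np.linspace(0, 2 * np.pi, 24))  # one invented day, deg C
base = 18.3   # common balance-point temperature (about 65 deg F)

# Variable-base degree days: positive part of (base - mean daily temperature).
heating_dd = max(base - hourly_db.mean(), 0.0)

# Bin method: count hours falling in 2.5 deg C wide dry-bulb bins.
bins = np.arange(-5.0, 30.0, 2.5)
hours_per_bin, _ = np.histogram(hourly_db, bins=bins)

print(round(heating_dd, 2), hours_per_bin)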
Coniferous canopy BRF simulation based on 3-D realistic scene.
Wang, Xin-Yun; Guo, Zhi-Feng; Qin, Wen-Han; Sun, Guo-Qing
2011-09-01
It is difficult for computer simulation methods to study the radiation regime at large scales. A simplified coniferous model was investigated in the present study. It makes computer simulation methods such as L-systems and the radiosity-graphics combined method (RGM) more powerful in remote sensing of heterogeneous coniferous forests over a large-scale region. L-systems are applied to render 3-D coniferous forest scenarios, and the RGM model was used to calculate the BRF (bidirectional reflectance factor) in visible and near-infrared regions. Results of this study show that in most cases the two agreed well, both at the tree level and at the forest level.
Coniferous Canopy BRF Simulation Based on 3-D Realistic Scene
NASA Technical Reports Server (NTRS)
Wang, Xin-yun; Guo, Zhi-feng; Qin, Wen-han; Sun, Guo-qing
2011-01-01
It is difficult for computer simulation methods to study the radiation regime at large scales. A simplified coniferous model was investigated in the present study. It makes computer simulation methods such as L-systems and the radiosity-graphics combined method (RGM) more powerful in remote sensing of heterogeneous coniferous forests over a large-scale region. L-systems is applied to render 3-D coniferous forest scenarios, and the RGM model is used to calculate the BRF (bidirectional reflectance factor) in the visible and near-infrared regions. Results in this study show that in most cases the two sets of results agree well; at both the tree and forest levels, the results are also good.
Multi-Fidelity Framework for Modeling Combustion Instability
2016-07-27
... generated from the reduced-domain dataset. Evaluations of the framework are performed based on simplified test problems for a model rocket combustor showing ...
NASA Astrophysics Data System (ADS)
van Daal-Rombouts, Petra; Sun, Siao; Langeveld, Jeroen; Bertrand-Krajewski, Jean-Luc; Clemens, François
2016-07-01
Optimisation or real time control (RTC) studies in wastewater systems increasingly require rapid simulations of sewer systems in extensive catchments. To reduce the simulation time, calibrated simplified models are applied, with the performance generally judged by the goodness of fit of the calibration. In this research, the performance of three simplified models and a full hydrodynamic (FH) model for two catchments is compared based on the correct determination of CSO event occurrences and of the total volumes discharged to the surface water. Simplified model M1 consists of a rainfall runoff outflow (RRO) model only. M2 combines the RRO model with a static reservoir model for the sewer behaviour. M3 comprises the RRO model and a dynamic reservoir model. The dynamic reservoir characteristics were derived from FH model simulations. It was found that M2 and M3 are able to describe the sewer behaviour of the catchments, contrary to M1. The preferred model structure depends on the quality of the information (geometrical database and monitoring data) available for the design and calibration of the model. Finally, calibrated simplified models are shown to be preferable to uncalibrated FH models when performing optimisation or RTC studies.
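As an illustration of the static-reservoir idea behind a model such as M2, the toy sketch below routes RRO inflow through a single storage with a fixed pumping capacity and spills the excess as CSO volume. The function name, volumes and capacities are hypothetical and not taken from the paper; an event occurrence corresponds to a non-zero entry in the spill series, and the total discharged volume is its sum, which are exactly the two quantities the abstract uses to compare the models.

```python
import numpy as np

def static_reservoir_cso(inflow_m3, storage_cap_m3, pump_cap_m3):
    """Toy static-reservoir sewer model (illustrative only).

    inflow_m3      : runoff inflow volume per time step (from an RRO model)
    storage_cap_m3 : in-sewer storage volume before the CSO weir spills
    pump_cap_m3    : volume pumped to the treatment plant per time step
    Returns the CSO spill volume per time step.
    """
    storage = 0.0
    spills = np.zeros_like(inflow_m3, dtype=float)
    for t, q_in in enumerate(inflow_m3):
        storage += q_in                       # runoff enters the sewer
        storage -= min(storage, pump_cap_m3)  # pumped to treatment first
        if storage > storage_cap_m3:          # excess spills as CSO
            spills[t] = storage - storage_cap_m3
            storage = storage_cap_m3
    return spills

# Example: a single hypothetical storm event
rain_inflow = np.array([0, 50, 400, 900, 600, 200, 50, 0], dtype=float)
spill = static_reservoir_cso(rain_inflow, storage_cap_m3=800, pump_cap_m3=100)
print(spill, "total CSO volume:", spill.sum())
```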
Safieddine, Doha; Chkeir, Aly; Herlem, Cyrille; Bera, Delphine; Collart, Michèle; Novella, Jean-Luc; Dramé, Moustapha; Hewson, David J; Duchêne, Jacques
2017-11-01
Falls are a major cause of death in older people. One method used to predict falls is analysis of Centre of Pressure (CoP) displacement, which provides a measure of balance quality. The Balance Quality Tester (BQT) is a device based on a commercial bathroom scale that calculates instantaneous values of vertical ground reaction force (Fz) as well as the CoP in both anteroposterior (AP) and mediolateral (ML) directions. The entire testing process needs to take no longer than 12 s to ensure subject compliance, making it vital that calculations related to balance are only calculated for the period when the subject is static. In the present study, a method is presented to detect the stabilization period after a subject has stepped onto the BQT. Four different phases of the test are identified (stepping-on, stabilization, balancing, stepping-off), ensuring that subjects are static when parameters from the balancing phase are calculated. The method, based on a simplified cumulative sum (CUSUM) algorithm, could detect the change between unstable and stable stance. The time taken to stabilize significantly affected the static balance variables of surface area and trajectory velocity, and was also related to Timed-up-and-Go performance. Such a finding suggests that the time to stabilize could be a worthwhile parameter to explore as a potential indicator of balance problems and fall risk in older people. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
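A minimal sketch of a one-sided CUSUM change detector of the kind the abstract refers to is given below: it flags the point where the vertical force signal has stayed close to the subject's weight for long enough to call the stance stable. The synthetic force trace, the thresholds and the function name are illustrative assumptions, not the BQT's actual parameters or algorithm.

```python
import numpy as np

def cusum_stabilization_index(signal, target, slack, threshold):
    """One-sided CUSUM that accumulates credit while the signal stays within
    `slack` of `target`; the index where the cumulative sum first exceeds
    `threshold` marks the start of sustained stable stance."""
    s = 0.0
    for i, x in enumerate(signal):
        s = max(0.0, s + (slack - abs(x - target)))
        if s > threshold:
            return i
    return None

# Toy vertical-force trace: noisy stepping-on transient, then quiet standing
rng = np.random.default_rng(0)
fz = np.concatenate([700 + 80 * rng.standard_normal(50),   # unstable phase
                     700 + 5 * rng.standard_normal(200)])  # stable phase
# Prints an index a little after sample 50, where quiet standing begins.
print(cusum_stabilization_index(fz, target=700, slack=20, threshold=200))
```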
Simple room-temperature preparation of high-yield large-area graphene oxide
Huang, NM; Lim, HN; Chia, CH; Yarmo, MA; Muhamad, MR
2011-01-01
Graphene has attracted much attention from researchers due to its interesting mechanical, electrochemical, and electronic properties. It has many potential applications such as polymer filler, sensor, energy conversion, and energy storage devices. Graphene-based nanocomposites are under an intense spotlight amongst researchers. A large amount of graphene is required for preparation of such samples. Lately, graphene-based materials have been the target for fundamental life science investigations. Despite graphene being a much sought-after raw material, the drawbacks in the preparation of graphene are that it is a challenge amongst researchers to produce this material in a scalable quantity and that there is a concern about its safety. Thus, a simple and efficient method for the preparation of graphene oxide (GO) is greatly desired to address these problems. In this work, one-pot chemical oxidation of graphite was carried out at room temperature for the preparation of large-area GO with ~100% conversion. This high-conversion preparation of large-area GO was achieved using a simplified Hummer’s method from large graphite flakes (an average flake size of 500 μm). It was found that a high degree of oxidation of graphite could be realized by stirring graphite in a mixture of acids and potassium permanganate, resulting in GO with large lateral dimension and area, which could reach up to 120 μm and ~8000 μm2, respectively. The simplified Hummer’s method provides a facile approach for the preparation of large-area GO. PMID:22267928
Simplified Identification of mRNA or DNA in Whole Cells
NASA Technical Reports Server (NTRS)
Almeida, Eduardo; Kadambi, Geeta
2007-01-01
A recently invented method of detecting a selected messenger ribonucleic acid (mRNA) or deoxyribonucleic acid (DNA) sequence offers two important advantages over prior such methods: it is simpler and can be implemented by means of compact equipment. The simplification and miniaturization achieved by this invention are such that this method is suitable for use outside laboratories, in field settings in which space and power supplies may be limited. The present method is based partly on hybridization of nucleic acid, which is a powerful technique for detection of specific complementary nucleic acid sequences and is increasingly being used for detection of changes in gene expression in microarrays containing thousands of gene probes.
Electric Power Distribution System Model Simplification Using Segment Substitution
Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat; ...
2017-09-20
Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). Finally, in contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.
Discontinuous Galerkin Methods for NonLinear Differential Systems
NASA Technical Reports Server (NTRS)
Barth, Timothy; Mansour, Nagi (Technical Monitor)
2001-01-01
This talk considers simplified finite element discretization techniques for first-order systems of conservation laws equipped with a convex (entropy) extension. Using newly developed techniques in entropy symmetrization theory, simplified forms of the discontinuous Galerkin (DG) finite element method have been developed and analyzed. The use of symmetrization variables yields numerical schemes which inherit global entropy stability properties of the PDE (partial differential equation) system. Central to the development of the simplified DG methods is the Eigenvalue Scaling Theorem which characterizes right symmetrizers of an arbitrary first-order hyperbolic system in terms of scaled eigenvectors of the corresponding flux Jacobian matrices. A constructive proof is provided for the Eigenvalue Scaling Theorem with detailed consideration given to the Euler equations of gas dynamics and extended conservation law systems derivable as moments of the Boltzmann equation. Using results from kinetic Boltzmann moment closure theory, we then derive and prove energy stability for several approximate DG fluxes which have practical and theoretical merit.
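The symmetrization property referred to here is the classical entropy-variables result, stated generically below (not in the paper's notation): for a conservation law system with a convex entropy U, the change of variables to the entropy variables v renders the transformed system symmetric.

\[
\frac{\partial u}{\partial t} + \sum_{i} \frac{\partial f_i(u)}{\partial x_i} = 0,
\qquad
v = \left(\frac{\partial U}{\partial u}\right)^{T},
\qquad
\frac{\partial u}{\partial v}\,\frac{\partial v}{\partial t}
+ \sum_{i} \frac{\partial f_i}{\partial v}\,\frac{\partial v}{\partial x_i} = 0,
\]

where the matrix multiplying the time derivative is symmetric positive definite and each matrix multiplying a spatial derivative is symmetric; schemes formulated in v therefore inherit the entropy stability of the PDE system.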
Electric Power Distribution System Model Simplification Using Segment Substitution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat
Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). Finally, in contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.
Wahengbam, Brucelee; Wahengbam, Pragya; Tikku, Aseem Prakash
2014-01-01
This article suggests a simplified technique of orthograde MTA obturation in less accessible canal(s) of posterior teeth without using costly ultrasonics or a specialised carrier. Essentially, a few finger pluggers, absorbent points and a simple canal projection method were used. The orifice(s) of the selected canal(s) to be obturated with MTA were projected onto the external occlusal surface for easy delivery and predictable instrumentation. The idea was based on 'easy access', 'working one canal with one mix at one time', 'thorough condensation' and 'removal of excess moisture'. In case I, the palatal canal of tooth no. 2 with gross apical perforation and suspected vertical root fracture (VRF) was obturated with MTA. In case II, tooth no. 19 presented with an incomplete furcal fracture extending into the canal and was obturated with MTA in all 3 canals unitarily. Dense homogenous MTA obturation was achieved and both cases healed uneventfully.
NASA Technical Reports Server (NTRS)
Kim, S.-W.; Chen, C.-P.
1988-01-01
The paper presents a multiple-time-scale turbulence model of a single point closure and a simplified split-spectrum method. Consideration is given to a class of turbulent boundary layer flows and of separated and/or swirling elliptic turbulent flows. For the separated and/or swirling turbulent flows, the present turbulence model yielded significantly improved computational results over those obtained with the standard k-epsilon turbulence model.
Novel Discretization Schemes for the Numerical Simulation of Membrane Dynamics
2012-09-13
Experimental data therefore plays a key role in validation. A wide variety of methods for building a simulation that meets the listed requirements are ... Despite the intrinsic nonlinearity of true membranes, simplifying assumptions may be appropriate for some applications. Based on these possible assumptions ... particles determines the kinetic energy of the system. Mass lumping at the particles is intrinsic (the consistent mass treatment of FEM is not ...
Work performed on velocity profiles in a hot jet by simplified RELIEF
NASA Technical Reports Server (NTRS)
Miles, Richard B.; Lempert, Walter R.
1991-01-01
The Raman Excitation + Laser Induced Electronic Fluorescence (RELIEF) velocity measurement method is based on vibrationally tagging oxygen molecules and observing their displacement after a short period of time. Two papers that discuss the use and implementation of the RELIEF technique are presented in this final report. Additionally, the end of the report contains a listing of the personnel involved and the reference documents used in the production of this final report.
Economic method for helical gear flank surface characterisation
NASA Astrophysics Data System (ADS)
Koulin, G.; Reavie, T.; Frazer, R. C.; Shaw, B. A.
2018-03-01
Typically the quality of a gear pair is assessed based on simplified geometric tolerances which do not always correlate with functional performance. In order to identify and quantify functional performance based parameters, further development of the gear measurement approach is required. A methodology for interpolation of the full active helical gear flank surface, from sparse line measurements, is presented. The method seeks to identify the minimum number of line measurements required to sufficiently characterise an active gear flank. In the form-ground gear example presented, a single helix and three profile line measurements were considered acceptable. The resulting surfaces can be used to simulate the meshing engagement of a gear pair and therefore provide insight into functional performance based parameters. The assessment of quality can therefore be based on the predicted performance in the context of the application.
Improvements in soft gelatin capsule sample preparation for USP-based simethicone FTIR analysis.
Hargis, Amy D; Whittall, Linda B
2013-02-23
Due to the absence of a significant chromophore, Simethicone raw material and finished product analysis is achieved using a FTIR-based method that quantifies the polydimethylsiloxane (PDMS) component of the active ingredient. The method can be found in the USP monographs for several dosage forms of Simethicone-containing pharmaceutical products. For soft gelatin capsules, the PDMS assay values determined using the procedure described in the USP method were variable (%RSDs from 2 to 9%) and often lower than expected based on raw material values. After investigation, it was determined that the extraction procedure used for sample preparation was causing loss of material to the container walls due to the hydrophobic nature of PDMS. Evaluation revealed that a simple dissolution of the gelatin capsule fill in toluene provided improved assay results (%RSDs≤0.5%) as well as a simplified and rapid sample preparation. Copyright © 2012 Elsevier B.V. All rights reserved.
Ice Cores Dating With a New Inverse Method Taking Account of the Flow Modeling Errors
NASA Astrophysics Data System (ADS)
Lemieux-Dudon, B.; Parrenin, F.; Blayo, E.
2007-12-01
Deep ice cores extracted from Antarctica or Greenland recorded a wide range of past climatic events. In order to contribute to the understanding of the Quaternary climate system, the calculation of an accurate depth-age relationship is a crucial point. Up to now, ice chronologies for deep ice cores estimated with inverse approaches have been based on quite simplified ice-flow models that fail to reproduce flow irregularities and consequently to respect all available sets of age markers. We describe in this paper a new inverse method that takes into account the model uncertainty in order to circumvent the restrictions linked to the use of simplified flow models. This method uses first guesses on two physical flow quantities, the ice thinning function and the accumulation rate, and then identifies correction functions on both. We highlight two major benefits brought by this new method: first, the ability to respect large sets of observations and, as a consequence, the feasibility of estimating a synchronized common ice chronology for several cores at the same time. This inverse approach relies on a Bayesian framework. To respect the positivity constraint on the searched correction functions, we assume lognormal probability distributions for the background errors, but also for one particular set of the observation errors. We test this new inversion method on three cores simultaneously (the two EPICA cores, DC and DML, and the Vostok core) and we assimilate more than 150 observations (e.g., age markers, stratigraphic links, ...). We analyze the sensitivity of the solution with respect to the background information, especially the prior error covariance matrix. The confidence intervals, based on the posterior covariance matrix calculation, are estimated on the correction functions and, for the first time, on the overall output chronologies.
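In variational form, a Bayesian inversion with lognormal background errors of the kind described here typically minimizes a cost of the following generic shape (the notation is illustrative, not the authors'): the correction functions X are penalized in log space against the first guess X_b with background covariance B, while the observations y are compared with the model counterpart H(X) under observation covariance R; for the observation types that are also treated as lognormal, the corresponding residuals would likewise be taken in log space.

\[
J(X) \;=\; \tfrac{1}{2}\bigl(\ln X - \ln X_b\bigr)^{T} B^{-1} \bigl(\ln X - \ln X_b\bigr)
\;+\; \tfrac{1}{2}\bigl(y - H(X)\bigr)^{T} R^{-1} \bigl(y - H(X)\bigr).
\]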
Telko, Martin J; Hickey, Anthony J
2007-10-01
Inverse gas chromatography (IGC) has been employed as a research tool for decades. Despite this record of use and proven utility in a variety of applications, the technique is not routinely used in pharmaceutical research. In other fields the technique has flourished. IGC is experimentally relatively straightforward, but analysis requires that certain theoretical assumptions are satisfied. The assumptions made to acquire some of the recently reported data are somewhat modified compared to initial reports. Most publications in the pharmaceutical literature have made use of a simplified equation for the determination of acid/base surface properties resulting in parameter values that are inconsistent with prior methods. In comparing the surface properties of different batches of alpha-lactose monohydrate, new data has been generated and compared with literature to allow critical analysis of the theoretical assumptions and their importance to the interpretation of the data. The commonly used (simplified) approach was compared with the more rigorous approach originally outlined in the surface chemistry literature. (c) 2007 Wiley-Liss, Inc.
Optimization of wearable microwave antenna with simplified electromagnetic model of the human body
NASA Astrophysics Data System (ADS)
Januszkiewicz, Łukasz; Barba, Paolo Di; Hausman, Sławomir
2017-12-01
In this paper the problem of optimization design of a microwave wearable antenna is investigated. Reference is made to a specific antenna design, a wideband Vee antenna whose geometry is characterized by 6 parameters. These parameters were automatically adjusted with the evolution-strategy-based algorithm EStra to obtain impedance matching of the antenna located in the proximity of the human body. The antenna was designed to operate in the ISM (industrial, scientific, medical) band, which covers the frequency range of 2.4 GHz up to 2.5 GHz. The optimization procedure used a full-wave simulator based on the finite-difference time-domain method with a simplified human body model. In the optimization procedure, small movements of the antenna towards or away from the human body, which are likely to happen during real use, were considered. The stability of the antenna parameters irrespective of the movements of the user's body is an important factor in wearable antenna design. The optimization procedure yielded good impedance matching for a given range of antenna distances with respect to the human body.
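The abstract names the evolution-strategy algorithm EStra; the sketch below is only a generic (1+1) evolution strategy with 1/5th-rule-style step adaptation, not EStra itself, and the objective function is a placeholder for the worst-case reflection coefficient over several antenna-body distances that a full-wave FDTD solver would actually provide. All names and numbers in it are assumptions.

```python
import numpy as np

def one_plus_one_es(objective, x0, sigma=0.1, iters=200, seed=1):
    """Generic (1+1) evolution strategy with multiplicative step-size control."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = objective(x)
    for _ in range(iters):
        cand = x + sigma * rng.standard_normal(x.size)   # Gaussian mutation
        fc = objective(cand)
        if fc < fx:                    # accept improvement, expand step
            x, fx, sigma = cand, fc, sigma * 1.22
        else:                          # reject, shrink step
            sigma *= 0.82
    return x, fx

# Placeholder objective: worst-case "reflection" over a set of antenna-body
# distances, standing in for the FDTD-computed |S11| in dB.
def worst_case_s11(geom_params):
    distances_mm = [2.0, 5.0, 10.0]
    return max(-20.0 + 0.5 * np.sum((geom_params - d / 10.0) ** 2)
               for d in distances_mm)

best_geometry, best_value = one_plus_one_es(worst_case_s11, x0=np.zeros(6))
print(best_geometry, best_value)
```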
Yan, Xuemei; Zhang, Qianying; Feng, Fang
2016-04-01
Da-Huang-Xiao-Shi decoction, consisting of Rheum officinale Baill, Mirabilitum, Phellodendron amurense Rupr. and Gardenia jasminoides Ellis, is a traditional Chinese medicine used for the treatment of jaundice. As described in "Jin Kui Yao Lue", a traditional multistep decoction of Da-Huang-Xiao-Shi decoction was required, while a simplified one-step decoction has been used in recent reports. To investigate the chemical difference between the decoctions obtained by the traditional and simplified preparations, a sensitive and reliable approach of high-performance liquid chromatography coupled with diode-array detection and electrospray ionization time-of-flight mass spectrometry was established. As a result, a total of 105 compounds were detected and identified. Analysis of the chromatogram profiles of the two decoctions showed that many compounds in the decoction of simplified preparation had changed obviously compared with those in the traditional preparation. The changes in constituents are bound to cause differences in the therapeutic effects of the two decoctions. The present study demonstrated that certain preparation methods significantly affect the holistic quality of traditional Chinese medicines and that the use of a suitable preparation method is crucial for these medicines to produce their specific clinical curative effects. These results elucidate the scientific basis of traditional preparation methods in Chinese medicines. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Attitude control of the space construction base: A modular approach
NASA Technical Reports Server (NTRS)
Oconnor, D. A.
1982-01-01
A planar model of a space base and one module is considered. For this simplified system, a feedback controller which is compatible with the modular construction method is described. The systems dynamics are decomposed into two parts corresponding to base and module. The information structure of the problem is non-classical in that not all system information is supplied to each controller. The base controller is designed to accommodate structural changes that occur as the module is added and the module controller is designed to regulate its own states and follow commands from the base. Overall stability of the system is checked by Liapunov analysis and controller effectiveness is verified by computer simulation.
2018-03-01
... of a Simplified Renal Replacement Therapy Suitable for Prolonged Field Care in a Porcine (Sus scrofa) Model of Acute Kidney Injury. Objectives/Background: Acute kidney injury (AKI) is a serious ...
A Simplified Technique for Evaluating Human "CCR5" Genetic Polymorphism
ERIC Educational Resources Information Center
Falteisek, Lukáš; Cerný, Jan; Janštová, Vanda
2013-01-01
To involve students in thinking about the problem of AIDS (which is important in the view of nondecreasing infection rates), we established a practical lab using a simplified adaptation of Thomas's (2004) method to determine the polymorphism of HIV co-receptor CCR5 from students' own epithelial cells. CCR5 is a receptor involved in inflammatory…
Analysis of temperature distribution in liquid-cooled turbine blades
NASA Technical Reports Server (NTRS)
Livingood, John N B; Brown, W Byron
1952-01-01
The temperature distribution in liquid-cooled turbine blades determines the amount of cooling required to reduce the blade temperature to permissible values at specified locations. This report presents analytical methods for computing temperature distributions in liquid-cooled turbine blades, or in simplified shapes used to approximate sections of the blade. The individual analyses are first presented in terms of their mathematical development. By means of numerical examples, comparisons are made between simplified and more complete solutions and the effects of several variables are examined. Nondimensional charts to simplify some temperature-distribution calculations are also given.
Lærum, Hallvard; Karlsen, Tom H; Faxvaag, Arild
2004-01-01
Background Most hospitals keep and update their paper-based medical records after introducing an electronic medical record or a hospital information system (HIS). This case report describes a HIS in a hospital where the paper-based medical records are scanned and eliminated. To evaluate the HIS comprehensively, the perspectives of medical secretaries and nurses are described as well as that of physicians. Methods We have used questionnaires and interviews to assess and compare frequency of use of the HIS for essential tasks, task performance and user satisfaction among medical secretaries, nurses and physicians. Results The medical secretaries use the HIS much more than the nurses and the physicians, and they consider that the electronic HIS greatly has simplified their work. The work of nurses and physicians has also become simplified, but they find less satisfaction with the system, particularly with the use of scanned document images. Conclusions Although the basis for reference is limited, the results support the assertion that replacing the paper-based medical record primarily benefits the medical secretaries, and to a lesser degree the nurses and the physicians. The varying results in the different employee groups emphasize the need for a multidisciplinary approach when evaluating a HIS. PMID:15488150
A simplified analytic form for generation of axisymmetric plasma boundaries
Luce, Timothy C.
2017-02-23
An improved method has been formulated for generating analytic boundary shapes as input for axisymmetric MHD equilibria. This method uses the family of superellipses as the basis function, as previously introduced. The improvements are a simplified notation, reduction of the number of simultaneous nonlinear equations to be solved, and the realization that not all combinations of input parameters admit a solution to the nonlinear constraint equations. The method tests for the existence of a self-consistent solution and, when no solution exists, it uses a deterministic method to find a nearby solution. As a result, examples of generation of boundaries, including tests with an equilibrium solver, are given.
NASA Technical Reports Server (NTRS)
Jones, Robert T
1937-01-01
A simplified treatment of the application of Heaviside's operational methods to problems of airplane dynamics is given. Certain graphical methods and logarithmic formulas that lessen the amount of computation involved are explained. The problem representing a gust disturbance or control manipulation is taken up and it is pointed out that in certain cases arbitrary control manipulations may be dealt with as though they imposed specific constraints on the airplane, thus avoiding the necessity of any integration. The application of the calculations described in the text is illustrated by several examples chosen to show the use of the methods and the practicability of the graphical and logarithmic computations described.
A simplified analytic form for generation of axisymmetric plasma boundaries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luce, Timothy C.
An improved method has been formulated for generating analytic boundary shapes as input for axisymmetric MHD equilibria. This method uses the family of superellipses as the basis function, as previously introduced. The improvements are a simplified notation, reduction of the number of simultaneous nonlinear equations to be solved, and the realization that not all combinations of input parameters admit a solution to the nonlinear constraint equations. The method tests for the existence of a self-consistent solution and, when no solution exists, it uses a deterministic method to find a nearby solution. As a result, examples of generation of boundaries, including tests with an equilibrium solver, are given.
Kuniya, Toshikazu; Sano, Hideki
2016-05-10
In mathematical epidemiology, age-structured epidemic models have usually been formulated as boundary-value problems of partial differential equations. On the other hand, in engineering, the backstepping method has recently been developed and widely studied by many authors. Using the backstepping method, we obtained a boundary feedback control which plays the role of a threshold criterion for predicting an increase or decrease of the newly infected population. Under the assumption that the period of infectiousness is the same for all infected individuals (that is, the recovery rate is given by the Dirac delta function multiplied by a sufficiently large positive constant), the prediction method simplifies to a comparison of the numbers of reported cases at the current and previous time steps. Our prediction method was applied to the reported cases per sentinel of influenza in Japan from 2006 to 2015 and its accuracy was 0.81 (404 correct predictions out of 500). This was higher than that of ARIMA models with different orders of the autoregressive part, differencing and moving-average process. In addition, a proposed method for estimating the number of reported cases, which is consistent with our prediction method, performed better than the best-fitted ARIMA model, ARIMA(1,1,0), in the sense of mean square error. Our prediction method based on the backstepping method can thus be simplified to a comparison of the numbers of reported cases at the current and previous time steps. In spite of its simplicity, it can provide a good prediction for the spread of influenza in Japan.
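The simplified criterion, predict an increase exactly when the current count exceeds the previous one, can be scored on any weekly case series; the short sketch below does this on a hypothetical series (the numbers are made up, not the Japanese sentinel data).

```python
def predict_and_score(cases):
    """Predict an increase at step t+1 iff cases[t] > cases[t-1] (the simplified
    criterion), then score the prediction against what actually happened."""
    correct = total = 0
    for t in range(1, len(cases) - 1):
        predicted_increase = cases[t] > cases[t - 1]
        actual_increase = cases[t + 1] > cases[t]
        correct += (predicted_increase == actual_increase)
        total += 1
    return correct / total

# Hypothetical weekly reported cases per sentinel over one season
weekly = [0.2, 0.3, 0.8, 2.1, 5.4, 9.8, 12.3, 10.1, 6.0, 2.2, 0.9, 0.4]
print(predict_and_score(weekly))
```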
The use of a projection method to simplify portal and hepatic vein segmentation in liver anatomy.
Huang, Shaohui; Wang, Boliang; Cheng, Ming; Huang, Xiaoyang; Ju, Ying
2008-12-01
In living donor liver transplantation, the volume of the potential graft must be measured to ensure sufficient liver function after surgery. Couinaud divided the liver into 8 functionally independent segments. However, this method is not simple to perform in 3D space directly. Thus, we propose a rapid method to segment the liver based on the hepatic vessel tree. The most important step of this method is vascular projection. By carefully selecting a projection plane, a 3D point can be fixed in the projection plane. This greatly helps in rapid classification. This method was validated by applying it to a 3D liver depicted on CT images, and the result was in good agreement with Couinaud's classification.
Yager’s ranking method for solving the trapezoidal fuzzy number linear programming
NASA Astrophysics Data System (ADS)
Karyati; Wutsqa, D. U.; Insani, N.
2018-03-01
In previous research, the authors studied the fuzzy simplex method for trapezoidal fuzzy number linear programming based on Maleki's ranking function. We found several results related to the optimality conditions of the fuzzy simplex method, the fuzzy Big-M method, the fuzzy two-phase method, and the sensitivity analysis. In this research, we study the fuzzy simplex method based on another ranking function, Yager's ranking function, and investigate the optimality conditions. Based on the results, Yager's ranking function does not behave like Maleki's ranking function: using Yager's function, the simplex method cannot work as well as when using Maleki's function. With Yager's function, the subtraction of two equal fuzzy numbers is not equal to zero. As a consequence, the optimality of a fuzzy simplex table cannot be detected, so the simplified fuzzy simplex procedure stalls and does not reach the optimum solution.
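One common form of Yager's ranking index is the α-cut centroid integral; for a trapezoidal fuzzy number Ã = (a, b, c, d) it reduces to a simple average (this reduction is standard, though the paper may use a different variant of the index):

\[
Y(\tilde{A}) \;=\; \int_{0}^{1} \frac{\inf \tilde{A}_{\alpha} + \sup \tilde{A}_{\alpha}}{2}\, d\alpha
\;=\; \int_{0}^{1} \frac{\bigl(a + \alpha(b-a)\bigr) + \bigl(d - \alpha(d-c)\bigr)}{2}\, d\alpha
\;=\; \frac{a+b+c+d}{4}.
\]

Note that under standard fuzzy arithmetic the difference of a trapezoidal number with itself is (a-d, b-c, c-b, d-a), which is not the crisp zero even though its ranking value is zero; this may be the behaviour the abstract identifies as preventing the optimum table from being detected.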
A Simplified Decision Support Approach for Evaluating Wetlands Ecosystem Services
We will be presenting a simplified approach to evaluating ecosystem services provided by freshwater wetlands restoration. Our approach is based on an existing functional assessment approach developed by Golet and Miller for the State of Rhode Island, and modified by Miller for ap...
Analysis of Different Cost Functions in the Geosect Airspace Partitioning Tool
NASA Technical Reports Server (NTRS)
Wong, Gregory L.
2010-01-01
A new cost function representing air traffic controller workload is implemented in the Geosect airspace partitioning tool. Geosect currently uses a combination of aircraft count and dwell time to select optimal airspace partitions that balance controller workload. This is referred to as the aircraft count/dwell time hybrid cost function. The new cost function is based on Simplified Dynamic Density, a measure of different aspects of air traffic controller workload. Three sectorizations are compared. These are the current sectorization, Geosect's sectorization based on the aircraft count/dwell time hybrid cost function, and Geosect's sectorization based on the Simplified Dynamic Density cost function. Each sectorization is evaluated for maximum and average workload along with workload balance, using the Simplified Dynamic Density as the workload measure. In addition, the Airspace Concept Evaluation System, a nationwide air traffic simulator, is used to determine the capacity and delay incurred by each sectorization. The sectorization resulting from the Simplified Dynamic Density cost function had a lower maximum workload measure than the other sectorizations, and the sectorization based on the combination of aircraft count and dwell time did a better job of balancing workload and balancing capacity. However, the current sectorization had the lowest average workload, highest sector capacity, and the least system delay.
NASA Astrophysics Data System (ADS)
Győri, Erzsébet; Gráczer, Zoltán; Tóth, László; Bán, Zoltán; Horváth, Tibor
2017-04-01
Liquefaction potential evaluations are generally made to assess the hazard from specific scenario earthquakes. These evaluations may estimate the potential in a binary fashion (yes/no), define a factor of safety or predict the probability of liquefaction given a scenario event. Usually the level of ground shaking is obtained from the results of PSHA. Although it is determined probabilistically, a single level of ground shaking is selected and used within the liquefaction potential evaluation. In contrast, fully probabilistic liquefaction potential assessment methods provide a complete picture of the liquefaction hazard, namely by taking into account the joint probability distribution of PGA and magnitude of earthquake scenarios, both of which are key inputs in the stress-based simplified methods. Kramer and Mayfield (2007) developed a fully probabilistic liquefaction potential evaluation method using a performance-based earthquake engineering (PBEE) framework. The results of the procedure are a direct estimate of the return period of liquefaction and the liquefaction hazard curves as a function of depth. The method combines the disaggregation matrices computed for different exceedance frequencies during probabilistic seismic hazard analysis with one of the recent models for the conditional probability of liquefaction. We have developed software for the assessment of performance-based liquefaction triggering on the basis of the Kramer and Mayfield method. Originally, the SPT-based probabilistic method of Cetin et al. (2004) was built into the procedure of Kramer and Mayfield to compute the conditional probability; however, there is no professional consensus about its applicability. Therefore we have included not only Cetin's method but also the Idriss and Boulanger (2012) SPT-based and the Boulanger and Idriss (2014) CPT-based procedures in our computer program. In 1956, a damaging earthquake of magnitude 5.6 occurred in Dunaharaszti, Hungary. Its epicenter was located about 5 km from the southern boundary of Budapest. The quake caused serious damage in the epicentral area and in the southern districts of the capital. The epicentral area of the earthquake is located along the Danube River. Sand boils were observed at some locations, indicating the occurrence of liquefaction. Because their exact locations were recorded at the time of the earthquake, in situ geotechnical measurements (CPT and SPT) could be performed at two sites (Dunaharaszti and Taksony). The different types of measurements enabled probabilistic liquefaction hazard computations at the two studied sites. We have compared the return periods of liquefaction computed using the different built-in simplified stress-based methods.
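In the performance-based framework of Kramer and Mayfield, the combination of the hazard disaggregation with a conditional triggering model takes, schematically, the following form (the notation here is illustrative, not the authors'):

\[
\Lambda_{\mathrm{liq}} \;=\; \sum_{i=1}^{N_a}\sum_{j=1}^{N_m}
P\!\left[ FS_L < 1 \,\middle|\, a_i,\, m_j \right]\,\Delta\lambda_{a_i,m_j},
\qquad
T_{R,\mathrm{liq}} \;=\; \frac{1}{\Lambda_{\mathrm{liq}}},
\]

where Δλ_{a_i,m_j} is the incremental mean annual rate of exceedance of the ground-motion level a_i contributed by magnitude bin m_j (from the PSHA disaggregation), and the conditional probability P[FS_L < 1 | a, m] is supplied by the chosen SPT- or CPT-based triggering model. Evaluating Λ_liq at every depth yields the liquefaction hazard curves mentioned in the abstract.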
Sensor placement in nuclear reactors based on the generalized empirical interpolation method
NASA Astrophysics Data System (ADS)
Argaud, J.-P.; Bouriquet, B.; de Caso, F.; Gong, H.; Maday, Y.; Mula, O.
2018-06-01
In this paper, we apply the so-called generalized empirical interpolation method (GEIM) to address the problem of sensor placement in nuclear reactors. This task is challenging due to the accumulation of a number of difficulties, such as the complexity of the underlying physics and the constraints on the admissible sensor locations and their number. As a result, the placement still relies strongly today on the know-how and experience of engineers from different areas of expertise. The present methodology contributes to making this process more systematic and, in turn, to simplifying and accelerating the procedure.
A feedback control strategy for the airfoil system under non-Gaussian colored noise excitation.
Huang, Yong; Tao, Gang
2014-09-01
The stability of a binary airfoil with feedback control under stochastic disturbances, a non-Gaussian colored noise, is studied in this paper. First, based on approximation theories and methods, the non-Gaussian colored noise is simplified to an Ornstein-Uhlenbeck process. Furthermore, via the stochastic averaging method and the logarithmic polar transformation, a one-dimensional diffusion process is obtained. Finally, by applying the boundary conditions, the largest Lyapunov exponent, which determines the almost-sure stability of the system, and the effective region of the control parameters are calculated.
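For reference, the Ornstein-Uhlenbeck process used as the simplified driving noise has the generic form below (generic correlation time τ and intensity σ; the paper's own parametrization may differ):

\[
d\eta(t) \;=\; -\frac{1}{\tau}\,\eta(t)\,dt \;+\; \sigma\, dW(t),
\qquad
\operatorname{E}[\eta] = 0,\qquad
\operatorname{Var}[\eta] = \frac{\sigma^{2}\tau}{2},\qquad
\operatorname{E}[\eta(t)\,\eta(t+s)] = \frac{\sigma^{2}\tau}{2}\, e^{-|s|/\tau}.
\]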
Compliant cantilevered micromold
Morales, Alfredo Martin [Pleasanton, CA; Domeier, Linda A [Danville, CA; Gonzales, Marcela G [Seattle, WA; Keifer, Patrick N [Livermore, CA; Garino, Terry Joseph [Albuquerque, NM
2006-08-15
A compliant cantilevered three-dimensional micromold is provided. The compliant cantilevered micromold is suitable for use in the replication of cantilevered microparts and greatly simplifies the replication of such cantilevered parts. The compliant cantilevered micromold may be used to fabricate microparts using casting or electroforming techniques. When the compliant micromold is used to fabricate electroformed cantilevered parts, the micromold will also comprise an electrically conducting base formed by a porous metal substrate that is embedded within the compliant cantilevered micromold. Methods for fabricating the compliant cantilevered micromold as well as methods of replicating cantilevered microparts using the compliant cantilevered micromold are also provided.
A feedback control strategy for the airfoil system under non-Gaussian colored noise excitation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Yong, E-mail: hy@njust.edu.cn, E-mail: taogang@njust.edu.cn; Tao, Gang, E-mail: hy@njust.edu.cn, E-mail: taogang@njust.edu.cn
2014-09-01
The stability of a binary airfoil with feedback control under stochastic disturbances, a non-Gaussian colored noise, is studied in this paper. First, based on approximation theories and methods, the non-Gaussian colored noise is simplified to an Ornstein-Uhlenbeck process. Furthermore, via the stochastic averaging method and the logarithmic polar transformation, a one-dimensional diffusion process is obtained. Finally, by applying the boundary conditions, the largest Lyapunov exponent, which determines the almost-sure stability of the system, and the effective region of the control parameters are calculated.
Analytical research on impacting load of aircraft crashing upon moveable concrete target
NASA Astrophysics Data System (ADS)
Zhu, Tong; Ou, Zhuocheng; Duan, Zhuoping; Huang, Fenglei
2018-03-01
The impact load of an aircraft crashing upon a moveable concrete target was analyzed in this paper by both theoretical and numerical methods. The aircraft was simplified as a one-dimensional rod, and stress-wave theory was used to derive a new formula. Furthermore, in order to compare with previous experimental data, a numerical calculation based on the new formula was carried out, which showed good agreement with the experimental data. The approach, a new formula combined with a particular numerical method, can predict not only the impact load but also the deviation between moveable and static concrete targets.
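For context, the classical Riera treatment of a deformable aircraft striking a rigid, fixed target writes the interface force as a crushing term plus a momentum-flux term; a natural extension to a moveable target replaces the aircraft velocity with the velocity relative to the target. The expression below is this classical form with an assumed relative-velocity extension, for illustration only, and is not necessarily the new formula derived in the paper:

\[
F(t) \;=\; P_c\bigl[x(t)\bigr] \;+\; \mu\bigl[x(t)\bigr]\,\bigl(v(t) - V_T(t)\bigr)^{2},
\]

where x(t) is the crushed length, P_c the crushing (buckling) load of the fuselage section currently being crushed, μ its mass per unit length, v the velocity of the uncrushed portion of the aircraft, and V_T the velocity of the target.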
Three-dimensional wax patterning of paper fluidic devices.
Renault, Christophe; Koehne, Jessica; Ricco, Antonio J; Crooks, Richard M
2014-06-17
In this paper we describe a method for three-dimensional wax patterning of microfluidic paper-based analytical devices (μPADs). The method is rooted in the fundamental details of wax transport in paper and provides a simple way to fabricate complex channel architectures such as hemichannels and fully enclosed channels. We show that three-dimensional μPADs can be fabricated with half as much paper by using hemichannels rather than ordinary open channels. We also provide evidence that fully enclosed channels are efficiently isolated from the exterior environment, decreasing contamination risks, simplifying the handling of the device, and slowing evaporation of solvents.
Sanders, Sharon; Flaws, Dylan; Than, Martin; Pickering, John W; Doust, Jenny; Glasziou, Paul
2016-01-01
Scoring systems are developed to assist clinicians in making a diagnosis. However, their uptake is often limited because they are cumbersome to use, requiring information on many predictors, or complicated calculations. We examined whether, and how, simplifications affected the performance of a validated score for identifying adults with chest pain in an emergency department who have low risk of major adverse cardiac events. We simplified the Emergency Department Assessment of Chest pain Score (EDACS) by three methods: (1) giving equal weight to each predictor included in the score, (2) reducing the number of predictors, and (3) using both methods--giving equal weight to a reduced number of predictors. The diagnostic accuracy of the simplified scores was compared with the original score in the derivation (n = 1,974) and validation (n = 909) data sets. There was no difference in the overall accuracy of the simplified versions of the score compared with the original EDACS as measured by the area under the receiver operating characteristic curve (0.74 to 0.75 for simplified versions vs. 0.75 for the original score in the validation cohort). With score cut-offs set to maintain the sensitivity of the combination of score and tests (electrocardiogram and cardiac troponin) at a level acceptable to clinicians (99%), simplification reduced the proportion of patients classified as low risk from 50% with the original score to between 22% and 42%. Simplification of a clinical score resulted in similar overall accuracy but reduced the proportion classified as low risk and therefore eligible for early discharge compared with the original score. Whether the trade-off is acceptable, will depend on the context in which the score is to be used. Developers of clinical scores should consider simplification as a method to increase uptake, but further studies are needed to determine the best methods of deriving and evaluating simplified scores. Copyright © 2016 Elsevier Inc. All rights reserved.
Chandan, Sanjay; Halli, Rajshekhar; Joshi, Samir; Chhabaria, Gaurav; Setiya, Sneha
2013-11-01
Management of pediatric mandibular fractures presents a unique challenge to surgeons in terms of its numerous variations compared to adults. Both conservative and open methods have been advocated with their obvious limitations and complications. However, conservative modalities may not be possible in grossly displaced fractures, which necessitate the open method of fixation. We present a novel and simplified technique of transosseous fixation of displaced pediatric mandibular fractures with polyglactin resorbable suture, which provides adequate stability without any interference with tooth buds and which is easy to master.
Approximate method for calculating free vibrations of a large-wind-turbine tower structure
NASA Technical Reports Server (NTRS)
Das, S. C.; Linscott, B. S.
1977-01-01
A set of ordinary differential equations was derived for a simplified structural dynamic lumped-mass model of a typical large-wind-turbine tower structure. Dunkerley's equation was used to arrive at a solution for the fundamental natural frequencies of the tower in bending and torsion. The ERDA-NASA 100-kW wind turbine tower structure was modeled, and the fundamental frequencies were determined by the simplified method described. The approximate fundamental natural frequencies for the tower agree within 18 percent with test data and analytical predictions.
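Dunkerley's equation estimates the fundamental natural frequency of the lumped-mass model from the frequencies ω_i that each lumped mass would have if it acted alone on the elastic structure, and it is known to give a slightly low (conservative) estimate:

\[
\frac{1}{\omega_{1}^{2}} \;\approx\; \sum_{i=1}^{N} \frac{1}{\omega_{i}^{2}} .
\]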
Simplified solution for point contact deformation between two elastic solids
NASA Technical Reports Server (NTRS)
Brewe, D. E.; Hamrock, B. J.
1976-01-01
A linear regression by the method of least squares is made on the geometric variables that occur in the equation for point contact deformation. The ellipticity and the complete elliptic integrals of the first and second kind are expressed as functions of the x,y-plane principal radii. The ellipticity was varied from 1 (circular contact) to 10 (a configuration approaching line contact). These simplified equations enable one to calculate the point-contact deformation easily, to within 3 percent, without resorting to charts or numerical methods.
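The regression described amounts to fitting curve fits to quantities computed from the exact elliptic-integral solution; the sketch below shows a least-squares power-law fit of the ellipticity as a function of the radius ratio, using placeholder data points (the actual fitted coefficients reported in the work are not reproduced here):

```python
import numpy as np

# Placeholder (radius-ratio, ellipticity) pairs standing in for values
# computed from the complete elliptic integrals of the first and second kind.
ratio = np.array([1.0, 2.0, 3.0, 5.0, 7.0, 10.0])
k_exact = np.array([1.00, 1.55, 2.07, 2.94, 3.69, 4.68])

# Least-squares fit of k = C * (Ry/Rx)**p, i.e. ln k = ln C + p * ln(ratio).
p, lnC = np.polyfit(np.log(ratio), np.log(k_exact), 1)
print(f"k ~= {np.exp(lnC):.4f} * (Ry/Rx)^{p:.3f}")
```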
Simplified form of tinnitus retraining therapy in adults: a retrospective study
Aazh, Hashir; Moore, Brian CJ; Glasberg, Brian R
2008-01-01
Background Since the first description of tinnitus retraining therapy (TRT), clinicians have modified and customised the method of TRT in order to suit their practice and their patients. A simplified form of TRT is used at Ealing Primary Care Trust Audiology Department. Simplified TRT is different from TRT in the type and (shorter) duration of the counseling but is similar to TRT in the application of sound therapy except for patients exhibiting tinnitus with no hearing loss and no decreased sound tolerance (wearable sound generators were not mandatory or recommended here, whereas they are for TRT). The main goal of this retrospective study was to assess the efficacy of simplified TRT. Methods Data were collected from a series of 42 consecutive patients who underwent simplified TRT for a period of 3 to 23 months. Perceived tinnitus handicap was measured by the Tinnitus Handicap Inventory (THI) and perceived tinnitus loudness, annoyance and the effect of tinnitus on life were assessed through the Visual Analog Scale (VAS). Results The mean THI and VAS scores were significantly decreased after 3 to 23 months of treatment. The mean decline of the THI score was 45 (SD = 22) and the difference between pre- and post-treatment scores was statistically significant. The mean decline of the VAS scores was 1.6 (SD = 2.1) for tinnitus loudness, 3.6 (SD = 2.6) for annoyance, and 3.9 (SD = 2.3) for effect on life. The differences between pre- and post-treatment VAS scores were statistically significant for tinnitus loudness, annoyance, and effect on life. The decline of THI scores was not significantly correlated with age and duration of tinnitus. Conclusion The results suggest that benefit may be obtained from a substantially simplified form of TRT. PMID:18980672
SEE rate estimation based on diffusion approximation of charge collection
NASA Astrophysics Data System (ADS)
Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A.
2018-03-01
The integral rectangular parallelepiped (IRPP) method remains the main approach to single event rate (SER) prediction for aerospace systems, despite the growing number of issues impairing the method's validity when applied to scaled technology nodes. One such issue is the uncertainty of parameter extraction in the IRPP method, which can lead to a spread of several orders of magnitude in the subsequently calculated SER. The paper presents an alternative approach to SER estimation based on a diffusion approximation of the charge collection by an IC element and a geometrical interpretation of the SEE cross-section. In contrast to the IRPP method, the proposed model includes only two parameters, which are uniquely determined from the experimental data for normal-incidence irradiation at an ion accelerator. This approach eliminates the necessity of arbitrary decisions during parameter extraction and thus greatly simplifies the calculation procedure and increases the robustness of the forecast.
A Micromechanics-Based Method for Multiscale Fatigue Prediction
NASA Astrophysics Data System (ADS)
Moore, John Allan
An estimated 80% of all structural failures are due to mechanical fatigue, often resulting in catastrophic, dangerous and costly failure events. However, an accurate model to predict fatigue remains an elusive goal. One of the major challenges is that fatigue is intrinsically a multiscale process, which is dependent on a structure's geometric design as well as its material's microscale morphology. The following work begins with a microscale study of fatigue nucleation around non-metallic inclusions. Based on this analysis, a novel multiscale method for fatigue predictions is developed. This method simulates macroscale geometries explicitly while concurrently calculating the simplified response of microscale inclusions, thus providing adequate detail on multiple scales for accurate fatigue life predictions. The methods herein provide insight into the multiscale nature of fatigue, while also developing a tool to aid in geometric design and material optimization for fatigue-critical devices such as biomedical stents and artificial heart valves.
A Rapid Empirical Method for Estimating the Gross Takeoff Weight of a High Speed Civil Transport
NASA Technical Reports Server (NTRS)
Mack, Robert J.
1999-01-01
During the cruise segment of the flight mission, aircraft flying at supersonic speeds generate sonic booms that are usually maximum at the beginning of cruise. The pressure signature with the shocks causing these perceived booms can be predicted if the aircraft's geometry, Mach number, altitude, angle of attack, and cruise weight are known. Most methods for estimating aircraft weight, especially beginning-cruise weight, are empirical and based on least-squares-fit equations that best represent a body of component weight data. The empirical method discussed in this report used simplified weight equations based on a study of performance and weight data from conceptual and real transport aircraft. Like other weight-estimation methods, weights were determined at several points in the mission. While these additional weights were found to be useful, it is the determination of beginning-cruise weight that is most important for the prediction of the aircraft's sonic-boom characteristics.
Suitability of analytical methods to measure solubility for the purpose of nanoregulation.
Tantra, Ratna; Bouwmeester, Hans; Bolea, Eduardo; Rey-Castro, Carlos; David, Calin A; Dogné, Jean-Michel; Jarman, John; Laborda, Francisco; Laloy, Julie; Robinson, Kenneth N; Undas, Anna K; van der Zande, Meike
2016-01-01
Solubility is an important physicochemical parameter in nanoregulation. If nanomaterial is completely soluble, then from a risk assessment point of view, its disposal can be treated much in the same way as "ordinary" chemicals, which will simplify testing and characterisation regimes. This review assesses potential techniques for the measurement of nanomaterial solubility and evaluates the performance against a set of analytical criteria (based on satisfying the requirements as governed by the cosmetic regulation as well as the need to quantify the concentration of free (hydrated) ions). Our findings show that no universal method exists. A complementary approach is thus recommended, to comprise an atomic spectrometry-based method in conjunction with an electrochemical (or colorimetric) method. This article shows that although some techniques are more commonly used than others, a huge research gap remains, related with the need to ensure data reliability.
Ha, Ji Won; Hahn, Jong Hoon
2017-02-01
Acupuncture sample injection is a simple method to deliver well-defined nanoliter-scale sample plugs in PDMS microfluidic channels. This acupuncture injection method in microchip CE has several advantages, including minimization of sample consumption, the capability of serial injections of different sample solutions into the same microchannel, and the capability of injecting sample plugs into any desired position of a microchannel. Herein, we demonstrate that the simple and cost-effective acupuncture sample injection method can be used for PDMS microchip-based field-amplified sample stacking in the most simplified straight channel by applying a single potential. We achieved an increase in electropherogram signals in the case of sample stacking. Furthermore, we show that microchip CGE of a ΦX174 DNA-HaeIII digest can be performed with the acupuncture injection method on a glass microchip while minimizing sample loss and voltage control hardware. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Zhu, Ning; Sun, Shouguang; Li, Qiang; Zou, Hua
2016-05-01
When a train runs at high speeds, the external exciting frequencies approach the natural frequencies of critical bogie components, thereby inducing strong elastic vibrations. The present international reliability test evaluation standards and design criteria for bogie frames are all based on the quasi-static deformation hypothesis; structural fatigue damage generated by elastic vibrations has not yet been included. In this paper, theoretical research and experimental validation are carried out on elastic dynamic load spectra for the bogie frame of a high-speed train. The construction of the load series that corresponds to the elastic dynamic deformation modes is studied, and a simplified form of the load series is obtained. A theory of simplified dynamic load-time histories is then deduced. Measured data from the Beijing-Shanghai Dedicated Passenger Line are used to derive the simplified dynamic load-time histories, and the simplified dynamic discrete load spectra of the bogie frame are established. Based on the damage consistency criterion and a genetic algorithm, damage consistency calibration of the simplified dynamic load spectra is finally performed. The computed results show that the simplified load series is reasonable. The calibrated damage corresponding to the elastic dynamic discrete load spectra can cover the actual damage under operating conditions and satisfies the safety requirement of the damage consistency criterion for the bogie frame. This research is helpful for investigating standardized load spectra for the bogie frames of high-speed trains.
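Damage consistency between the measured load-time histories and the simplified spectra is typically quantified with a Palmgren-Miner linear damage sum combined with a Basquin-type S-N curve; the formulation below is this standard form (the paper's exact damage model is not given in the abstract):

\[
D \;=\; \sum_{i} \frac{n_i}{N_i},
\qquad
N_i \;=\; \frac{C}{\Delta\sigma_i^{\,m}}
\quad\Longrightarrow\quad
D \;=\; \frac{1}{C}\sum_{i} n_i\,\Delta\sigma_i^{\,m},
\]

where n_i is the number of cycles counted at stress range Δσ_i and C, m are S-N curve constants; the simplified spectrum is then calibrated (here with a genetic algorithm) until its damage sum covers that of the measured histories.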
46 CFR 178.215 - Weight of passengers and crew.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., for which stability information is based on the results of a simplified stability proof test. (b... simplified stability proof test and the number of passengers and crew included in the total test weight... TONS) INTACT STABILITY AND SEAWORTHINESS Stability Instructions for Operating Personnel § 178.215...
Identification of an Efficient Gene Expression Panel for Glioblastoma Classification
Zelaya, Ivette; Laks, Dan R.; Zhao, Yining; Kawaguchi, Riki; Gao, Fuying; Kornblum, Harley I.; Coppola, Giovanni
2016-01-01
We present here a novel genetic algorithm-based random forest (GARF) modeling technique that enables a reduction in the complexity of large gene disease signatures to highly accurate, greatly simplified gene panels. When applied to 803 glioblastoma multiforme samples, this method allowed the 840-gene Verhaak et al. gene panel (the standard in the field) to be reduced to a 48-gene classifier, while retaining 90.91% classification accuracy, and outperforming the best available alternative methods. Additionally, using this approach we produced a 32-gene panel which allows for better consistency between RNA-seq and microarray-based classifications, improving cross-platform classification retention from 69.67% to 86.07%. A webpage producing these classifications is available at http://simplegbm.semel.ucla.edu. PMID:27855170
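The abstract does not include the GARF implementation itself; the sketch below is only a minimal illustration of the general idea it describes, under invented assumptions (synthetic data, small illustrative GA settings): a genetic algorithm searches over binary gene masks whose fitness is the cross-validated accuracy of a random forest restricted to the selected genes, lightly penalized for panel size.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Toy stand-in for an expression matrix: 200 samples x 100 "genes", 4 classes.
X, y = make_classification(n_samples=200, n_features=100, n_informative=12,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

def fitness(mask):
    # CV accuracy of a random forest restricted to the selected genes,
    # minus a small penalty on panel size to favor compact panels.
    if mask.sum() == 0:
        return 0.0
    rf = RandomForestClassifier(n_estimators=60, random_state=0)
    acc = cross_val_score(rf, X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.001 * mask.sum()

def evolve(pop_size=16, n_genes=100, generations=10, p_select=0.1):
    # Binary chromosomes: 1 = gene kept in the panel.
    pop = (rng.random((pop_size, n_genes)) < p_select).astype(int)
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_genes)                         # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_genes) < 0.01                      # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.vstack([parents, children])
    best = pop[np.argmax([fitness(ind) for ind in pop])]
    return np.flatnonzero(best)

print("selected gene indices:", evolve())
```

In practice the chromosome length would equal the full signature size (e.g., the 840-gene panel) and the fitness would be evaluated on the real expression matrix rather than this synthetic surrogate.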
Hirve, Siddhivinayak; Vounatsou, Penelope; Juvekar, Sanjay; Blomstedt, Yulia; Wall, Stig; Chatterji, Somnath; Ng, Nawi
2014-03-01
We compared prevalence estimates of self-rated health (SRH) derived indirectly using four different small-area estimation methods for the Vadu (small) area from the national Study on Global AGEing (SAGE) survey with estimates derived directly from the Vadu SAGE survey. The indirect synthetic estimate for Vadu was 24%, whereas the model-based estimates were 45.6% and 45.7%, with smaller prediction errors and comparable to the direct survey estimate of 50%. The model-based techniques were better suited to estimating the prevalence of SRH than the indirect synthetic method. We conclude that a simplified mixed-effects regression model can produce valid small-area estimates of SRH. © 2013 Published by Elsevier Ltd.
Refractive index inversion based on Mueller matrix method
NASA Astrophysics Data System (ADS)
Fan, Huaxi; Wu, Wenyuan; Huang, Yanhua; Li, Zhaozhao
2016-03-01
Starting from the Stokes and Jones vector formalisms, the relationship between the Mueller matrix elements and the refractive index was studied and simplified, and an expression for refractive index inversion based on the Mueller matrix was derived. The Mueller matrix elements for specular reflection were simulated at different incidence angles to analyze the influence of the angle of incidence and the refractive index, and the results were verified by measuring the Mueller matrix elements of a polished metal surface. The research shows that, under specular reflection, the result of the Mueller matrix inversion is consistent with experiment, so the approach can serve as a refractive index inversion method and provides a new route for target detection and recognition.
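For orientation, a commonly used form of the Mueller matrix for specular reflection from an optically isotropic surface is sketched below in terms of the Fresnel amplitude coefficients r_s and r_p; sign conventions vary between texts, and this is not necessarily the exact expression used by the authors.

```latex
M(\theta) \propto
\begin{pmatrix}
|r_s|^2+|r_p|^2 & |r_s|^2-|r_p|^2 & 0 & 0\\
|r_s|^2-|r_p|^2 & |r_s|^2+|r_p|^2 & 0 & 0\\
0 & 0 & 2\,\mathrm{Re}(r_s r_p^{*}) & 2\,\mathrm{Im}(r_s r_p^{*})\\
0 & 0 & -2\,\mathrm{Im}(r_s r_p^{*}) & 2\,\mathrm{Re}(r_s r_p^{*})
\end{pmatrix},
\qquad
\frac{m_{01}}{m_{00}}=\frac{|r_s|^2-|r_p|^2}{|r_s|^2+|r_p|^2}.
```

Because r_s(θ, n) and r_p(θ, n) follow the Fresnel equations, measured element ratios such as m01/m00 at a known incidence angle can be inverted numerically for the refractive index n, which is the essence of the inversion described above.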
Efficient calculation of the polarizability: a simplified effective-energy technique
NASA Astrophysics Data System (ADS)
Berger, J. A.; Reining, L.; Sottile, F.
2012-09-01
In a recent publication [J.A. Berger, L. Reining, F. Sottile, Phys. Rev. B 82, 041103(R) (2010)] we introduced the effective-energy technique to calculate in an accurate and numerically efficient manner the GW self-energy as well as the polarizability, which is required to evaluate the screened Coulomb interaction W. In this work we show that the effective-energy technique can be used to further simplify the expression for the polarizability without a significant loss of accuracy. In contrast to standard sum-over-state methods where huge summations over empty states are required, our approach only requires summations over occupied states. The three simplest approximations we obtain for the polarizability are explicit functionals of an independent- or quasi-particle one-body reduced density matrix. We provide evidence of the numerical accuracy of this simplified effective-energy technique as well as an analysis of our method.
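As background, the standard sum-over-states (Adler–Wiser-type) independent-particle polarizability whose empty-state summation the effective-energy technique is designed to avoid can be written as follows (conventions for the infinitesimal iη vary between time-ordered and retarded response functions):

```latex
\chi^{0}(\mathbf{r},\mathbf{r}';\omega)=
\sum_{v}^{\mathrm{occ}}\sum_{c}^{\mathrm{unocc}}
\left[
\frac{\phi_v^{*}(\mathbf{r})\phi_c(\mathbf{r})\,\phi_c^{*}(\mathbf{r}')\phi_v(\mathbf{r}')}
     {\omega-(\epsilon_c-\epsilon_v)+i\eta}
-\frac{\phi_v(\mathbf{r})\phi_c^{*}(\mathbf{r})\,\phi_c(\mathbf{r}')\phi_v^{*}(\mathbf{r}')}
     {\omega+(\epsilon_c-\epsilon_v)-i\eta}
\right].
```

Replacing the explicit sum over empty states c by a closure relation with a common effective energy removes the unoccupied-state summation, which is what leaves only occupied-state sums and makes the simplest approximations explicit functionals of an independent- or quasi-particle one-body reduced density matrix, as stated in the abstract.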
Toward a Definition of the Engineering Method.
ERIC Educational Resources Information Center
Koen, Billy V.
1988-01-01
Describes a preliminary definition of engineering method as well as a definition and examples of engineering heuristics. After discussing some alternative definitions of the engineering method, a simplified definition of the engineering method is suggested. (YP)
Research Participants’ Understanding of and Reactions to Certificates of Confidentiality
Check, Devon K.; Ammarell, Natalie
2013-01-01
Background Certificates of Confidentiality are intended to facilitate participation in critical public health research by protecting against forced disclosure of identifying data in legal proceedings, but little is known about the effect of Certificate descriptions in consent forms. Methods To gain preliminary insights, we conducted qualitative interviews with 50 HIV-positive individuals in Durham, North Carolina to explore their subjective understanding of Certificate descriptions and whether their reactions differed based on receiving a standard versus simplified description. Results Most interviewees were neither reassured nor alarmed by Certificate information, and most said it would not influence their willingness to participate or provide truthful information. However, compared with those receiving the simplified description, more who read the standard description said it raised new concerns, that their likelihood of participating would be lower, and that they might be less forthcoming. Most interviewees said they found the Certificate description clear, but standard-group participants often found particular words and phrases confusing, while simplified-group participants more often questioned the information’s substance. Conclusions Valid informed consent requires comprehension and voluntariness. Our findings highlight the importance of developing consent descriptions of Certificates and other confidentiality protections that are simple and accurate. These qualitative results provide rich detail to inform a larger, quantitative study that would permit further rigorous comparisons. PMID:24563806
ERIC Educational Resources Information Center
Starbuck, Ethel
The purpose of the study was to determine whether higher shorthand speeds were achieved by high school students in a 1-year shorthand course through the use of Simplified Gregg Shorthand or through the use of Diamond Jubilee (DJ) Gregg Shorthand. The control group consisted of 75 students enrolled in Simplified Shorthand during the years…
Nonstandard and Higher-Order Finite-Difference Methods for Electromagnetics
2009-10-26
[List-of-figures fragment] The recoverable captions refer to a simplified fuselage model filled with 90 passengers, a top-view photograph of the expanded polystyrene passenger support (made from 1"-thick sheet to keep the passengers in their designated locations and upright), and measured S11 of the exterior antenna of the simplified fuselage.
Report on FY15 alloy 617 code rules development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sham, Sam; Jetter, Robert I; Hollinger, Greg
2015-09-01
Due to its strength at very high temperatures, up to 950°C (1742°F), Alloy 617 is the reference construction material for structural components that operate at or near the outlet temperature of the very high temperature gas-cooled reactors. However, the current rules in the ASME Section III, Division 5 Subsection HB, Subpart B for the evaluation of strain limits and creep-fatigue damage using simplified methods based on elastic analysis have been deemed inappropriate for Alloy 617 at temperatures above 650°C (1200°F) (Corum and Brass, Proceedings of ASME 1991 Pressure Vessels and Piping Conference, PVP-Vol. 215, p.147, ASME, NY, 1991). The rationale for this exclusion is that at higher temperatures it is not feasible to decouple plasticity and creep, which is the basis for the current simplified rules. This temperature, 650°C (1200°F), is well below the temperature range of interest for this material for the high temperature gas-cooled reactors and the very high temperature gas-cooled reactors. The only current alternative is, thus, a full inelastic analysis requiring sophisticated material models that have not yet been formulated and verified. To address these issues, proposed code rules have been developed which are based on the use of elastic-perfectly plastic (EPP) analysis methods applicable to very high temperatures. The proposed rules for strain limits and creep-fatigue evaluation were initially documented in the technical literature (Carter, Jetter and Sham, Proceedings of ASME 2012 Pressure Vessels and Piping Conference, papers PVP 2012 28082 and PVP 2012 28083, ASME, NY, 2012), and have been recently revised to incorporate comments and simplify their application. Background documents have been developed for these two code cases to support the ASME Code committee approval process. These background documents for the EPP strain limits and creep-fatigue code cases are documented in this report.
2-D transmitral flows simulation by means of the immersed boundary method on unstructured grids
NASA Astrophysics Data System (ADS)
Denaro, F. M.; Sarghini, F.
2002-04-01
Interaction between computational fluid dynamics and clinical research has recently allowed a deeper understanding of the physiology of complex phenomena involving cardio-vascular mechanisms. The aim of this paper is to develop a simplified numerical model based on the Immersed Boundary Method and to perform numerical simulations in order to study the cardiac diastolic phase, during which the left ventricle is filled with blood flowing from the atrium through the mitral valve. As one of the diagnostic problems faced by clinicians is the lack of a univocal definition of diastolic performance from the velocity measurements obtained by echo-Doppler techniques, numerical simulations are expected to provide insight both into the physics of the diastole and into the interpretation of experimental data. An innovative application of the Immersed Boundary Method on unstructured grids is presented, fulfilling accuracy requirements related to the development of a thin boundary layer along the moving immersed boundary. It appears that this coupling between unstructured meshes and the Immersed Boundary Method is a promising technique when a wide range of spatial scales is involved together with a moving boundary. Numerical simulations are performed in a range of physiological parameters, and a qualitative comparison with experimental data is presented in order to demonstrate that, despite the simplified model, the main physiological characteristics of the diastole are well represented.
Ackers-Johnson, Matthew; Li, Peter Yiqing; Holmes, Andrew P.; O’Brien, Sian-Marie; Pavlovic, Davor; Foo, Roger S.
2018-01-01
Rationale Cardiovascular disease represents a global pandemic. The advent of and recent advances in mouse genomics, epigenomics, and transgenics offer ever-greater potential for powerful avenues of research. However, progress is often constrained by unique complexities associated with the isolation of viable myocytes from the adult mouse heart. Current protocols rely on retrograde aortic perfusion using specialized Langendorff apparatus, which poses considerable logistical and technical barriers to researchers and demands extensive training investment. Objective To identify and optimize a convenient, alternative approach, allowing the robust isolation and culture of adult mouse cardiac myocytes using only common surgical and laboratory equipment. Methods and Results Cardiac myocytes were isolated with yields comparable to those in published Langendorff-based methods, using direct needle perfusion of the LV ex vivo and without requirement for heparin injection. Isolated myocytes can be cultured antibiotic free, with retained organized contractile and mitochondrial morphology, transcriptional signatures, calcium handling, responses to hypoxia, neurohormonal stimulation, and electric pacing, and are amenable to patch clamp and adenoviral gene transfer techniques. Furthermore, the methodology permits concurrent isolation, separation, and coculture of myocyte and nonmyocyte cardiac populations. Conclusions We present a novel, simplified method, demonstrating concomitant isolation of viable cardiac myocytes and nonmyocytes from the same adult mouse heart. We anticipate that this new approach will expand and accelerate innovative research in the field of cardiac biology. PMID:27502479
NASA Technical Reports Server (NTRS)
Kaukler, W. F.; Frazier, D. O.; Facemire, B.
1984-01-01
Equilibrium temperature-composition diagrams were determined for the two organic systems, succinonitrile-benzene and succinonitrile-cyclohexanol. Measurements were made using the common thermal analysis methods and UV spectrophotometry. Succinonitrile-benzene monotectic was chosen for its low affinity for water and because UV analysis would be simplified. Succinonitrile-cyclohexanol was chosen because both components are transparent models for metallic solidification, as opposed to the other known succinonitrile-based monotectics.
A methodology to estimate uncertainty for emission projections through sensitivity analysis.
Lumbreras, Julio; de Andrés, Juan Manuel; Pérez, Javier; Borge, Rafael; de la Paz, David; Rodríguez, María Encarnación
2015-04-01
Air pollution abatement policies must be based on quantitative information on current and future emissions of pollutants. As uncertainties in emission projections are inevitable and traditional statistical treatments of uncertainty are highly time- and resource-consuming, a simplified methodology for nonstatistical uncertainty estimation based on sensitivity analysis is presented in this work. The methodology was applied to the "with measures" scenario for Spain, concretely for the 12 highest-emitting sectors in terms of greenhouse gas and air pollutant emissions. Examples of the methodology's application for two important sectors (power plants, and agriculture and livestock) are shown and explained in depth. Uncertainty bands were obtained up to 2020 by modifying the driving factors of the 12 selected sectors, and the methodology was tested against a recomputed emission trend under a low economic-growth perspective and official figures for 2010, showing a very good performance. A solid understanding and quantification of uncertainties related to atmospheric emission inventories and projections provide useful information for policy negotiations. However, as many of those uncertainties are irreducible, there is an interest in how they could be managed in order to derive robust policy conclusions. Taking this into account, a method developed to use sensitivity analysis as a source of information to derive nonstatistical uncertainty bands for emission projections is presented and applied to Spain. This method simplifies uncertainty assessment and allows other countries to take advantage of their sensitivity analyses.
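The sketch below illustrates the general sensitivity-analysis idea on an invented toy projection (constant growth rates for an activity driver and an emission factor); the ranges, growth rates and base-year value are placeholders, not the Spanish inventory's actual drivers.

```python
import numpy as np

# Hypothetical illustration: an emission projection E(t) = activity(t) * emission_factor(t),
# with uncertainty bands taken as the envelope of sensitivity runs over the driving factors.
years = np.arange(2010, 2021)
E0 = 100.0                      # base-year emissions (arbitrary units)

def project(growth, ef_trend):
    # Simple compound-growth projection of the two driving factors.
    t = years - years[0]
    return E0 * (1 + growth) ** t * (1 + ef_trend) ** t

central = project(0.02, -0.01)  # central assumptions: +2 %/yr activity, -1 %/yr emission factor

# Sensitivity runs: vary each driving factor over an assumed plausible range.
runs = [project(g, e)
        for g in (0.00, 0.02, 0.04)       # low / central / high activity growth
        for e in (-0.02, -0.01, 0.00)]    # fast / central / no emission-factor improvement

lower, upper = np.min(runs, axis=0), np.max(runs, axis=0)
for y, lo, c, hi in zip(years, lower, central, upper):
    print(f"{y}: {lo:6.1f} <= {c:6.1f} <= {hi:6.1f}")
```

The envelope of the sensitivity runs plays the role of the nonstatistical uncertainty band described in the abstract.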
Teaching neurology to medical students with a simplified version of team-based learning.
Brich, Jochen; Jost, Meike; Brüstle, Peter; Giesler, Marianne; Rijntjes, Michel
2017-08-08
To compare the effect of a simplified version of team-based learning (sTBL), an active learning/small group instructional strategy, with that of the traditionally used small group interactive seminars on the acquisition of knowledge and clinical reasoning (CR) skills. Third- and fourth-year medical students (n = 122) were randomly distributed into 2 groups. A crossover design was used in which 2 neurologic topics were taught by sTBL and 2 by small group interactive seminars. Knowledge was assessed with a multiple-choice question examination (MCQE), CR skills with a key feature problem examination (KFPE). Questionnaires were used for further methodologic evaluation. No group differences were found in the MCQE results. sTBL instruction of the topic "acute altered mental status" was associated with a significantly better student performance in the KFPE ( p = 0.008), with no differences in the other 3 topics covered. Although both teaching methods were highly rated by the students, a clear majority voted for sTBL as their preferred future teaching method. sTBL served as an equivalent alternative to small group interactive seminars for imparting knowledge and teaching CR skills, and was particularly advantageous for teaching CR in the setting of a complex neurologic topic. Furthermore, students reported a strong preference for the sTBL approach, making it a promising tool for effectively teaching neurology. © 2017 American Academy of Neurology.
Ageing airplane repair assessment program for Airbus A300
NASA Technical Reports Server (NTRS)
Gaillardon, J. M.; Schmidt, HANS-J.; Brandecker, B.
1992-01-01
This paper describes the current status of the repair categorization activities and includes all details about the methodologies developed for determining the inspection program for the skin of pressurized fuselages. For inspection threshold determination, two methods based on a fatigue-life approach are defined: a simplified and a detailed method. The detailed method considers 15 different parameters to assess the influences of material, geometry, size, location, aircraft usage, and workmanship on the fatigue life of the repair and the original structure. For definition of the inspection intervals, a general method is developed that applies to all concerned repairs. For this, the initial flaw concept is used, considering 6 parameters and the detectable flaw sizes depending on the proposed nondestructive inspection methods. An alternative method is provided for small repairs, allowing visual inspection with shorter intervals.
Method to optimize optical switch topology for photonic network-on-chip
NASA Astrophysics Data System (ADS)
Zhou, Ting; Jia, Hao
2018-04-01
In this paper, we propose a method to optimize optical switches by substituting optical waveguide crossings for optical switching units, together with an optimization algorithm that completes the optimization automatically. The functionality of the optical switch remains unchanged under optimization. With this method, we simplify the topology of the optical switch, so the insertion loss and power consumption of the whole switch can be effectively minimized. Simulation results show that the number of switching units of a Spanke-Benes-based optical switch can be reduced by 16.7%, 20%, 20%, 19% and 17.9% for scales from 4 × 4 to 8 × 8, respectively. As a proof of concept, an experimental demonstration of an optimized six-port optical switch based on the Spanke-Benes structure on a silicon photonics chip is reported.
NASA Astrophysics Data System (ADS)
Peckham, S. D.; Kelbert, A.; Rudan, S.; Stoica, M.
2016-12-01
Standardized metadata for models is the key to reliable and greatly simplified coupling in model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System). This model metadata also helps model users to understand the important details that underpin computational models and to compare the capabilities of different models. These details include simplifying assumptions on the physics, governing equations and the numerical methods used to solve them, discretization of space (the grid) and time (the time-stepping scheme), state variables (input or output), model configuration parameters. This kind of metadata provides a "deep description" of a computational model that goes well beyond other types of metadata (e.g. author, purpose, scientific domain, programming language, digital rights, provenance, execution) and captures the science that underpins a model. While having this kind of standardized metadata for each model in a repository opens up a wide range of exciting possibilities, it is difficult to collect this information and a carefully conceived "data model" or schema is needed to store it. Automated harvesting and scraping methods can provide some useful information, but they often result in metadata that is inaccurate or incomplete, and this is not sufficient to enable the desired capabilities. In order to address this problem, we have developed a browser-based tool called the MCM Tool (Model Component Metadata) which runs on notebooks, tablets and smart phones. This tool was partially inspired by the TurboTax software, which greatly simplifies the necessary task of preparing tax documents. It allows a model developer or advanced user to provide a standardized, deep description of a computational geoscience model, including hydrologic models. Under the hood, the tool uses a new ontology for models built on the CSDMS Standard Names, expressed as a collection of RDF files (Resource Description Framework). This ontology is based on core concepts such as variables, objects, quantities, operations, processes and assumptions. The purpose of this talk is to present details of the new ontology and to then demonstrate the MCM Tool for several hydrologic models.
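As a rough illustration of what such "deep description" metadata might look like when stored as RDF triples, the snippet below uses rdflib with an invented namespace and invented property names; it does not reproduce the actual CSDMS Standard Names ontology or the MCM Tool schema.

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef

# Hypothetical namespace and terms, for illustration only.
MCM = Namespace("http://example.org/mcm#")

g = Graph()
model = URIRef("http://example.org/models/simple_snowmelt")

# A few "deep description" triples: variables, assumptions, numerics, grid.
g.add((model, RDF.type, MCM.ComputationalModel))
g.add((model, MCM.hasInputVariable, Literal("atmosphere_bottom_air__temperature")))
g.add((model, MCM.hasOutputVariable, Literal("snowpack__melt_volume_flux")))
g.add((model, MCM.hasAssumption, Literal("degree-day melt approximation")))
g.add((model, MCM.usesTimeSteppingScheme, Literal("explicit Euler")))
g.add((model, MCM.usesGridType, Literal("uniform rectilinear")))

print(g.serialize(format="turtle"))
```

In the real tool this kind of graph would be expressed against the ontology built on the CSDMS Standard Names rather than the example.org terms used here.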
Simplified filtered Smith predictor for MIMO processes with multiple time delays.
Santos, Tito L M; Torrico, Bismark C; Normey-Rico, Julio E
2016-11-01
This paper proposes a simplified tuning strategy for the multivariable filtered Smith predictor. It is shown that offset-free control can be achieved with step references and disturbances regardless of the poles of the primary controller, i.e., integral action is not explicitly required. This strategy reduces the number of design parameters and simplifies tuning procedure because the implicit integrative poles are not considered for design purposes. The simplified approach can be used to design continuous-time or discrete-time controllers. Three case studies are used to illustrate the advantages of the proposed strategy if compared with the standard approach, which is based on the explicit integrative action. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
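For context, the single-loop filtered Smith predictor structure (written here for a SISO discrete-time plant; the paper addresses the multivariable case with multiple delays) predicts the undelayed output and corrects it with a filtered model-error term:

```latex
\hat{y}(k+d\,|\,k)=G_n(z)\,u(k)+F_r(z)\bigl[y(k)-z^{-d}G_n(z)\,u(k)\bigr],
\qquad
u(k)=C(z)\bigl[r(k)-\hat{y}(k+d\,|\,k)\bigr],
```

where G_n is the delay-free nominal model, d the nominal delay, F_r the predictor (robustness) filter and C the primary controller. The tuning simplification described in the abstract concerns how F_r and C are chosen so that offset-free behavior is obtained without explicitly treating the implicit integrative poles; the expressions above only fix the structure, not that tuning.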
Use of Structure as a Basis for Abstraction in Air Traffic Control
NASA Technical Reports Server (NTRS)
Davison, Hayley J.; Hansman, R. John
2004-01-01
The safety and efficiency of the air traffic control domain are highly dependent on the capabilities and limitations of its human controllers. Past research has indicated that structure provided by the airspace and procedures could aid in simplifying controllers' cognitive tasks. In this paper, observations, interviews, voice command data analyses, and radar analyses were conducted at the Boston Terminal Radar Approach Control (TRACON) facility to determine whether there was evidence of controllers using structure to simplify their cognitive processes. The data suggest that controllers do use structure-based abstractions to simplify their cognitive processes, particularly the projection task. How structure simplifies the projection task, and the implications of understanding the benefits structure provides to it, are discussed.
Simplified Interface to Complex Memory Hierarchies 1.x
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lang, Michael; Ionkov, Latchesar; Williams, Sean
2017-02-21
Memory systems are expected to get evermore complicated in the coming years, and it isn't clear exactly what form that complexity will take. On the software side, a simple, flexible way of identifying and working with memory pools is needed. Additionally, most developers seek code portability and do not want to learn the intricacies of complex memory. Hence, we believe that a library for interacting with complex memory systems should expose two kinds of abstraction: First, a low-level, mechanism-based interface designed for the runtime or advanced user that wants complete control, with its focus on simplified representation but with all decisions left to the caller. Second, a high-level, policy-based interface designed for ease of use for the application developer, in which we aim for best-practice decisions based on application intent. We have developed such a library, called SICM: Simplified Interface to Complex Memory.
Simplify Web Development for Faculty and Promote Instructional Design.
ERIC Educational Resources Information Center
Pedersen, David C.
Faculty members are often overwhelmed with the prospect of implementing Web-based instruction. In an effort to simplify the process and incorporate some basic instructional design elements, the Educational Technology Team at Embry Riddle Aeronautical University created a course template for WebCT. Utilizing rapid prototyping, the template…
Low-Density Parity-Check Code Design Techniques to Simplify Encoding
NASA Astrophysics Data System (ADS)
Perez, J. M.; Andrews, K.
2007-11-01
This work describes a method for encoding low-density parity-check (LDPC) codes based on the accumulate-repeat-4-jagged-accumulate (AR4JA) scheme, using the low-density parity-check matrix H instead of the dense generator matrix G. The use of the H matrix to encode allows a significant reduction in memory consumption and provides the encoder design a great flexibility. Also described are new hardware-efficient codes, based on the same kind of protographs, which require less memory storage and area, allowing at the same time a reduction in the encoding delay.
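The sketch below is not the AR4JA-specific encoder; it only illustrates the general idea of encoding directly from a parity-check matrix by partitioning H = [H_m | H_p] and solving H_p p = H_m m over GF(2), shown with a small Hamming-code H for concreteness.

```python
import numpy as np

def gf2_solve(A, b):
    # Solve A x = b over GF(2) by Gauss-Jordan elimination (A square, invertible).
    A, b = A.copy() % 2, b.copy() % 2
    n = A.shape[0]
    for col in range(n):
        pivot = col + int(np.argmax(A[col:, col]))
        if A[pivot, col] == 0:
            raise ValueError("parity submatrix is singular over GF(2)")
        A[[col, pivot]], b[[col, pivot]] = A[[pivot, col]], b[[pivot, col]]
        for row in range(n):
            if row != col and A[row, col]:
                A[row] ^= A[col]
                b[row] ^= b[col]
    return b

def encode_from_H(H, msg):
    # Systematic codeword c = [msg | parity] with H c^T = 0 (mod 2),
    # obtained by solving H_p p = H_m m for the parity bits.
    m_rows, n = H.shape
    k = n - m_rows
    H_m, H_p = H[:, :k], H[:, k:]
    parity = gf2_solve(H_p, (H_m @ msg) % 2)
    return np.concatenate([msg, parity])

# Demo with a small Hamming(7,4) parity-check matrix (not an AR4JA protograph).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
msg = np.array([1, 0, 1, 1])
c = encode_from_H(H, msg)
print("codeword:", c, "syndrome:", (H @ c) % 2)
```

For a structured code, the parity part of H is chosen (or transformed) so that this solve reduces to sparse back-substitution, which is the kind of structure exploited to keep encoder memory low compared with storing a dense generator matrix.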
Students’ errors in solving combinatorics problems observed from the characteristics of RME modeling
NASA Astrophysics Data System (ADS)
Meika, I.; Suryadi, D.; Darhim
2018-01-01
This article is based on learning evaluation results concerning students' errors in solving combinatorics problems, observed from the characteristics of Realistic Mathematics Education (RME), that is, modeling. A descriptive method was employed, involving 55 students from two international-based pilot state senior high schools in Banten. The findings of the study suggest that the students still committed errors in simplifying the problem in as much as 46% of cases; errors in making the mathematical model (horizontal mathematization) in 60%; errors in finishing the mathematical model (vertical mathematization) in 65%; and errors in interpretation as well as validation in 66%.
Ko, Rachel Jia Min; Lim, Swee Han; Wu, Vivien Xi; Leong, Tak Yam; Liaw, Sok Ying
2018-01-01
INTRODUCTION Simplifying the learning of cardiopulmonary resuscitation (CPR) is advocated to improve skill acquisition and retention. A simplified CPR training programme focusing on continuous chest compression, with a simple landmark tracing technique, was introduced to laypeople. The study aimed to examine the effectiveness of the simplified CPR training in improving lay rescuers’ CPR performance as compared to standard CPR. METHODS A total of 85 laypeople (aged 21–60 years) were recruited and randomly assigned to undertake either a two-hour simplified or standard CPR training session. They were tested two months after the training on a simulated cardiac arrest scenario. Participants’ performance on the sequence of CPR steps was observed and evaluated using a validated CPR algorithm checklist. The quality of chest compression and ventilation was assessed from the recording manikins. RESULTS The simplified CPR group performed significantly better on the CPR algorithm when compared to the standard CPR group (p < 0.01). No significant difference was found between the groups in time taken to initiate CPR. However, a significantly higher number of compressions and proportion of adequate compressions was demonstrated by the simplified group than the standard group (p < 0.01). Hands-off time was significantly shorter in the simplified CPR group than in the standard CPR group (p < 0.001). CONCLUSION Simplifying the learning of CPR by focusing on continuous chest compressions, with simple hand placement for chest compression, could lead to better acquisition and retention of CPR algorithms, and better quality of chest compressions than standard CPR. PMID:29167910
A simplified focusing and astigmatism correction method for a scanning electron microscope
NASA Astrophysics Data System (ADS)
Lu, Yihua; Zhang, Xianmin; Li, Hai
2018-01-01
Defocus and astigmatism can lead to blurred images and poor resolution. This paper presents a simplified method for focusing and astigmatism correction of a scanning electron microscope (SEM). The method consists of two steps. In the first step, the fast Fourier transform (FFT) of the SEM image is performed and the FFT is subsequently processed with a threshold to achieve a suitable result. In the second step, the threshold FFT is used for ellipse fitting to determine the presence of defocus and astigmatism. The proposed method clearly provides the relationships between the defocus, the astigmatism and the direction of stretching of the FFT, and it can determine the astigmatism in a single image. Experimental studies are conducted to demonstrate the validity of the proposed method.
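A minimal sketch of the two-step idea, assuming a moment-based ellipse fit rather than the authors' exact fitting procedure: threshold the centered log-magnitude FFT of the image and fit an ellipse to the surviving support. A nearly circular, broad support suggests good focus, while an elongated support indicates astigmatism oriented along the fitted major axis.

```python
import numpy as np

def fft_ellipse_metrics(image, frac=0.1):
    # Step 1: centered log-magnitude spectrum, thresholded at a fraction of its range.
    spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(image))))
    mask = spec > spec.min() + frac * (spec.max() - spec.min())

    # Step 2: moment-based ellipse fit to the thresholded support.
    ys, xs = np.nonzero(mask)
    y, x = ys - ys.mean(), xs - xs.mean()
    cov = np.array([[np.mean(x * x), np.mean(x * y)],
                    [np.mean(x * y), np.mean(y * y)]])
    evals, evecs = np.linalg.eigh(cov)          # ascending eigenvalues
    minor, major = np.sqrt(evals)               # axis lengths (up to a common scale)
    angle = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))   # major-axis direction
    ellipticity = major / max(minor, 1e-12)     # ~1 means round spectrum (focused)
    return major, minor, angle, ellipticity

# Placeholder image; in practice this would be the acquired SEM frame.
img = np.random.default_rng(0).normal(size=(256, 256))
print(fft_ellipse_metrics(img))
```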
NASA Technical Reports Server (NTRS)
Murthy, A. V.
1987-01-01
A simplified four-wall interference assessment method has been described, and a computer program has been developed to facilitate correction of the airfoil data obtained in the Langley 0.3-m Transonic Cryogenic Tunnel (TCT). The procedure adopted is to first apply a blockage correction due to sidewall boundary-layer effects by various methods. The sidewall boundary-layer corrected data are then used to calculate the top- and bottom-wall interference effects by the method of Capallier, Chevallier and Bouinol, using the measured wall pressure distribution and the model force coefficients. The interference corrections obtained by the present method have been compared with those from other methods and found to give good agreement for the experimental data obtained in the TCT with slotted top and bottom walls.
Simplified aerosol representations in global modeling
NASA Astrophysics Data System (ADS)
Kinne, Stefan; Peters, Karsten; Stevens, Bjorn; Rast, Sebastian; Schutgens, Nick; Stier, Philip
2015-04-01
The detailed treatment of aerosol in global modeling is complex and time-consuming. Thus simplified approaches are investigated, which prescribe 4D (space and time) distributions of aerosol optical properties and of aerosol microphysical properties. Aerosol optical properties are required to assess aerosol direct radiative effects and aerosol microphysical properties (in terms of their ability as aerosol nuclei to modify cloud droplet concentrations) are needed to address the indirect aerosol impact on cloud properties. Following the simplifying concept of the monthly gridded (1x1 lat/lon) aerosol climatology (MAC), new approaches are presented and evaluated against more detailed methods, including comparisons to detailed simulations with complex aerosol component modules.
State and force observers based on multibody models and the indirect Kalman filter
NASA Astrophysics Data System (ADS)
Sanjurjo, Emilio; Dopico, Daniel; Luaces, Alberto; Naya, Miguel Ángel
2018-06-01
The aim of this work is to present two new methods to provide state observers by combining multibody simulations with indirect extended Kalman filters. One of the methods presented also provides input force estimation. The observers have been applied to two mechanisms with four different sensor configurations and compared with other multibody-based observers found in the literature, namely the unscented Kalman filter (UKF) and the indirect extended Kalman filter with simplified Jacobians (errorEKF), to evaluate their behavior. The new methods have a somewhat higher computational cost than the errorEKF, but still much lower than the UKF. Regarding accuracy, both are better than the errorEKF. The method with input force estimation also outperforms the UKF, while the method without force estimation achieves results almost identical to those of the UKF. All the methods have been implemented as a reusable MATLAB® toolkit, which has been released as open source at https://github.com/MBDS/mbde-matlab.
Kinematic Determination of an Unmodeled Serial Manipulator by Means of an IMU
NASA Astrophysics Data System (ADS)
Ciarleglio, Constance A.
Kinematic determination for an unmodeled manipulator is usually done through a priori knowledge of the manipulator's physical characteristics or external sensor information. The mathematics of the kinematic estimation, often based on the Denavit-Hartenberg convention, are complex and have high computation requirements, in addition to being unique to the manipulator for which the method is developed. Analytical methods that can compute kinematics on the fly have the potential to be highly beneficial in dynamic environments where different configurations and variable manipulator types are often required. This thesis derives a new screw-theory-based method of kinematic determination, using a single inertial measurement unit (IMU), for use with any serial, revolute manipulator. The method allows the expansion of reconfigurable manipulator design and simplifies the kinematic process for existing manipulators. A simulation is presented where the theory of the method is verified and characterized with error. The method is then implemented on an existing manipulator as a verification of functionality.
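For reference, the product-of-exponentials form of the forward kinematics of an n-joint serial revolute manipulator, which is the standard screw-theoretic starting point (the thesis's contribution is recovering the screw axes from IMU measurements rather than from a priori geometry, which is not shown here):

```latex
T(\theta_1,\dots,\theta_n)=e^{[\mathcal{S}_1]\theta_1}e^{[\mathcal{S}_2]\theta_2}\cdots e^{[\mathcal{S}_n]\theta_n}\,M,
\qquad
[\mathcal{S}_i]=\begin{pmatrix}[\omega_i]_{\times} & v_i\\ 0 & 0\end{pmatrix},
\quad
v_i=-\,\omega_i\times q_i,
```

where ω_i is the unit direction of joint axis i, q_i is a point on that axis (both expressed in the fixed frame at the home configuration), and M is the home pose of the end-effector.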
Removal of the Gibbs phenomenon and its application to fast-Fourier-transform-based mode solvers.
Wangüemert-Pérez, J G; Godoy-Rubio, R; Ortega-Moñux, A; Molina-Fernández, I
2007-12-01
A simple strategy for accurately recovering discontinuous functions from their Fourier series coefficients is presented. The aim of the proposed approach, named spectrum splitting (SS), is to remove the Gibbs phenomenon by making use of signal-filtering-based concepts and some properties of the Fourier series. While the technique can be used in a vast range of situations, it is particularly suitable for being incorporated into fast-Fourier-transform-based electromagnetic mode solvers (FFT-MSs), which are known to suffer from very poor convergence rates when applied to situations where the field distributions are highly discontinuous (e.g., silicon-on-insulator photonic wires). The resultant method, SS-FFT-MS, is exhaustively tested under the assumption of a simplified one-dimensional model, clearly showing a dramatic improvement of the convergence rates with respect to the original FFT-based methods.
Scaling earthquake ground motions for performance-based assessment of buildings
Huang, Y.-N.; Whittaker, A.S.; Luco, N.; Hamburger, R.O.
2011-01-01
The impact of alternate ground-motion scaling procedures on the distribution of displacement responses in simplified structural systems is investigated. Recommendations are provided for selecting and scaling ground motions for performance-based assessment of buildings. Four scaling methods are studied, namely, (1)geometric-mean scaling of pairs of ground motions, (2)spectrum matching of ground motions, (3)first-mode-period scaling to a target spectral acceleration, and (4)scaling of ground motions per the distribution of spectral demands. Data were developed by nonlinear response-history analysis of a large family of nonlinear single degree-of-freedom (SDOF) oscillators that could represent fixed-base and base-isolated structures. The advantages and disadvantages of each scaling method are discussed. The relationship between spectral shape and a ground-motion randomness parameter, is presented. A scaling procedure that explicitly considers spectral shape is proposed. ?? 2011 American Society of Civil Engineers.
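As a concrete illustration of method (3), first-mode-period scaling, the sketch below computes the scale factor that matches a record's 5%-damped spectral acceleration at the first-mode period to a target value, using a simple Newmark average-acceleration SDOF solver and a synthetic accelerogram; the record, target and period are placeholders, not data from the study.

```python
import numpy as np

def sdof_peak_accel(ag, dt, period, damping=0.05):
    # Peak pseudo-spectral acceleration of a damped SDOF oscillator driven by
    # ground acceleration ag, via the Newmark average-acceleration method.
    wn = 2 * np.pi / period
    m, c, k = 1.0, 2 * damping * wn, wn ** 2
    u = v = 0.0
    a = -ag[0]                                   # from m*a + c*v + k*u = -m*ag at t=0
    umax = 0.0
    kh = k + 2 * c / dt + 4 * m / dt ** 2        # effective stiffness (gamma=1/2, beta=1/4)
    for agi in ag[1:]:
        p = -m * agi + m * (4 * u / dt ** 2 + 4 * v / dt + a) + c * (2 * u / dt + v)
        unew = p / kh
        vnew = 2 * (unew - u) / dt - v
        anew = 4 * (unew - u) / dt ** 2 - 4 * v / dt - a
        u, v, a = unew, vnew, anew
        umax = max(umax, abs(u))
    return umax * wn ** 2                        # pseudo-acceleration Sa = wn^2 * Sd

def first_mode_scale_factor(ag, dt, t1, sa_target):
    # Scale factor so the record's Sa(T1) matches the target spectral value.
    return sa_target / sdof_peak_accel(ag, dt, t1)

dt = 0.01
t = np.arange(0, 10, dt)
record = 0.3 * 9.81 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.3 * t)   # synthetic accelerogram
print("scale factor:", round(first_mode_scale_factor(record, dt, t1=1.0, sa_target=0.4 * 9.81), 3))
```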
Weather data for simplified energy calculation methods. Volume IV. United States: WYEC data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olsen, A.R.; Moreno, S.; Deringer, J.
The objective of this report is to provide a source of weather data for direct use with a number of simplified energy calculation methods available today. Complete weather data for a number of cities in the United States are provided for use in the following methods: degree hour, modified degree hour, bin, modified bin, and variable degree day. This report contains sets of weather data for 23 cities using Weather Year for Energy Calculations (WYEC) source weather data. Considerable overlap is present in cities (21) covered by both the TRY and WYEC data. The weather data at each city has been summarized in a number of ways to provide differing levels of detail necessary for alternative simplified energy calculation methods. Weather variables summarized include dry bulb and wet bulb temperature, percent relative humidity, humidity ratio, wind speed, percent possible sunshine, percent diffuse solar radiation, total solar radiation on horizontal and vertical surfaces, and solar heat gain through standard DSA glass. Monthly and annual summaries, in some cases by time of day, are available. These summaries are produced in a series of nine computer generated tables.
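A small sketch of how the degree-hour and bin quantities tabulated in such a report are computed from hourly dry-bulb temperatures; the temperature series, balance point and bin width below are invented for illustration only.

```python
import numpy as np

# Hypothetical hourly dry-bulb temperatures for one year (deg F).
rng = np.random.default_rng(1)
hours = np.arange(8760)
tdb = 55 + 25 * np.sin(2 * np.pi * (hours / 8760 - 0.25)) + rng.normal(0, 5, 8760)

base = 65.0                                            # balance-point temperature, deg F
heating_degree_hours = np.clip(base - tdb, 0, None).sum()
cooling_degree_hours = np.clip(tdb - base, 0, None).sum()

# Bin method: hours of occurrence in 5 deg F dry-bulb bins.
edges = np.arange(-10, 111, 5)
bin_hours, _ = np.histogram(tdb, bins=edges)
for lo, hi, n in zip(edges[:-1], edges[1:], bin_hours):
    if n:
        print(f"{lo:4d}..{hi:<4d} F: {n:5d} h")
print(f"heating degree-hours (base {base:.0f} F): {heating_degree_hours:,.0f}")
print(f"cooling degree-hours (base {base:.0f} F): {cooling_degree_hours:,.0f}")
```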
DOE Office of Scientific and Technical Information (OSTI.GOV)
Honrubia-Escribano, A.; Jimenez-Buendia, F.; Molina-Garcia, A.
This paper presents the current status of simplified wind turbine models used for power system stability analysis. This work is based on the ongoing work being developed in IEC 61400-27. This international standard, for which a technical committee was convened in October 2009, is focused on defining generic (also known as simplified) simulation models for both wind turbines and wind power plants. The results of the paper provide an improved understanding of the usability of generic models to conduct power system simulations.
The integral line-beam method for gamma skyshine analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shultis, J.K.; Faw, R.E.; Bassett, M.S.
1991-03-01
This paper presents a refinement of a simplified method, based on line-beam response functions, for performing skyshine calculations for shielded and collimated gamma-ray sources. New coefficients for an empirical fit to the line-beam response function are provided and a prescription for making the response function continuous in energy and emission direction is introduced. For a shielded source, exponential attenuation and a buildup factor correction for scattered photons in the shield are used. Results of the new integral line-beam method of calculation are compared to a variety of benchmark experimental data and calculations and are found to give generally excellent agreement at a small fraction of the computational expense required by other skyshine methods.
Tearing-off method based on single carbon nanocoil for liquid surface tension measurement
NASA Astrophysics Data System (ADS)
Wang, Peng; Pan, Lujun; Deng, Chenghao; Li, Chengwei
2016-11-01
A single carbon nanocoil (CNC) is used as a highly sensitive mechanical sensor to measure the surface tension coefficients of deionized water and alcohol with the tearing-off method. The error can be constrained to within 3.8%. Conversely, the elastic spring constant of a CNC can be accurately measured using a liquid, with the error constrained to within 3.2%. Compared with traditional methods, the CNC serves as both the ring and the sensor at the same time, which may simplify the measurement device and reduce error; in addition, all measurements can be performed with a very low liquid dosage owing to the small size of the CNC.
NASA Astrophysics Data System (ADS)
Sánchez, M.; Oldenhof, M.; Freitez, J. A.; Mundim, K. C.; Ruette, F.
A systematic improvement of parametric quantum methods (PQM) is performed by considering: (a) a new application of parameterization procedure to PQMs and (b) novel parametric functionals based on properties of elementary parametric functionals (EPF) [Ruette et al., Int J Quantum Chem 2008, 108, 1831]. Parameterization was carried out by using the simplified generalized simulated annealing (SGSA) method in the CATIVIC program. This code has been parallelized and comparison with MOPAC/2007 (PM6) and MINDO/SR was performed for a set of molecules with C=C, C=H, and H=H bonds. Results showed better accuracy than MINDO/SR and MOPAC-2007 for a selected trial set of molecules.
NASA Astrophysics Data System (ADS)
Li, Yan; Wu, Mingwei; Du, Xinwei; Xu, Zhuoran; Gurusamy, Mohan; Yu, Changyuan; Kam, Pooi-Yuen
2018-02-01
A novel soft-decision-aided maximum likelihood (SDA-ML) carrier phase estimation method and its simplified version, the decision-aided and soft-decision-aided maximum likelihood (DA-SDA-ML) methods are tested in a nonlinear phase noise-dominant channel. The numerical performance results show that both the SDA-ML and DA-SDA-ML methods outperform the conventional DA-ML in systems with constant-amplitude modulation formats. In addition, modified algorithms based on constellation partitioning are proposed. With partitioning, the modified SDA-ML and DA-SDA-ML are shown to be useful for compensating the nonlinear phase noise in multi-level modulation systems.
Herpers, Matthias; Dintsios, Charalabos-Markos
2018-04-25
The decision matrix applied by the Institute for Quality and Efficiency in Health Care (IQWiG) for the quantification of added benefit within the early benefit assessment of new pharmaceuticals in Germany with its nine fields is quite complex and could be simplified. Furthermore, the method used by IQWiG is subject to manifold criticism: (1) it is implicitly weighting endpoints differently in its assessments favoring overall survival and, thereby, drug interventions in fatal diseases, (2) it is assuming that two pivotal trials are available when assessing the dossiers submitted by the pharmaceutical manufacturers, leading to far-reaching implications with respect to the quantification of added benefit, and, (3) it is basing the evaluation primarily on dichotomous endpoints and consequently leading to an information loss of usable evidence. To investigate if criticism is justified and to propose methodological adaptations. Analysis of the available dossiers up to the end of 2016 using statistical tests and multinomial logistic regression and simulations. It was shown that due to power losses, the method does not ensure that results are statistically valid and outcomes of the early benefit assessment may be compromised, though evidence on favoring overall survival remains unclear. Modifications, however, of the IQWiG method are possible to address the identified problems. By converging with the approach of approval authorities for confirmatory endpoints, the decision matrix could be simplified and the analysis method could be improved, to put the results on a more valid statistical basis.
Jiang, Chao; Yuan, Yuan; Yang, Guang; Jin, Yan; Liu, Libing; Zhao, Yuyang; Huang, Luqi
2016-10-21
Inaccurate labeling of materials used in herbal products may compromise the therapeutic efficacy and may pose a threat to medicinal safety. In this paper, a rapid (within 3 h), sensitive and visual colorimetric method for identifying substitutions in terminal market products was developed using cationic conjugated polymer-based fluorescence resonance energy transfer (CCP-based FRET). Chinese medicinal materials with similar morphology and chemical composition were clearly distinguished by the single-nucleotide polymorphism (SNP) genotyping method. Assays using CCP-based FRET technology showed a high frequency of adulterants in Lu-Rong (52.83%) and Chuan-Bei-Mu (67.8%) decoction pieces, and patented Chinese drugs (71.4%, 5/7) containing Chuan-Bei-Mu ingredients were detected in the terminal herbal market. In comparison with DNA sequencing, this protocol simplifies procedures by eliminating the cumbersome workups and sophisticated instruments, and only a trace amount of DNA is required. The CCP-based method is particularly attractive because it can detect adulterants in admixture samples with high sensitivity. Therefore, the CCP-based detection system shows great potential for routine terminal market checks and drug safety controls.
SiMA: A simplified migration assay for analyzing neutrophil migration.
Weckmann, Markus; Becker, Tim; Nissen, Gyde; Pech, Martin; Kopp, Matthias V
2017-07-01
In lung inflammation, neutrophils are the first leukocytes migrating to an inflammatory site, eliminating pathogens by multiple mechanisms. The term "migration" describes several stages of neutrophil movement to reach the site of inflammation, of which the passage of the interstitium and basal membrane of the airway is necessary to reach the site of bronchial inflammation. Currently, several methods exist (e.g., Boyden chamber, under-agarose assay, or microfluidic systems) to assess neutrophil mobility. However, these methods do not allow for parameterization at the single-cell level; that is, individual neutrophil pathway analysis is still considered challenging. This study sought to develop a simplified yet flexible method to monitor and quantify neutrophil chemotaxis by utilizing commercially available tissue culture hardware, simple video microscopic equipment and highly standardized tracking. A chemotaxis 3D µ-slide (IBIDI) was used with different chemoattractants [interleukin-8 (IL-8), fMLP, and leukotriene B4 (LTB4)] to attract neutrophils in different matrices such as fibronectin (FN) or human placental matrix. Migration was recorded for 60 min using phase contrast microscopy with an EVOS® FL Cell Imaging System. The images were normalized, and texture-based image segmentation was used to generate neutrophil trajectories. Based on this spatio-temporal information, a comprehensive parameter set describing neutrophil motility, including velocity, directness and chemotaxis, is extracted from each time series. To characterize chemotaxis, a sector analysis was employed, enabling quantification of the neutrophils' response to the chemoattractant. Using this hardware and software framework, we were able to identify typical migration profiles of the chemoattractants IL-8, fMLP, and LTB4, the effect of the matrices FN versus HEM, as well as the response to different medications (prednisolone). Additionally, a comparison of four asthmatic and three non-asthmatic patients gives a first hint of the capability of the SiMA assay in the context of migration-based diagnostics. Using SiMA we were able to identify typical migration profiles of the chemoattractants IL-8, fMLP, and LTB4, the effect of the matrices FN versus HEM, as well as the response to different medications; that is, prednisolone induced a change of direction of migrating neutrophils in FN, but no such effect was observed in human placental matrix. In addition, neutrophils of asthmatic individuals showed an increased proportion of cells migrating toward the vehicle. With SiMA we present a simplified yet flexible platform for cost-effective tracking and quantification of neutrophil migration. The introduced method is based on a simple microscopic video stage; standardized, commercially available, µ-fluidic migration chambers; and automated image analysis and track validation software. © 2017 International Society for Advancement of Cytometry.
Library based x-ray scatter correction for dedicated cone beam breast CT
Shi, Linxi; Karellas, Andrew; Zhu, Lei
2016-01-01
Purpose: The image quality of dedicated cone beam breast CT (CBBCT) is limited by substantial scatter contamination, resulting in cupping artifacts and contrast-loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose a library-based software approach to suppress scatter on CBBCT images with high efficiency, accuracy, and reliability. Methods: The authors precompute a scatter library on simplified breast models with different sizes using the geant4-based Monte Carlo (MC) toolkit. The breast is approximated as a semiellipsoid with homogeneous glandular/adipose tissue mixture. For scatter correction on real clinical data, the authors estimate the breast size from a first-pass breast CT reconstruction and then select the corresponding scatter distribution from the library. The selected scatter distribution from simplified breast models is spatially translated to match the projection data from the clinical scan and is subtracted from the measured projection for effective scatter correction. The method performance was evaluated using 15 sets of patient data, with a wide range of breast sizes representing about 95% of general population. Spatial nonuniformity (SNU) and contrast to signal deviation ratio (CDR) were used as metrics for evaluation. Results: Since the time-consuming MC simulation for library generation is precomputed, the authors’ method efficiently corrects for scatter with minimal processing time. Furthermore, the authors find that a scatter library on a simple breast model with only one input parameter, i.e., the breast diameter, sufficiently guarantees improvements in SNU and CDR. For the 15 clinical datasets, the authors’ method reduces the average SNU from 7.14% to 2.47% in coronal views and from 10.14% to 3.02% in sagittal views. On average, the CDR is improved by a factor of 1.49 in coronal views and 2.12 in sagittal views. Conclusions: The library-based scatter correction does not require increase in radiation dose or hardware modifications, and it improves over the existing methods on implementation simplicity and computational efficiency. As demonstrated through patient studies, the authors’ approach is effective and stable, and is therefore clinically attractive for CBBCT imaging. PMID:27487870
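A minimal sketch of the correction step described above, assuming a precomputed library keyed by model breast diameter and a detector-aligned shift; the library contents, diameters and shift handling here are placeholders, not the authors' Monte Carlo data or registration procedure.

```python
import numpy as np

def correct_projection(raw_projection, breast_diameter_cm, scatter_library, shift_px=(0, 0)):
    # Pick the precomputed scatter map whose model diameter is closest to the
    # estimated breast diameter, shift it to align with the measured projection,
    # and subtract it from the raw projection.
    nearest = min(scatter_library, key=lambda d: abs(d - breast_diameter_cm))
    scatter = np.roll(scatter_library[nearest], shift=shift_px, axis=(0, 1))
    corrected = raw_projection - scatter
    return np.clip(corrected, 0, None)      # keep the primary-signal estimate non-negative

# Toy usage with synthetic data: scatter_library maps diameter (cm) -> 2-D scatter map.
lib = {d: np.full((64, 64), 0.02 * d) for d in (8, 10, 12, 14, 16)}   # fake maps
proj = np.ones((64, 64)) * 0.5
print(correct_projection(proj, breast_diameter_cm=11.3, scatter_library=lib).mean())
```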
Traditional and modern plant breeding methods with examples in rice (Oryza sativa L.).
Breseghello, Flavio; Coelho, Alexandre Siqueira Guedes
2013-09-04
Plant breeding can be broadly defined as alterations caused in plants as a result of their use by humans, ranging from unintentional changes resulting from the advent of agriculture to the application of molecular tools for precision breeding. The vast diversity of breeding methods can be simplified into three categories: (i) plant breeding based on observed variation by selection of plants based on natural variants appearing in nature or within traditional varieties; (ii) plant breeding based on controlled mating by selection of plants presenting recombination of desirable genes from different parents; and (iii) plant breeding based on monitored recombination by selection of specific genes or marker profiles, using molecular tools for tracking within-genome variation. The continuous application of traditional breeding methods in a given species could lead to the narrowing of the gene pool from which cultivars are drawn, rendering crops vulnerable to biotic and abiotic stresses and hampering future progress. Several methods have been devised for introducing exotic variation into elite germplasm without undesirable effects. Cases in rice are given to illustrate the potential and limitations of different breeding approaches.
Review of qualitative approaches for the construction industry: designing a risk management toolbox.
Zalk, David M; Spee, Ton; Gillen, Matt; Lentz, Thomas J; Garrod, Andrew; Evans, Paul; Swuste, Paul
2011-06-01
This paper presents the framework and protocol design for a construction industry risk management toolbox. The construction industry needs a comprehensive, systematic approach to assess and control occupational risks. These risks span several professional health and safety disciplines, emphasized by multiple international occupational research agenda projects including: falls, electrocution, noise, silica, welding fumes, and musculoskeletal disorders. Yet, the International Social Security Association says, "whereas progress has been made in safety and health, the construction industry is still a high risk sector." Small- and medium-sized enterprises (SMEs) employ about 80% of the world's construction workers. In recent years a strategy for qualitative occupational risk management, known as Control Banding (CB) has gained international attention as a simplified approach for reducing work-related risks. CB groups hazards into stratified risk 'bands', identifying commensurate controls to reduce the level of risk and promote worker health and safety. We review these qualitative solutions-based approaches and identify strengths and weaknesses toward designing a simplified CB 'toolbox' approach for use by SMEs in construction trades. This toolbox design proposal includes international input on multidisciplinary approaches for performing a qualitative risk assessment determining a risk 'band' for a given project. Risk bands are used to identify the appropriate level of training to oversee construction work, leading to commensurate and appropriate control methods to perform the work safely. The Construction Toolbox presents a review-generated format to harness multiple solutions-based national programs and publications for controlling construction-related risks with simplified approaches across the occupational safety, health and hygiene professions.
A framework for performing workplace hazard and risk analysis: a participative ergonomics approach.
Morag, Ido; Luria, Gil
2013-01-01
Despite the unanimity among researchers about the centrality of workplace analysis based on participatory ergonomics (PE) as a basis for preventive interventions, there is still little agreement about the necessity of a theoretical framework for providing practical guidance. In an effort to develop a conceptual PE framework, the authors, focusing on 20 studies, found five primary dimensions for characterising an analytical structure: (1) extent of workforce involvement; (2) analysis duration; (3) diversity of reporter role types; (4) scope of analysis; and (5) supportive information system for analysis management. An ergonomics analysis carried out in a chemical manufacturing plant serves as a case study for evaluating the proposed framework. The study simultaneously demonstrates the five dimensions and evaluates their feasibility. The study showed that managerial leadership was fundamental to the successful implementation of the analysis, that all job holders should participate in analysing their own workplace, and that simplified reporting methods contributed to a desirable outcome. This paper seeks to clarify the scope of workplace ergonomics analysis by offering a theoretical and structured framework for providing practical advice and guidance. Essential to successfully implementing the analytical framework are managerial involvement, participation of all job holders and simplified reporting methods.
A Machine Learning Framework for Plan Payment Risk Adjustment.
Rose, Sherri
2016-12-01
To introduce cross-validation and a nonparametric machine learning framework for plan payment risk adjustment and then assess whether they have the potential to improve risk adjustment. 2011-2012 Truven MarketScan database. We compare the performance of multiple statistical approaches within a broad machine learning framework for estimation of risk adjustment formulas. Total annual expenditure was predicted using age, sex, geography, inpatient diagnoses, and hierarchical condition category variables. The methods included regression, penalized regression, decision trees, neural networks, and an ensemble super learner, all in concert with screening algorithms that reduce the set of variables considered. The performance of these methods was compared based on cross-validated R². Our results indicate that a simplified risk adjustment formula selected via this nonparametric framework maintains much of the efficiency of a traditional larger formula. The ensemble approach also outperformed classical regression and all other algorithms studied. The implementation of cross-validated machine learning techniques provides novel insight into risk adjustment estimation, possibly allowing for a simplified formula, thereby reducing incentives for increased coding intensity as well as the ability of insurers to "game" the system with aggressive diagnostic upcoding. © Health Research and Educational Trust.
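As a minimal sketch of the kind of comparison described in this abstract, the snippet below scores a few candidate risk-adjustment models by cross-validated R² with scikit-learn. The synthetic data, the feature count, and the specific model list are illustrative assumptions, not the paper's MarketScan pipeline or its super learner specification.

```python
# Sketch: compare candidate risk-adjustment formulas by cross-validated R^2.
import numpy as np
from sklearn.linear_model import LinearRegression, LassoCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 20))                     # stand-in for age/sex/geography/HCC indicators
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=2.0, size=n)  # annual-spend proxy

candidates = {
    "ols": LinearRegression(),                   # traditional larger formula analogue
    "lasso": LassoCV(cv=5),                      # penalized regression / variable screening
    "forest": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in candidates.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name:6s}  CV R^2 = {r2.mean():.3f} +/- {r2.std():.3f}")
```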
Long-term evaluation of orbital dynamics in the Sun-planet system considering axial-tilt
NASA Astrophysics Data System (ADS)
Bakhtiari, Majid; Daneshjou, Kamran
2018-05-01
In this paper, the axial-tilt (obliquity) effect of planets on the motion of planetary orbiters in prolonged space missions has been investigated in the presence of the Sun's gravity. The proposed model is based on non-simplified perturbed dynamic equations of planetary orbiter motion. From a new point of view, in this work, the dynamic equations regarding a disturbing body in an elliptic inclined three-dimensional orbit are derived. The accuracy of this non-simplified method is validated against the dual-averaged method employed on a generalized Earth-Moon system. It is shown that the short-time oscillations neglected in the dual-averaged technique can accumulate and grow into remarkable errors over the prolonged evolution. After validation, the effects of the planet's axial-tilt on eccentricity, inclination and right ascension of the ascending node of the orbiter are investigated. Moreover, a generalized model is provided to study the effects of third-body inclination and eccentricity on orbit characteristics. It is shown that the planet's axial-tilt is the key to facilitating some significant changes in orbital elements in long-term missions, and that short-time oscillations must be considered for accurate prolonged evaluation.
NASA Astrophysics Data System (ADS)
Vereecken, Luc; Peeters, Jozef
2003-09-01
The rigorous implementation of transition state theory (TST) for a reaction system with multiple reactant rotamers and multiple transition state conformers is discussed by way of a statistical rate analysis of the 1,5-H-shift in 1-butoxy radicals, a prototype reaction for the important class of H-shift reactions in atmospheric chemistry. Several approaches for deriving a multirotamer TST expression are treated: oscillator versus (hindered) internal rotor models; distinguishable versus indistinguishable atoms; and direct count methods versus degeneracy factors calculated by (simplified) direct count methods or from symmetry numbers and numbers of enantiomers, where applicable. It is shown that the various treatments are fully consistent, even if the TST expressions themselves appear different. The 1-butoxy H-shift reaction is characterized quantum chemically using B3LYP-DFT; the performance of this level of theory is compared to other methods. Rigorous application of the multirotamer TST methodology in a harmonic oscillator approximation based on these data yields a rate coefficient of k(298 K, 1 atm) = 1.4×10⁵ s⁻¹ and an Arrhenius expression k(T, 1 atm) = 1.43×10¹¹ exp(−8.17 kcal mol⁻¹/RT) s⁻¹, both of which closely match the experimental recommendations in the literature. The T-dependence is substantially influenced by the multirotamer treatment, as well as by the tunneling and fall-off corrections. The present results are compared to those of simplified TST calculations based solely on the properties of the lowest energy 1-butoxy rotamer.
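For readers who want to check the numbers, the short sketch below simply evaluates the Arrhenius expression quoted in the abstract at 298 K; the gas constant value is the only quantity introduced here, and the result should land on the order of 1×10⁵ s⁻¹, consistent with the quoted rate coefficient.

```python
# Sketch: evaluate k(T, 1 atm) = 1.43e11 * exp(-8.17 kcal mol^-1 / RT) s^-1 at 298 K.
import math

A = 1.43e11          # s^-1, pre-exponential factor from the abstract
Ea = 8.17            # kcal/mol, activation energy from the abstract
R = 1.987204e-3      # kcal/(mol K), gas constant

def k(T):
    return A * math.exp(-Ea / (R * T))

print(f"k(298 K) ~ {k(298.0):.2e} s^-1")   # ~1.4-1.5e5 s^-1, in line with the abstract
```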
L'her, Erwan; Martin-Babau, Jérôme; Lellouche, François
2016-12-01
Knowledge of patients' height is essential for daily practice in the intensive care unit. However, actual height measurements are unavailable on a daily routine in the ICU and measured height in the supine position and/or visual estimates may lack consistency. Clinicians do need simple and rapid methods to estimate the patients' height, especially in short height and/or obese patients. The objectives of the study were to evaluate several anthropometric formulas for height estimation on healthy volunteers and to test whether several of these estimates will help tidal volume setting in ICU patients. This was a prospective, observational study in a medical intensive care unit of a university hospital. During the first phase of the study, eight limb measurements were performed on 60 healthy volunteers and 18 height estimation formulas were tested. During the second phase, four height estimates were performed on 60 consecutive ICU patients under mechanical ventilation. In the 60 healthy volunteers, actual height was well correlated with the gold standard, measured height in the erect position. Correlation was low between actual and calculated height, using the hand's length and width, the index, or the foot equations. The Chumlea method and its simplified version, performed in the supine position, provided adequate estimates. In the 60 ICU patients, calculated height using the simplified Chumlea method was well correlated with measured height (r = 0.78; ∂ < 1 %). Ulna and tibia estimates also provided valuable estimates. All these height estimates allowed calculating IBW or PBW that were significantly different from the patients' actual weight on admission. In most cases, tidal volume set according to these estimates was lower than what would have been set using the actual weight. When actual height is unavailable in ICU patients undergoing mechanical ventilation, alternative anthropometric methods to obtain patient's height based on lower leg and on forearm measurements could be useful to facilitate the application of protective mechanical ventilation in a Caucasian ICU population. The simplified Chumlea method is easy to achieve in a bed-ridden patient and provides accurate height estimates, with a low bias.
NASA Astrophysics Data System (ADS)
Agata, Ryoichiro; Ichimura, Tsuyoshi; Hori, Takane; Hirahara, Kazuro; Hashimoto, Chihiro; Hori, Muneo
2018-04-01
The simultaneous estimation of the asthenosphere's viscosity and coseismic slip/afterslip is expected to largely improve the consistency of the estimation results with crustal deformation data collected at widely spread observation points, compared to estimations of slips only. Such an estimate can be formulated as a non-linear inverse problem for the material property of viscosity and an input force equivalent to fault slips, based on large-scale finite-element (FE) modeling of crustal deformation, in which the number of degrees of freedom is on the order of 10⁹. We formulated and developed a computationally efficient adjoint-based estimation method for this inverse problem, together with a fast and scalable FE solver for the associated forward and adjoint problems. In a numerical experiment that imitates the 2011 Tohoku-Oki earthquake, the advantage of the proposed method is confirmed by comparing the estimated results with those obtained using simplified estimation methods. The computational cost required for the optimization shows that the proposed method enabled the targeted estimation to be completed with a moderate amount of computational resources.
Research on key technology of yacht positioning based on binocular parallax
NASA Astrophysics Data System (ADS)
Wang, Wei; Wei, Ping; Liu, Zengzhi
2016-10-01
Yachting has become a fashionable form of entertainment. However, obtaining the precise location of a yacht docked at a port is one of the concerns of a yacht manager. To deal with this issue, we adopt a positioning method based on the principle of binocular parallax and background difference. Binocular parallax uses two cameras to obtain a multi-dimensional perspective of the yacht based on the geometric principle of imaging. To simplify the yacht localization problem, we install an LED light indicator as the key point on the yacht and let it flash at a certain frequency during both day and night. Once the distance between the LED and the cameras is obtained, locating the yacht is straightforward. Compared with other traditional positioning methods, this method is simpler and easier to implement. In this paper, we study the yacht positioning method using the LED indicator. A simulation experiment is carried out for a yacht model at a distance of 3 meters. The experimental result shows that our method is feasible and easy to implement, with a positioning error of about 15%.
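A minimal sketch of the depth-from-disparity relation that underlies binocular parallax ranging is given below: Z = f·B/d, with focal length f (in pixels), camera baseline B, and disparity d. The focal length, baseline, and disparity values are illustrative assumptions rather than the paper's calibration.

```python
# Sketch: distance to a target (e.g., the flashing LED) from stereo disparity.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Return range along the optical axis, Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example (assumed numbers): 1200 px focal length, 0.5 m baseline, 200 px disparity -> 3.0 m
print(depth_from_disparity(1200.0, 0.5, 200.0))
```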
Smith, Joseph M.; Mather, Martha E.
2012-01-01
Ecological indicators are science-based tools used to assess how human activities have impacted environmental resources. For monitoring and environmental assessment, existing species assemblage data can be used to make these comparisons through time or across sites. An impediment to using assemblage data, however, is that these data are complex and need to be simplified in an ecologically meaningful way. Because multivariate statistics are mathematical relationships, statistical groupings may not make ecological sense and will not have utility as indicators. Our goal was to define a process to select defensible and ecologically interpretable statistical simplifications of assemblage data in which researchers and managers can have confidence. For this, we chose a suite of statistical methods, compared the groupings that resulted from these analyses, identified convergence among groupings, then we interpreted the groupings using species and ecological guilds. When we tested this approach using a statewide stream fish dataset, not all statistical methods worked equally well. For our dataset, logistic regression (Log), detrended correspondence analysis (DCA), cluster analysis (CL), and non-metric multidimensional scaling (NMDS) provided consistent, simplified output. Specifically, the Log, DCA, CL-1, and NMDS-1 groupings were ≥60% similar to each other, overlapped with the fluvial-specialist ecological guild, and contained a common subset of species. Groupings based on number of species (e.g., Log, DCA, CL and NMDS) outperformed groupings based on abundance [e.g., principal components analysis (PCA) and Poisson regression]. Although the specific methods that worked on our test dataset have generality, here we are advocating a process (e.g., identifying convergent groupings with redundant species composition that are ecologically interpretable) rather than the automatic use of any single statistical tool. We summarize this process in step-by-step guidance for the future use of these commonly available ecological and statistical methods in preparing assemblage data for use in ecological indicators.
Curves showing column strength of steel and duralumin tubing
NASA Technical Reports Server (NTRS)
Ross, Orrin E
1929-01-01
Given here are a set of column strength curves that are intended to simplify the method of determining the size of struts in an airplane structure when the load in the member is known. The curves will also simplify the checking of the strength of a strut if the size and length are known. With these curves, no computations are necessary, as in the case of the old-fashioned method of strut design. The process is so simple that draftsmen or others who are not entirely familiar with mechanics can check the strength of a strut without much danger of error.
Asymptotic approximations for pure bending of thin cylindrical shells
NASA Astrophysics Data System (ADS)
Coman, Ciprian D.
2017-08-01
A simplified partial wrinkling scenario for in-plane bending of thin cylindrical shells is explored by using several asymptotic strategies. The eighth-order boundary eigenvalue problem investigated here originates in the Donnell-Mushtari-Vlasov shallow shell theory coupled with a linear membrane pre-bifurcation state. It is shown that the corresponding neutral stability curve is amenable to a detailed asymptotic analysis based on the method of multiple scales. This is further complemented by an alternative WKB approximation that provides comparable information with significantly less effort.
Quantum chemical calculation of the equilibrium structures of small metal atom clusters
NASA Technical Reports Server (NTRS)
Kahn, L. R.
1982-01-01
Metal atom clusters are studied based on the application of ab initio quantum mechanical approaches. Because these large 'molecular' systems pose special practical computational problems in the application of the quantum mechanical methods, there is a special need to find simplifying techniques that do not compromise the reliability of the calculations. Research is therefore directed towards various aspects of the implementation of the effective core potential technique for the removal of the metal atom core electrons from the calculations.
2016-08-23
Hybrid finite element / finite volume based CaMEL shallow water flow solvers have been successfully extended to study wave ... effects on ice floes in a simplified 10 sq-km ocean domain. Our solver combines the merits of both the finite element and finite volume methods and ... Keywords: sea ice dynamics, shallow water, finite element, finite volume.
[Investigation of the safety of microbial biotechnological products and their hygienic regulation].
Omel'ianets', T H; Kovalenko, N K; Holovach, T M
2008-01-01
The peculiarities of the influence of microbial preparations, based on microorganisms of different taxonomic groups, on warm-blooded organisms are considered; these must be taken into account when developing the strategy for the toxico-hygienic study of such preparations and when substantiating hygienic standards for industrial facilities and the environment. The possibility of simplifying the methodical scheme for the toxicological assessment and hygienic regulation of microbial preparations based on soil nitrogen-fixing microorganisms is discussed.
Heinrich, Andreas; Teichgräber, Ulf K; Güttler, Felix V
2015-12-01
The standard ASTM F2119 describes a test method for measuring the size of a susceptibility artifact based on the example of a passive implant. A pixel in an image is considered to be a part of an image artifact if the intensity is changed by at least 30% in the presence of a test object, compared to a reference image in which the test object is absent (reference value). The aim of this paper is to simplify and accelerate the test method using a histogram-based reference value. Four test objects were scanned parallel and perpendicular to the main magnetic field, and the largest susceptibility artifacts were measured using two methods of reference value determination (reference image-based and histogram-based reference value). The results between both methods were compared using the Mann-Whitney U-test. The difference between both reference values was 42.35 ± 23.66. The difference of artifact size was 0.64 ± 0.69 mm. The artifact sizes of both methods did not show significant differences; the p-value of the Mann-Whitney U-test was between 0.710 and 0.521. A standard-conform method for a rapid, objective, and reproducible evaluation of susceptibility artifacts could be implemented. The result of the histogram-based method does not significantly differ from the ASTM-conform method.
Order Matters: Sequencing Scale-Realistic Versus Simplified Models to Improve Science Learning
NASA Astrophysics Data System (ADS)
Chen, Chen; Schneps, Matthew H.; Sonnert, Gerhard
2016-10-01
Teachers choosing between different models to facilitate students' understanding of an abstract system must decide whether to adopt a model that is simplified and striking or one that is realistic and complex. Only recently have instructional technologies enabled teachers and learners to change presentations swiftly and to provide for learning based on multiple models, thus giving rise to questions about the order of presentation. Using disjoint individual growth modeling to examine the learning of astronomical concepts using a simulation of the solar system on tablets for 152 high school students (age 15), the authors detect both a model effect and an order effect in the use of the Orrery, a simplified model that exaggerates the scale relationships, and the True-to-scale, a proportional model that more accurately represents the realistic scale relationships. Specifically, earlier exposure to the simplified model resulted in diminution of the conceptual gain from the subsequent realistic model, but the realistic model did not impede learning from the following simplified model.
FY16 Status Report on Development of Integrated EPP and SMT Design Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jetter, R. I.; Sham, T. -L.; Wang, Y.
2016-08-01
The goal of the Elastic-Perfectly Plastic (EPP) combined integrated creep-fatigue damage evaluation approach is to incorporate a Simplified Model Test (SMT) data based approach for creep-fatigue damage evaluation into the EPP methodology to avoid the separate evaluation of creep and fatigue damage and eliminate the requirement for stress classification in current methods, thus greatly simplifying the evaluation of elevated temperature cyclic service. The EPP methodology is based on the idea that creep damage and strain accumulation can be bounded by a properly chosen “pseudo” yield strength used in an elastic-perfectly plastic analysis, thus avoiding the need for stress classification. The original SMT approach is based on the use of elastic analysis. The experimental data, cycles to failure, is correlated using the elastically calculated strain range in the test specimen, and the corresponding component strain is also calculated elastically. The advantage of this approach is that it is no longer necessary to use the damage interaction, or D-diagram, because the damage due to the combined effects of creep and fatigue is accounted for in the test data by means of a specimen that is designed to replicate or bound the stress and strain redistribution that occurs in actual components when loaded in the creep regime. The reference approach to combining the two methodologies and the corresponding uncertainties and validation plans are presented. Results from recent key feature tests are discussed to illustrate the applicability of the EPP methodology and the behavior of materials at elevated temperature when undergoing stress and strain redistribution due to plasticity and creep.
Kopelman, Naama M; Mayzel, Jonathan; Jakobsson, Mattias; Rosenberg, Noah A; Mayrose, Itay
2015-09-01
The identification of the genetic structure of populations from multilocus genotype data has become a central component of modern population-genetic data analysis. Application of model-based clustering programs often entails a number of steps, in which the user considers different modelling assumptions, compares results across different predetermined values of the number of assumed clusters (a parameter typically denoted K), examines multiple independent runs for each fixed value of K, and distinguishes among runs belonging to substantially distinct clustering solutions. Here, we present Clumpak (Cluster Markov Packager Across K), a method that automates the postprocessing of results of model-based population structure analyses. For analysing multiple independent runs at a single K value, Clumpak identifies sets of highly similar runs, separating distinct groups of runs that represent distinct modes in the space of possible solutions. This procedure, which generates a consensus solution for each distinct mode, is performed by the use of a Markov clustering algorithm that relies on a similarity matrix between replicate runs, as computed by the software Clumpp. Next, Clumpak identifies an optimal alignment of inferred clusters across different values of K, extending a similar approach implemented for a fixed K in Clumpp and simplifying the comparison of clustering results across different K values. Clumpak incorporates additional features, such as implementations of methods for choosing K and comparing solutions obtained by different programs, models, or data subsets. Clumpak, available at http://clumpak.tau.ac.il, simplifies the use of model-based analyses of population structure in population genetics and molecular ecology. © 2015 John Wiley & Sons Ltd.
Probabilistic Analysis for Comparing Fatigue Data Based on Johnson-Weibull Parameters
NASA Technical Reports Server (NTRS)
Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.
2013-01-01
Leonard Johnson published a methodology for establishing the confidence that two populations of data are different. Johnson's methodology is dependent on limited combinations of test parameters (Weibull slope, mean life ratio, and degrees of freedom) and a set of complex mathematical equations. In this report, a simplified algebraic equation for confidence numbers is derived based on the original work of Johnson. The confidence numbers calculated with this equation are compared to those obtained graphically by Johnson. Using the ratios of mean life, the resultant values of confidence numbers at the 99 percent level deviate less than 1 percent from those of Johnson. At a 90 percent confidence level, the calculated values differ by between +2 and 4 percent. The simplified equation is used to rank the experimental lives of three aluminum alloys (AL 2024, AL 6061, and AL 7075), each tested at three stress levels in rotating beam fatigue, analyzed using the Johnson-Weibull method, and compared to the ASTM Standard (E739-91) method of comparison. The ASTM Standard did not statistically distinguish between AL 6061 and AL 7075. However, it is possible to rank the fatigue lives of different materials with a reasonable degree of statistical certainty based on combined confidence numbers using the Johnson-Weibull analysis. AL 2024 was found to have the longest fatigue life, followed by AL 7075, and then AL 6061. The ASTM Standard and the Johnson-Weibull analysis result in the same stress-life exponent p for each of the three aluminum alloys at the median, or L50, lives.
Müsken, Mathias; Di Fiore, Stefano; Römling, Ute; Häussler, Susanne
2010-08-01
A major reason for bacterial persistence during chronic infections is the survival of bacteria within biofilm structures, which protect cells from environmental stresses, host immune responses and antimicrobial therapy. Thus, there is concern that laboratory methods developed to measure the antibiotic susceptibility of planktonic bacteria may not be relevant to chronic biofilm infections, and it has been suggested that alternative methods should test antibiotic susceptibility within a biofilm. In this paper, we describe a fast and reliable protocol for using 96-well microtiter plates for the formation of Pseudomonas aeruginosa biofilms; the method is easily adaptable for antimicrobial susceptibility testing. This method is based on bacterial viability staining in combination with automated confocal laser scanning microscopy. The procedure simplifies qualitative and quantitative evaluation of biofilms and has proven to be effective for standardized determination of antibiotic efficiency on P. aeruginosa biofilms. The protocol can be performed within approximately 60 h.
Estimation of Land Surface Temperature from GCOM-W1 AMSR2 Data over the Chinese Landmass
NASA Astrophysics Data System (ADS)
Zhou, Ji; Dai, Fengnan; Zhang, Xiaodong
2016-04-01
As one of the most important parameters at the interface between the earth's surface and the atmosphere, land surface temperature (LST) plays a crucial role in many fields, such as climate change monitoring and hydrological modeling. Satellite remote sensing provides the unique possibility of observing the LST of large regions at diverse spatial and temporal scales. Compared with thermal infrared (TIR) remote sensing, passive microwave (PW) remote sensing is better at overcoming the influence of clouds; thus, it can be used to improve the temporal resolution of current satellite TIR LST. However, most current methods for estimating LST from PW remote sensing are empirical and generalize poorly. In this study, a semi-empirical method is proposed to estimate LST from observations of the Advanced Microwave Scanning Radiometer 2 (AMSR2) on board the Global Change Observation Mission 1st-WATER "SHIZUKU" satellite (GCOM-W1). The method is based on the PW radiative transfer equation, which is simplified based on (1) the linear relationship between the emissivities of the horizontal and vertical polarization channels at the same frequency and (2) the significant relationship between atmospheric parameters and the atmospheric water vapor content. An iteration approach is used to best fit the pixel-based coefficients in the simplified radiative transfer equation of the horizontal and vertical polarization channels at each frequency. An integration approach is then proposed to generate the ensemble estimate from the estimates of multiple frequencies for different land cover types. This method is trained with AMSR2 brightness temperatures and MODIS LST in 2013 over the entire Chinese landmass and then tested with the data for 2014. Validation based on in situ LSTs measured in northwestern China demonstrates that the proposed method is more accurate than the polarization radiation method, with a root-mean-squared error of 3 K. Although the proposed method is applied to AMSR2 data, it can readily be extended to other satellite PW sensors, such as the Advanced Microwave Scanning Radiometer for EOS (AMSR-E) on board the Aqua satellite and the Special Sensor Microwave/Imager (SSM/I) on board Defense Meteorological Satellite Program (DMSP) satellites. It would be beneficial in providing LST to applications at continental and global scales.
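As a schematic stand-in for the calibration step, the sketch below fits per-frequency coefficients linking H/V brightness temperatures to a reference LST by least squares and reuses them for prediction. The linear form and the synthetic data are assumptions; the paper iteratively fits pixel-based coefficients of a simplified radiative transfer equation against MODIS LST.

```python
# Sketch: fit LST ~ a*Tb_H + b*Tb_V + c per frequency, then predict and score RMSE.
import numpy as np

rng = np.random.default_rng(2)
n = 500
tb_h = rng.uniform(220, 280, n)                                # H-polarization Tb, K
tb_v = tb_h + rng.uniform(5, 25, n)                            # V-polarization Tb, K
lst_ref = 0.6 * tb_v + 0.4 * tb_h + 5 + rng.normal(0, 2, n)    # reference LST (MODIS proxy), K

A = np.column_stack([tb_h, tb_v, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, lst_ref, rcond=None)
lst_pred = A @ coef
rmse = np.sqrt(np.mean((lst_pred - lst_ref) ** 2))
print("fitted coefficients:", np.round(coef, 3), " RMSE:", round(float(rmse), 2), "K")
```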
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana, Scott; Van Dam, Jeroen J; Damiani, Rick R
As part of an ongoing effort to improve the modeling and prediction of small wind turbine dynamics, the National Renewable Energy Laboratory (NREL) tested a small horizontal-axis wind turbine in the field at the National Wind Technology Center. The test turbine was a 2.1-kW downwind machine mounted on an 18-m multi-section fiberglass composite tower. The tower was instrumented and monitored for approximately 6 months. The collected data were analyzed to assess the turbine and tower loads and to further validate the simplified loads equations from the International Electrotechnical Commission (IEC) 61400-2 design standard. Field-measured loads were also compared to the output of an aeroelastic model of the turbine. In particular, we compared fatigue loads as measured in the field, predicted by the aeroelastic model, and calculated using the simplified design equations. Ultimate loads at the tower base were assessed using both the simplified design equations and the aeroelastic model output. The simplified design equations in IEC 61400-2 do not accurately model fatigue loads, and the limitations of these equations are discussed.
A Simplified Approach for the Rapid Generation of Transient Heat-Shield Environments
NASA Technical Reports Server (NTRS)
Wurster, Kathryn E.; Zoby, E. Vincent; Mills, Janelle C.; Kamhawi, Hilmi
2007-01-01
A simplified approach has been developed whereby transient entry heating environments are reliably predicted based upon a limited set of benchmark radiative and convective solutions. Heating, pressure and shear-stress levels, non-dimensionalized by an appropriate parameter at each benchmark condition are applied throughout the entry profile. This approach was shown to be valid based on the observation that the fully catalytic, laminar distributions examined were relatively insensitive to altitude as well as velocity throughout the regime of significant heating. In order to establish a best prediction by which to judge the results that can be obtained using a very limited benchmark set, predictions based on a series of benchmark cases along a trajectory are used. Solutions which rely only on the limited benchmark set, ideally in the neighborhood of peak heating, are compared against the resultant transient heating rates and total heat loads from the best prediction. Predictions based on using two or fewer benchmark cases at or near the trajectory peak heating condition, yielded results to within 5-10 percent of the best predictions. Thus, the method provides transient heating environments over the heat-shield face with sufficient resolution and accuracy for thermal protection system design and also offers a significant capability to perform rapid trade studies such as the effect of different trajectories, atmospheres, or trim angle of attack, on convective and radiative heating rates and loads, pressure, and shear-stress levels.
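A hedged sketch of the scaling idea is given below: a heating distribution from a single benchmark solution, non-dimensionalized by a reference (stagnation-point) quantity, is rescaled along the trajectory. The sqrt(rho)·V³ stagnation-heating proxy and all numbers are assumptions for illustration, not the paper's benchmark radiative/convective solutions; the proportionality constant cancels in the ratio.

```python
# Sketch: rescale a benchmark heat-shield heating distribution along a trajectory.
import numpy as np

def q_scale(rho, v, rho_ref, v_ref):
    """Ratio of a sqrt(rho)*V^3 stagnation-heating proxy to its benchmark value
    (the proportionality constant cancels; the proxy itself is an assumption)."""
    return np.sqrt(rho / rho_ref) * (v / v_ref) ** 3

# Benchmark solution near peak heating (illustrative numbers)
rho_b, v_b = 3.0e-4, 7000.0                      # kg/m^3, m/s
q_benchmark = np.array([120.0, 80.0, 45.0])      # W/cm^2 at three heat-shield points

# Transient environment: rescale the benchmark distribution at each trajectory point
trajectory = [(1.0e-5, 7500.0), (3.0e-4, 7000.0), (2.0e-3, 5000.0)]
for rho, v in trajectory:
    print(np.round(q_benchmark * q_scale(rho, v, rho_b, v_b), 1))
```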
NASA Astrophysics Data System (ADS)
Jaradat, H. M.; Syam, Muhammed; Jaradat, M. M. M.; Mustafa, Zead; Moman, S.
2018-03-01
In this paper, we investigate the multiple soliton solutions and multiple singular soliton solutions of a class of fifth-order nonlinear evolution equations with t-dependent variable coefficients using the simplified bilinear method, based on a transformation method combined with the Hirota bilinear sense. In addition, we present an analysis of some parameters such as the soliton amplitude and the characteristic line. Several equations in the literature, such as the Caudrey-Dodd-Gibbon and Sawada-Kotera equations, are special cases of the class we discuss. Comparisons with several methods in the literature, such as the Helmholtz solution of the inverse variational problem, the rational exponential function method, the tanh method, the homotopy perturbation method, the exp-function method, and the coth method, are made. From these comparisons, we conclude that the proposed method is efficient and our solutions are correct. It is worth mentioning that the proposed method can solve many physical problems.
Solar cell circuit and method for manufacturing solar cells
NASA Technical Reports Server (NTRS)
Mardesich, Nick (Inventor)
2010-01-01
The invention is a novel manufacturing method for making multi-junction solar cell circuits that addresses current problems associated with such circuits by allowing the formation of integral diodes in the cells and allowing a large number of circuits to readily be placed on a single silicon wafer substrate. The standard Ge wafer used as the base for multi-junction solar cells is replaced with a thinner layer of Ge or a III-V semiconductor material on a silicon/silicon dioxide substrate. This allows high-voltage cells with multiple multi-junction circuits to be manufactured on a single wafer, resulting in less array assembly mass and simplified power management.
Video Vectorization via Tetrahedral Remeshing.
Wang, Chuan; Zhu, Jie; Guo, Yanwen; Wang, Wenping
2017-02-09
We present a video vectorization method that generates a video in vector representation from an input video in raster representation. A vector-based video representation offers the benefits of vector graphics, such as compactness and scalability. The vector video we generate is represented by a simplified tetrahedral control mesh over the spatial-temporal video volume, with color attributes defined at the mesh vertices. We present novel techniques for simplification and subdivision of a tetrahedral mesh to achieve high simplification ratio while preserving features and ensuring color fidelity. From an input raster video, our method is capable of generating a compact video in vector representation that allows a faithful reconstruction with low reconstruction errors.
Tabletop computed lighting for practical digital photography.
Mohan, Ankit; Bailey, Reynold; Waite, Jonathan; Tumblin, Jack; Grimm, Cindy; Bodenheimer, Bobby
2007-01-01
We apply simplified image-based lighting methods to reduce the equipment, cost, time, and specialized skills required for high-quality photographic lighting of desktop-sized static objects such as museum artifacts. We place the object and a computer-steered moving-head spotlight inside a simple foam-core enclosure and use a camera to record photos as the light scans the box interior. Optimization, guided by interactive user sketching, selects a small set of these photos whose weighted sum best matches the user-defined target sketch. Unlike previous image-based relighting efforts, our method requires only a single area light source, yet it can achieve high-resolution light positioning to avoid multiple sharp shadows. A reduced version uses only a handheld light and may be suitable for battery-powered field photography equipment that fits into a backpack.
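The sketch below illustrates the photo-selection step in the abstract: finding non-negative weights for the basis photos whose weighted sum best matches a user-supplied target. Using scipy's non-negative least squares is an assumption about the optimizer; the paper's method additionally uses interactive sketching and selects a small subset of photos.

```python
# Sketch: non-negative weights on scanned-light photos to match a target image.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
n_pixels, n_photos = 64 * 64, 40
photos = rng.random((n_pixels, n_photos))        # each column: one photo as light scans the box
true_w = rng.random(n_photos) * (rng.random(n_photos) > 0.8)
target = photos @ true_w                         # stand-in for the user's target sketch

weights, residual = nnls(photos, target)
print("photos with non-zero weight:", int((weights > 1e-6).sum()),
      "residual:", round(float(residual), 4))
```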
Three-dimensional information hierarchical encryption based on computer-generated holograms
NASA Astrophysics Data System (ADS)
Kong, Dezhao; Shen, Xueju; Cao, Liangcai; Zhang, Hao; Zong, Song; Jin, Guofan
2016-12-01
A novel approach for encrypting three-dimensional (3-D) scene information hierarchically based on computer-generated holograms (CGHs) is proposed. The CGHs of the layer-oriented 3-D scene information are produced by angular-spectrum propagation algorithm at different depths. All the CGHs are then modulated by different chaotic random phase masks generated by the logistic map. Hierarchical encryption encoding is applied when all the CGHs are accumulated one by one, and the reconstructed volume of the 3-D scene information depends on permissions of different users. The chaotic random phase masks could be encoded into several parameters of the chaotic sequences to simplify the transmission and preservation of the keys. Optical experiments verify the proposed method and numerical simulations show the high key sensitivity, high security, and application flexibility of the method.
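Two building blocks named in this abstract are sketched below: a chaotic random phase mask generated by the logistic map, and angular-spectrum propagation of a field over a distance z. The parameter values (logistic-map seed and r, wavelength, pixel pitch, propagation distance) are assumptions for illustration, not the paper's settings.

```python
# Sketch: logistic-map phase mask + angular-spectrum propagation of one layer.
import numpy as np

def logistic_phase_mask(n, seed=0.4, r=3.99, burn_in=100):
    """Phase mask exp(i*2*pi*x_k) from the logistic map x_{k+1} = r*x_k*(1 - x_k)."""
    x, vals = seed, np.empty(n * n)
    for k in range(burn_in + n * n):
        x = r * x * (1.0 - x)
        if k >= burn_in:
            vals[k - burn_in] = x
    return np.exp(1j * 2 * np.pi * vals.reshape(n, n))

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a sampled complex field by distance z (evanescent components dropped)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.where(arg > 0,
                 np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0))),
                 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

mask = logistic_phase_mask(256)
layer = np.ones((256, 256)) * mask               # one modulated object layer (illustrative)
print(np.abs(angular_spectrum(layer, 532e-9, 8e-6, 0.05)).mean())
```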
The Van Hiele geometry thinking levels of mild mental retardation students
NASA Astrophysics Data System (ADS)
Shomad, Z. A.; Kusmayadi, T. A.; Riyadi
2017-12-01
This research investigates the geometry thinking levels of students with mild mental retardation, focusing on the geometry thinking levels defined by the Van Hiele theory. The study uses qualitative methods with a case study strategy. Data were obtained from observations and test results. The subjects were 12 students with mild mental retardation. The results show that the abilities of these students differ from one another, but that they share the same level of geometric thinking: their geometry thinking was identified as level 1 of the Van Hiele theory. Knowing the geometry thinking level of these students helps teachers select appropriate learning methods, choose materials in accordance with the students' ability, and modify material to follow the students' geometry thinking level.
User-Centered Design for Psychosocial Intervention Development and Implementation
Lyon, Aaron R.; Koerner, Kelly
2018-01-01
The current paper articulates how common difficulties encountered when attempting to implement or scale-up evidence-based treatments are exacerbated by fundamental design problems, which may be addressed by a set of principles and methods drawn from the contemporary field of user-centered design. User-centered design is an approach to product development that grounds the process in information collected about the individuals and settings where products will ultimately be used. To demonstrate the utility of this perspective, we present four design concepts and methods: (a) clear identification of end users and their needs, (b) prototyping/rapid iteration, (c) simplifying existing intervention parameters/procedures, and (d) exploiting natural constraints. We conclude with a brief design-focused research agenda for the developers and implementers of evidence-based treatments. PMID:29456295
Geometric Model of Induction Heating Process of Iron-Based Sintered Materials
NASA Astrophysics Data System (ADS)
Semagina, Yu V.; Egorova, M. A.
2018-03-01
The article studies the issue of building multivariable dependencies based on experimental data. A constructive method for solving the issue is presented in the form of equations of (n−1)-surface compartments of the extended Euclidean space E+n. The dimension of the space is taken to be equal to the sum of the number of parameters and factors of the model of the system being studied. The basis for building multivariable dependencies is the generalization to n-space of the approach used for surface compartments of 3D space. The surface is designed on the basis of the kinematic method, moving one geometric object along a certain trajectory. The proposed approach simplifies the process of building the multifactorial empirical dependencies which describe the process being investigated.
Miyajima, Saori; Tanaka, Takayuki; Imamura, Yumeko; Kusaka, Takashi
2015-01-01
We estimate lumbar torque based on motion measurement using only three inertial sensors. First, human motion is measured by 6-axis motion tracking devices, each combining a 3-axis accelerometer and a 3-axis gyroscope, placed on the shank, thigh, and back. Next, the lumbar joint torque during the motion is estimated by kinematic musculoskeletal simulation. The conventional method for estimating joint torque uses full-body motion data measured by an optical motion capture system. However, in this research, joint torque is estimated using only the three link angles of the body, thigh, and shank. The utility of our method was verified by experiments in which knee- and waist-bending motions were measured simultaneously. As a result, we were able to estimate the lumbar joint torque from the measured motion.
Controllers, observers, and applications thereof
NASA Technical Reports Server (NTRS)
Gao, Zhiqiang (Inventor); Zhou, Wankun (Inventor); Miklosovic, Robert (Inventor); Radke, Aaron (Inventor); Zheng, Qing (Inventor)
2011-01-01
Controller scaling and parameterization are described. Techniques that can be improved by employing the scaling and parameterization include, but are not limited to, controller design, tuning and optimization. The scaling and parameterization methods described here apply to transfer function based controllers, including PID controllers. The parameterization methods also apply to state feedback and state observer based controllers, as well as linear active disturbance rejection (ADRC) controllers. Parameterization simplifies the use of ADRC. A discrete extended state observer (DESO) and a generalized extended state observer (GESO) are described. They improve the performance of the ESO and therefore ADRC. A tracking control algorithm is also described that improves the performance of the ADRC controller. A general algorithm is described for applying ADRC to multi-input multi-output systems. Several specific applications of the control systems and processes are disclosed.
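A minimal sketch of one element mentioned in this abstract, a bandwidth-parameterized linear extended state observer (ESO), is given below. The toy double-integrator plant, the gain b0, and the single tuning parameter (observer bandwidth wo) are illustrative assumptions; the patent covers a broader family of controllers and observers.

```python
# Sketch: 3rd-order linear ESO with gains parameterized by one observer bandwidth.
import numpy as np

def eso_step(z, u, y, b0, wo, dt):
    """One Euler step of the ESO; z = [y_hat, ydot_hat, f_hat] (f = total disturbance)."""
    l1, l2, l3 = 3 * wo, 3 * wo ** 2, wo ** 3      # gains from the observer bandwidth wo
    e = y - z[0]
    dz = np.array([z[1] + l1 * e,
                   z[2] + b0 * u + l2 * e,
                   l3 * e])
    return z + dt * dz

dt, b0, wo = 1e-3, 1.0, 50.0
x = np.zeros(2)                                    # true plant state of y'' = b0*u + d
z = np.zeros(3)                                    # ESO state
for k in range(5000):
    u = 1.0                                        # constant input for illustration
    d = 0.5 * np.sin(0.002 * k)                    # unknown slowly varying disturbance
    x = x + dt * np.array([x[1], b0 * u + d])
    z = eso_step(z, u, x[0], b0, wo, dt)
print("estimated disturbance:", round(float(z[2]), 3),
      "true:", round(0.5 * np.sin(0.002 * 4999), 3))
```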
Scarless assembly of unphosphorylated DNA fragments with a simplified DATEL method.
Ding, Wenwen; Weng, Huanjiao; Jin, Peng; Du, Guocheng; Chen, Jian; Kang, Zhen
2017-05-04
Efficient assembly of multiple DNA fragments is a pivotal technology for synthetic biology. A scarless and sequence-independent DNA assembly method (DATEL) using thermal exonucleases has been developed recently. Here, we present a simplified DATEL (sDATEL) for efficient assembly of unphosphorylated DNA fragments with low cost. The sDATEL method is only dependent on Taq DNA polymerase and Taq DNA ligase. After optimizing the committed parameters of the reaction system such as pH and the concentration of Mg 2+ and NAD+, the assembly efficiency was increased by 32-fold. To further improve the assembly capacity, the number of thermal cycles was optimized, resulting in successful assembly 4 unphosphorylated DNA fragments with an accuracy of 75%. sDATEL could be a desirable method for routine manual and automated assembly.
An improved task-role-based access control model for G-CSCW applications
NASA Astrophysics Data System (ADS)
He, Chaoying; Chen, Jun; Jiang, Jie; Han, Gang
2005-10-01
Access control is an important and popular security mechanism for multi-user applications. GIS-based Computer Supported Cooperative Work (G-CSCW) application is one of such applications. This paper presents an improved Task-Role-Based Access Control (X-TRBAC) model for G-CSCW applications. The new model inherits the basic concepts of the old ones, such as role and task. Moreover, it has introduced two concepts, i.e. object hierarchy and operation hierarchy, and the corresponding rules to improve the efficiency of permission definition in access control models. The experiments show that the method can simplify the definition of permissions, and it is more applicable for G-CSCW applications.
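The sketch below illustrates one plausible reading of the two hierarchies this abstract introduces: a permission granted on a parent object or operation implies it for all descendants, so fewer grants need to be written. The class names, containment semantics, and example data are illustrative assumptions, not the X-TRBAC specification.

```python
# Sketch: permission check that walks object and operation hierarchies upward.
class Hierarchy:
    def __init__(self, parent_of):
        self.parent_of = parent_of                 # child -> parent mapping

    def ancestors(self, node):
        while node is not None:
            yield node
            node = self.parent_of.get(node)

objects = Hierarchy({"layer:roads": "map:city", "map:city": "workspace"})
operations = Hierarchy({"edit:geometry": "edit", "edit:attributes": "edit"})

# permissions attached to (role, task): set of (object, operation) grants
grants = {("editor", "update-roadmap"): {("map:city", "edit")}}

def permitted(role, task, obj, op):
    allowed = grants.get((role, task), set())
    return any((o, p) in allowed
               for o in objects.ancestors(obj)
               for p in operations.ancestors(op))

print(permitted("editor", "update-roadmap", "layer:roads", "edit:geometry"))  # True
print(permitted("editor", "update-roadmap", "layer:roads", "delete"))         # False
```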
NASA Astrophysics Data System (ADS)
Komachi, Mamoru; Kudo, Taku; Shimbo, Masashi; Matsumoto, Yuji
Bootstrapping has a tendency, called semantic drift, to select instances unrelated to the seed instances as the iteration proceeds. We demonstrate the semantic drift of Espresso-style bootstrapping has the same root as the topic drift of Kleinberg's HITS, using a simplified graph-based reformulation of bootstrapping. We confirm that two graph-based algorithms, the von Neumann kernels and the regularized Laplacian, can reduce the effect of semantic drift in the task of word sense disambiguation (WSD) on Senseval-3 English Lexical Sample Task. Proposed algorithms achieve superior performance to Espresso and previous graph-based WSD methods, even though the proposed algorithms have less parameters and are easy to calibrate.
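For readers unfamiliar with the two graph kernels named here, the sketch below computes both on a toy graph: the von Neumann kernel (I − αA)⁻¹ and the regularized Laplacian (I + βL)⁻¹. The toy adjacency matrix and the parameter values are assumptions; the paper applies these kernels to a pattern-instance graph built from the bootstrapping process.

```python
# Sketch: von Neumann and regularized Laplacian kernels of a small undirected graph.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)             # toy co-occurrence graph
D = np.diag(A.sum(axis=1))
L = D - A                                             # combinatorial graph Laplacian

alpha = 0.9 / np.max(np.abs(np.linalg.eigvals(A)))    # keep the Neumann series convergent
beta = 0.1
von_neumann = np.linalg.inv(np.eye(4) - alpha * A)    # sum_k alpha^k A^k
reg_laplacian = np.linalg.inv(np.eye(4) + beta * L)   # sum_k (-beta L)^k

print(np.round(von_neumann, 3))
print(np.round(reg_laplacian, 3))
```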
Begum, S; Achary, P Ganga Raju
2015-01-01
Quantitative structure-activity relationship (QSAR) models were built for the prediction of inhibition (pIC50, i.e. negative logarithm of the 50% effective concentration) of MAP kinase-interacting protein kinase (MNK1) by 43 potent inhibitors. The pIC50 values were modelled with five random splits, with the representations of the molecular structures by simplified molecular input line entry system (SMILES). QSAR model building was performed by the Monte Carlo optimisation using three methods: classic scheme; balance of correlations; and balance correlation with ideal slopes. The robustness of these models were checked by parameters as rm(2), r(*)m(2), [Formula: see text] and randomisation technique. The best QSAR model based on single optimal descriptors was applied to study in vitro structure-activity relationships of 6-(4-(2-(piperidin-1-yl) ethoxy) phenyl)-3-(pyridin-4-yl) pyrazolo [1,5-a] pyrimidine derivatives as a screening tool for the development of novel potent MNK1 inhibitors. The effects of alkyl group, -OH, -NO2, F, Cl, Br, I, etc. on the IC50 values towards the inhibition of MNK1 were also reported.
Comparison of two trajectory based models for locating particle sources for two rural New York sites
NASA Astrophysics Data System (ADS)
Zhou, Liming; Hopke, Philip K.; Liu, Wei
Two back-trajectory-based statistical models, simplified quantitative transport bias analysis (QTBA) and residence-time weighted concentrations (RTWC), have been compared for their capabilities of identifying likely locations of source emissions contributing to observed particle concentrations at Potsdam and Stockton, New York. QTBA attempts to take into account the distribution of concentrations around the directions of the back trajectories. In the full QTBA approach, deposition processes (wet and dry) are also considered; simplified QTBA omits the consideration of deposition and is best used with multiple-site data. Similarly, the RTWC approach uses concentrations measured at different sites along with the back trajectories to distribute the concentration contributions across the spatial domain of the trajectories. In this study, these models are used in combination with the source contribution values obtained by a previous positive matrix factorization analysis of particle composition data from Potsdam and Stockton. The six common sources for the two sites (sulfate, soil, zinc smelter, nitrate, wood smoke and copper smelter) were analyzed. The results of the two methods are consistent and locate large and clearly defined sources well. The RTWC approach can find more minor sources but may also give unrealistic estimates of source locations.
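A hedged sketch of the residence-time weighting idea follows: each back-trajectory endpoint deposits the receptor-measured contribution into its grid cell, and each cell reports the residence-time-weighted mean. The grid resolution and the synthetic trajectories are assumptions; the full RTWC method combines multiple sites and further refinements not shown here.

```python
# Sketch: residence-time weighted concentration field from back-trajectory endpoints.
import numpy as np

def rtwc(endpoints, concentrations, lat_edges, lon_edges):
    """endpoints: list of (n_i, 2) arrays of (lat, lon); one concentration per trajectory."""
    num = np.zeros((len(lat_edges) - 1, len(lon_edges) - 1))
    den = np.zeros_like(num)
    for pts, c in zip(endpoints, concentrations):
        counts, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=[lat_edges, lon_edges])
        num += counts * c
        den += counts
    with np.errstate(invalid="ignore"):
        return np.where(den > 0, num / den, np.nan)

rng = np.random.default_rng(4)
trajs = [rng.uniform([40, -80], [46, -70], size=(72, 2)) for _ in range(50)]   # 72 h endpoints
conc = rng.lognormal(mean=1.0, sigma=0.5, size=50)                             # receptor values
field = rtwc(trajs, conc, np.arange(40, 46.5, 0.5), np.arange(-80, -69.5, 0.5))
print(np.nanmin(field), np.nanmax(field))
```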
Wang, Rui; Zhang, Fang; Wang, Liu; Qian, Wenjuan; Qian, Cheng; Wu, Jian; Ying, Yibin
2017-04-18
On-site monitoring of the plantation of genetically modified (GM) crops is of critical importance in the agriculture industry throughout the world. In this paper, a simple, visual and instrument-free method for instant on-site detection of GTS 40-3-2 soybean has been developed. It is based on body-heat recombinase polymerase amplification (RPA) followed by naked-eye detection via a fluorescent DNA dye. Combined with extremely simplified sample preparation, the whole detection process can be accomplished within 10 min, and the fluorescent results can be photographed with an accompanying smart phone. Results demonstrated a 100% detection rate for the screening of practical GTS 40-3-2 soybean samples by 20 volunteers under different ambient temperatures. This method is not only suitable for on-site detection of GM crops but also demonstrates great potential for application in other fields.
Semiautomated Device for Batch Extraction of Metabolites from Tissue Samples
2012-01-01
Metabolomics has become a mainstream analytical strategy for investigating metabolism. The quality of data derived from these studies is proportional to the consistency of the sample preparation. Although considerable research has been devoted to finding optimal extraction protocols, most of the established methods require extensive sample handling. Manual sample preparation can be highly effective in the hands of skilled technicians, but an automated tool for purifying metabolites from complex biological tissues would be of obvious utility to the field. Here, we introduce the semiautomated metabolite batch extraction device (SAMBED), a new tool designed to simplify metabolomics sample preparation. We discuss SAMBED’s design and show that SAMBED-based extractions are of comparable quality to extracts produced through traditional methods (13% mean coefficient of variation from SAMBED versus 16% from manual extractions). Moreover, we show that aqueous SAMBED-based methods can be completed in less than a quarter of the time required for manual extractions. PMID:22292466
Calibration of AIS Data Using Ground-based Spectral Reflectance Measurements
NASA Technical Reports Server (NTRS)
Conel, J. E.
1985-01-01
Present methods of correcting airborne imaging spectrometer (AIS) data for instrumental and atmospheric effects include the flat- or curved-field correction and a deviation-from-the-average adjustment performed on a line-by-line basis throughout the image. Both methods eliminate the atmospheric absorptions, but remove the possibility of studying the atmosphere for its own sake, or of using the atmospheric information present as a possible basis for theoretical modeling. The method discussed here relies on use of ground-based measurements of the surface spectral reflectance in comparison with scanner data to fix in a least-squares sense parameters in a simplified model of the atmosphere on a wavelength-by-wavelength basis. The model parameters (for optically thin conditions) are interpretable in terms of optical depth and scattering phase function, and thus, in principle, provide an approximate description of the atmosphere as a homogeneous body intervening between the sensor and the ground.
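A minimal sketch of the per-wavelength fit is given below, assuming the optically thin model reduces, band by band, to a linear relation between at-sensor radiance and ground-measured reflectance, L = a(λ)·ρ + b(λ), whose coefficients are fitted in a least-squares sense from the field targets. The synthetic targets and the linear form are assumptions, not the paper's atmospheric parameterization.

```python
# Sketch: wavelength-by-wavelength least-squares fit of scanner data to ground reflectance.
import numpy as np

rng = np.random.default_rng(5)
n_targets, n_bands = 8, 128
rho = rng.uniform(0.05, 0.6, size=(n_targets, n_bands))        # ground spectral reflectance
a_true = rng.uniform(50, 150, n_bands)
b_true = rng.uniform(2, 10, n_bands)
radiance = a_true * rho + b_true + rng.normal(0, 0.5, size=(n_targets, n_bands))

a_fit, b_fit = np.empty(n_bands), np.empty(n_bands)
for j in range(n_bands):                                       # one fit per wavelength
    A = np.column_stack([rho[:, j], np.ones(n_targets)])
    (a_fit[j], b_fit[j]), *_ = np.linalg.lstsq(A, radiance[:, j], rcond=None)

print(np.max(np.abs(a_fit - a_true)), np.max(np.abs(b_fit - b_true)))
```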
Ikigai, H; Seki, K; Nishihara, S; Masuda, S
1988-01-01
A simplified method for preparation of concentrated exoproteins including protein A and alpha-toxin produced by Staphylococcus aureus was successfully devised. The concentrated proteins were obtained by cultivating S. aureus organisms on the surface of a liquid medium-containing cellophane bag enclosed in a sterilized glass flask. With the same amount of medium, the total amount of proteins obtained by the method presented here was identical with that obtained by conventional liquid culture. The concentration of proteins obtained by the method, however, was high enough to observe their distinct bands stained on polyacrylamide gel electrophoresis. This method was considered quite useful not only for large-scale cultivation for the purification of staphylococcal proteins but also for small-scale study using the proteins. The precise description of the method was presented and its possible usefulness was discussed.
NASA Astrophysics Data System (ADS)
Tang, Xiaolin; Yang, Wei; Hu, Xiaosong; Zhang, Dejiu
2017-02-01
In this study, based on our previous work, a novel simplified torsional vibration dynamic model is established to study the torsional vibration characteristics of a compound planetary hybrid propulsion system. The main frequencies of the hybrid driveline are determined. In contrast to vibration characteristics of the previous 16-degree of freedom model, the simplified model can be used to accurately describe the low-frequency vibration property of this hybrid powertrain. This study provides a basis for further vibration control of the hybrid powertrain during the process of engine start/stop.
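A hedged sketch of how natural frequencies of a lumped torsional model are typically obtained is shown below: solve the generalized eigenproblem K·φ = ω²·J·φ for an inertia matrix J and stiffness matrix K. The 4-DOF chain and its numbers are illustrative, not the paper's compound planetary hybrid driveline.

```python
# Sketch: torsional natural frequencies of a lumped inertia-stiffness chain.
import numpy as np
from scipy.linalg import eigh

J = np.diag([0.25, 0.05, 0.02, 0.10])            # kg*m^2, lumped inertias (assumed)
k = [8.0e3, 2.5e4, 1.2e4]                        # N*m/rad, shaft stiffnesses between nodes
K = np.zeros((4, 4))
for i, ki in enumerate(k):                        # assemble a chain (tridiagonal) stiffness matrix
    K[i, i] += ki
    K[i + 1, i + 1] += ki
    K[i, i + 1] -= ki
    K[i + 1, i] -= ki

w2, _ = eigh(K, J)                                # generalized symmetric eigenproblem
freqs_hz = np.sqrt(np.clip(w2, 0, None)) / (2 * np.pi)
print(np.round(freqs_hz, 1))                      # first entry ~0 Hz: rigid-body rotation mode
```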
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hobbs, Michael L.
We previously developed a PETN thermal decomposition model that accurately predicts thermal ignition and detonator failure [1]. This model was originally developed for CALORE [2] and required several complex user subroutines. Recently, a simplified version of the PETN decomposition model was implemented into ARIA [3] using a general chemistry framework without need for user subroutines. Detonator failure was also predicted with this new model using ENCORE. The model was simplified by 1) basing the model on moles rather than mass, 2) simplifying the thermal conductivity model, and 3) implementing ARIA’s new phase change model. This memo briefly describes the model,more » implementation, and validation.« less
NASA Astrophysics Data System (ADS)
Su, Ray Kai Leung; Lee, Chien-Liang
2013-06-01
This study presents a seismic fragility analysis and ultimate spectral displacement assessment of regular low-rise masonry infilled (MI) reinforced concrete (RC) buildings using a coefficient-based method. The coefficient-based method does not require a complicated finite element analysis; instead, it is a simplified procedure for assessing the spectral acceleration and displacement of buildings subjected to earthquakes. A regression analysis was first performed to obtain the best-fitting equations for the inter-story drift ratio (IDR) and period shift factor of low-rise MI RC buildings in response to the peak ground acceleration of earthquakes, using published results obtained from shaking table tests. Both spectral acceleration- and spectral displacement-based fragility curves under various damage states (in terms of IDR) were then constructed using the coefficient-based method. Finally, the spectral displacements of low-rise MI RC buildings at the ultimate (or near-collapse) state obtained from this paper and the literature were compared. The simulation results indicate that the fragility curves obtained from this study and previous work correspond well. Furthermore, most of the spectral displacements of low-rise MI RC buildings at the ultimate state from the literature fall within the bounded spectral displacements predicted by the coefficient-based method.
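As a minimal sketch of how fragility curves of this kind are commonly constructed, the snippet below evaluates lognormal fragility functions P(DS ≥ ds | Sd) = Φ(ln(Sd/θ)/β) for a few damage states. The median θ and dispersion β values are placeholders, not the regression results of the paper.

```python
# Sketch: lognormal fragility curves over spectral displacement for several damage states.
import numpy as np
from scipy.stats import norm

def fragility(sd, theta, beta):
    """Probability of reaching/exceeding a damage state at spectral displacement sd."""
    return norm.cdf(np.log(sd / theta) / beta)

sd = np.linspace(0.1, 100.0, 200)                 # mm, spectral displacement (illustrative range)
for label, theta, beta in [("slight", 5.0, 0.5),
                           ("moderate", 15.0, 0.5),
                           ("near-collapse", 40.0, 0.6)]:
    p = fragility(sd, theta, beta)
    sd_50 = float(np.interp(0.5, p, sd))          # displacement at 50% exceedance probability
    print(f"{label:13s} 50% exceedance at ~{sd_50:.1f} mm")
```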
Nose-to-tail analysis of an airbreathing hypersonic vehicle using an in-house simplified tool
NASA Astrophysics Data System (ADS)
Piscitelli, Filomena; Cutrone, Luigi; Pezzella, Giuseppe; Roncioni, Pietro; Marini, Marco
2017-07-01
SPREAD (Scramjet PREliminary Aerothermodynamic Design) is a simplified, in-house method developed by CIRA (Italian Aerospace Research Centre), able to provide a preliminary estimation of the performance of engine/aeroshape for airbreathing configurations. It is especially useful for scramjet engines, for which the strong coupling between the aerothermodynamic (external) and propulsive (internal) flow fields requires real-time screening of several engine/aeroshape configurations and the identification of the most promising one/s with respect to user-defined constraints and requirements. The outcome of this tool defines the base-line configuration for further design analyses with more accurate tools, e.g., CFD simulations and wind tunnel testing. SPREAD tool has been used to perform the nose-to-tail analysis of the LAPCAT-II Mach 8 MR2.4 vehicle configuration. The numerical results demonstrate SPREAD capability to quickly predict reliable values of aero-propulsive balance (i.e., net-thrust) and aerodynamic efficiency in a pre-design phase.
Kressirer, Sabine; Kralisch, Dana; Stark, Annegret; Krtschil, Ulrich; Hessel, Volker
2013-05-21
In order to investigate the potential for process intensification, various reaction conditions were applied to the Kolbe-Schmitt synthesis starting from resorcinol. Different CO₂ precursors such as aqueous potassium hydrogencarbonate, hydrogencarbonate-based ionic liquids, DIMCARB, or sc-CO₂, the application of microwave irradiation for fast volumetric heating of the reaction mixture, and the effect of harsh reaction conditions were investigated. The experiments, carried out in conventional batch-wise as well as in continuously operated microstructured reactors, aimed at the development of an environmentally benign process for the preparation of 2,4-dihydroxybenzoic acid. To provide decision support toward a green process design, a research-accompanying simplified life cycle assessment (SLCA) was performed throughout the whole investigation. Following this approach, it was found that convective heating methods such as oil bath or electrical heating were more beneficial than the application of microwave irradiation. Furthermore, the consideration of workup procedures was crucial for a holistic view on the environmental burdens.
A simplified DEM-CFD approach for pebble bed reactor simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y.; Ji, W.
In pebble bed reactors (PBRs), the pebble flow and the coolant flow are coupled with each other through coolant-pebble interactions. Approaches with different fidelities have been proposed to simulate such phenomena. Coupled Discrete Element Method-Computational Fluid Dynamics (DEM-CFD) approaches are widely studied and applied in these problems due to their good balance between efficiency and accuracy. In this work, based on the symmetry of the PBR geometry, a simplified 3D-DEM/2D-CFD approach is proposed to speed up the DEM-CFD simulation without significant loss of accuracy. Pebble flow is simulated by a full 3-D DEM, while the coolant flow field is calculated with a 2-D CFD simulation by averaging variables along the annular direction in the cylindrical geometry. Results show that this simplification can greatly enhance the efficiency for a cylindrical core, which enables further inclusion of other physics, such as thermal and neutronic effects, in multi-physics simulations for PBRs.
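A hedged sketch of the geometric reduction is shown below: a field defined on 3-D pebble (DEM) positions is averaged over the annular (azimuthal) direction onto a 2-D (r, z) grid of the kind a 2-D CFD solver could use. The grid sizes, the synthetic packing, and the per-pebble field are illustrative assumptions.

```python
# Sketch: azimuthal averaging of a 3-D per-pebble field onto a 2-D (r, z) grid.
import numpy as np

def azimuthal_average(xyz, values, r_edges, z_edges):
    r = np.hypot(xyz[:, 0], xyz[:, 1])
    sums, _, _ = np.histogram2d(r, xyz[:, 2], bins=[r_edges, z_edges], weights=values)
    counts, _, _ = np.histogram2d(r, xyz[:, 2], bins=[r_edges, z_edges])
    with np.errstate(invalid="ignore"):
        return np.where(counts > 0, sums / counts, np.nan)

rng = np.random.default_rng(6)
n = 20000
theta = rng.uniform(0, 2 * np.pi, n)
r = np.sqrt(rng.uniform(0, 1, n)) * 1.5                      # uniform over the core cross-section
xyz = np.column_stack([r * np.cos(theta), r * np.sin(theta), rng.uniform(0, 10, n)])
porosity = rng.uniform(0.35, 0.42, n)                        # per-pebble-cell proxy value
field_rz = azimuthal_average(xyz, porosity, np.linspace(0, 1.5, 16), np.linspace(0, 10, 41))
print(field_rz.shape)                                         # (15, 40) 2-D field for the CFD side
```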
Combined multi-spectrum and orthogonal Laplacianfaces for fast CB-XLCT imaging with single-view data
NASA Astrophysics Data System (ADS)
Zhang, Haibo; Geng, Guohua; Chen, Yanrong; Qu, Xuan; Zhao, Fengjun; Hou, Yuqing; Yi, Huangjian; He, Xiaowei
2017-12-01
Cone-beam X-ray luminescence computed tomography (CB-XLCT) is an attractive hybrid imaging modality, which has the potential of monitoring the metabolic processes of nanophosphors-based drugs in vivo. Single-view data reconstruction as a key issue of CB-XLCT imaging promotes the effective study of dynamic XLCT imaging. However, it suffers from serious ill-posedness in the inverse problem. In this paper, a multi-spectrum strategy is adopted to relieve the ill-posedness of reconstruction. The strategy is based on the third-order simplified spherical harmonic approximation model. Then, an orthogonal Laplacianfaces-based method is proposed to reduce the large computational burden without degrading the imaging quality. Both simulated data and in vivo experimental data were used to evaluate the efficiency and robustness of the proposed method. The results are satisfactory in terms of both location and quantitative recovering with computational efficiency, indicating that the proposed method is practical and promising for single-view CB-XLCT imaging.
Sorbe, A; Chazel, M; Gay, E; Haenni, M; Madec, J-Y; Hendrikx, P
2011-06-01
Developing and calculating performance indicators allows the operation of an epidemiological surveillance network to be followed continuously. This is an internal evaluation method, implemented by the coordinators in collaboration with all the actors of the network. Its purpose is to detect weak points in order to optimize management. A method for the development of performance indicators for epidemiological surveillance networks was developed in 2004 and has been applied to several networks. Its implementation requires a thorough description of the network environment and all its activities in order to define priority indicators. Since this method is considered complex, our objective was to develop a simplified approach and apply it to an epidemiological surveillance network. We applied the initial method to a theoretical network model to obtain a list of generic indicators that can be adapted to any surveillance network. We obtained a list of 25 generic performance indicators, intended to be reformulated and described according to the specificities of each network. It was used to develop performance indicators for RESAPATH, an epidemiological surveillance network for antimicrobial resistance in pathogenic bacteria of animal origin in France. This application allowed us to validate the simplified method, its value in terms of practical implementation, and its level of user acceptance. Its ease of use and speed of application compared to the initial method argue in favor of its use on a broader scale. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
Mathieu, C; Cuddihy, R; Arakaki, R F; Belin, R M; Planquois, J-M; Lyons, J N; Heilmann, C R
2009-09-01
Insulin initiation and optimization is a challenge for patients with type 2 diabetes. Our objective was to determine whether the safety and efficacy of AIR inhaled insulin (Eli Lilly and Co., Indianapolis, IN) (AIR is a registered trademark of Alkermes, Inc., Cambridge, MA) using a simplified regimen was noninferior to an intensive regimen. This was an open-label, randomized study in insulin-naive adults not optimally controlled by oral antihyperglycemic medications. Simplified titration included a 6 U per meal AIR insulin starting dose. Individual doses were adjusted at mealtime in 2-U increments from the previous day's four-point self-monitored blood glucose (SMBG) (total ≤6 U). Starting AIR insulin doses for intensive titration were based on fasting blood glucose, gender, height, and weight. Patients conducted four-point SMBG daily for the study duration. Insulin doses were titrated based on the previous 3 days' mean SMBG (total ≤8 U). End point hemoglobin A1C (A1C) was 7.07 ± 0.09% and 6.87 ± 0.09% for the simplified (n = 178) and intensive (n = 180) algorithms, respectively. Noninferiority between algorithms was not established. The fasting blood glucose (least squares mean ± standard error) values for the simplified (137.27 ± 3.42 mg/dL) and intensive (133.13 ± 3.42 mg/dL) algorithms were comparable. Safety profiles were comparable. The hypoglycemic rate at 4, 8, 12, and 24 weeks was higher in patients receiving intensive titration (all P < 0.0001). The nocturnal hypoglycemic rate for patients receiving intensive titration was higher than for those receiving simplified titration at 8 (P < 0.015) and 12 weeks (P < 0.001). Noninferiority between the algorithms, as measured by A1C, was not demonstrated. This finding re-emphasizes the difficulty of identifying optimal, simplified insulin regimens for patients.
Text-Based Recall and Extra-Textual Generations Resulting from Simplified and Authentic Texts
ERIC Educational Resources Information Center
Crossley, Scott A.; McNamara, Danielle S.
2016-01-01
This study uses a moving windows self-paced reading task to assess text comprehension of beginning and intermediate-level simplified texts and authentic texts by L2 learners engaged in a text-retelling task. Linear mixed effects (LME) models revealed statistically significant main effects for reading proficiency and text level on the number of…
Simplified thermodynamic functions for vapor-liquid phase separation and fountain effect pumps
NASA Technical Reports Server (NTRS)
Yuan, S. W. K.; Hepler, W. A.; Frederking, T. H. K.
1984-01-01
He-4 fluid handling devices near 2 K require novel components for non-Newtonian fluid transport in He II. Related sizing of devices has to be based on appropriate thermophysical property functions. The present paper presents simplified equilibrium state functions for porous media components which serve as vapor-liquid phase separators and fountain effect pumps.
ERIC Educational Resources Information Center
Walsh, John P.; Sun, Jerry Chih-Yuan; Riconscente, Michelle
2011-01-01
Digital technologies can improve student interest and knowledge in science. However, researching the vast number of websites devoted to science education and integrating them into undergraduate curricula is time-consuming. We developed an Adobe ColdFusion- and Adobe Flash-based system for simplifying the construction, use, and delivery of…
NASA Astrophysics Data System (ADS)
Wang, Xinwei; Chen, Zhe; Sun, Fangyuan; Zhang, Hang; Jiang, Yuyan; Tang, Dawei
2018-03-01
Heat transfer in nanostructures is of critical importance for a wide range of applications such as functional materials and thermal management of electronics. Time-domain thermoreflectance (TDTR) has proved to be a reliable measurement technique for determining the thermal properties of nanoscale structures. However, it is difficult to determine more than three thermal properties at the same time. Simplifying the heat transfer model can reduce the number of fitting variables and provides an alternative route to thermal property determination. In this paper, two simplified models are investigated and analyzed by the transfer matrix method and by simulations. TDTR measurements are performed on Al-SiO2-Si samples with different SiO2 thicknesses. Both theoretical and experimental results show that the simplified tri-layer model (STM) is reliable and suitable for thin film samples over a wide range of thicknesses. Furthermore, the STM can also extract the intrinsic thermal conductivity and interfacial thermal resistance from a series of samples with different thicknesses.
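For readers unfamiliar with the transfer-matrix (thermal quadrupole) formalism referred to above, a one-dimensional frequency-domain version can be assembled by multiplying per-layer matrices; the sketch below is a generic textbook construction with illustrative material values and interface resistances, not the authors' model or parameters:

import numpy as np

def layer_matrix(k, C, L, omega):
    """1-D frequency-domain quadrupole matrix of a homogeneous layer.
    k: thermal conductivity [W/m/K], C: volumetric heat capacity [J/m^3/K],
    L: thickness [m], omega: angular modulation frequency [rad/s]."""
    q = np.sqrt(1j * omega * C / k)
    return np.array([[np.cosh(q * L), -np.sinh(q * L) / (k * q)],
                     [-k * q * np.sinh(q * L), np.cosh(q * L)]])

def interface_matrix(R):
    """Quadrupole matrix of a thermal interface resistance R [m^2 K/W]."""
    return np.array([[1.0, -R], [0.0, 1.0]])

def surface_response(layers):
    """Multiply the matrices top to bottom (thick substrate last) and return the
    surface temperature per unit heat flux, T/Q = -D/C of the product matrix."""
    M = np.eye(2, dtype=complex)
    for mat in layers:
        M = M @ mat
    C, D = M[1, 0], M[1, 1]
    return -D / C

# Illustrative tri-layer stack: Al film / interface / SiO2 / interface / Si substrate.
omega = 2 * np.pi * 1e7
stack = [layer_matrix(200.0, 2.42e6, 80e-9, omega),   # Al (assumed values)
         interface_matrix(5e-9),                       # Al-SiO2 interface (assumed)
         layer_matrix(1.3, 1.66e6, 100e-9, omega),     # SiO2 (assumed values)
         interface_matrix(5e-9),                       # SiO2-Si interface (assumed)
         layer_matrix(140.0, 1.64e6, 500e-6, omega)]   # thick Si substrate
print(surface_response(stack))

A simplified model in this framework corresponds to merging or dropping some of these matrices so that fewer unknowns remain to be fitted.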
Centroid estimation for a Shack-Hartmann wavefront sensor based on stream processing.
Kong, Fanpeng; Polo, Manuel Cegarra; Lambert, Andrew
2017-08-10
When the center of gravity is used to estimate the spot centroid in a Shack-Hartmann wavefront sensor, the measurement is corrupted by photon and detector noise. Parameters such as the window size often require careful optimization to balance noise error, dynamic range, and the linearity of the response coefficient under different photon fluxes. The center-of-gravity approach also needs to be replaced by the correlation method for extended sources. We propose a centroid estimator based on stream processing, in which the center-of-gravity calculation window floats with the incoming pixel stream from the detector. In comparison with conventional methods, we show that the proposed estimator simplifies the choice of optimized parameters, provides a unit linear response coefficient, and reduces the influence of background and noise. It is shown that the stream-based centroid estimator also works well for extended sources of limited size. A hardware implementation of the proposed estimator is discussed.
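For comparison, the conventional windowed center-of-gravity estimate that the stream-based scheme reworks can be written in a few lines; this is a minimal software sketch with simplified window handling and background subtraction, not the proposed hardware estimator:

import numpy as np

def windowed_cog(subaperture, window_size, background=0.0):
    """Windowed center-of-gravity centroid of one Shack-Hartmann spot.

    subaperture : 2-D intensity array for one lenslet
    window_size : side length of a square window centered on the brightest pixel
    background  : constant background level subtracted before the estimate
    """
    img = np.clip(subaperture.astype(float) - background, 0.0, None)
    cy, cx = np.unravel_index(np.argmax(img), img.shape)
    h = window_size // 2
    y0, y1 = max(cy - h, 0), min(cy + h + 1, img.shape[0])
    x0, x1 = max(cx - h, 0), min(cx + h + 1, img.shape[1])
    win = img[y0:y1, x0:x1]
    total = win.sum()
    if total <= 0:
        return float(cx), float(cy)          # fall back to the peak position
    ys, xs = np.mgrid[y0:y1, x0:x1]
    return float((xs * win).sum() / total), float((ys * win).sum() / total)

The stream-processing estimator avoids fixing this window in advance by letting it follow the pixels as they arrive from the detector.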
A method for real-time implementation of HOG feature extraction
NASA Astrophysics Data System (ADS)
Luo, Hai-bo; Yu, Xin-rong; Liu, Hong-mei; Ding, Qing-hai
2011-08-01
Histogram of oriented gradients (HOG) is an efficient feature extraction scheme, and HOG descriptors are widely used in computer vision and image processing for biometrics, target tracking, automatic target detection (ATD), automatic target recognition (ATR), etc. However, the computation involved in HOG feature extraction is unsuitable for direct hardware implementation because it includes complicated operations. In this paper, an optimized design method and theoretical framework for real-time HOG feature extraction on an FPGA are proposed. The main principle is as follows: firstly, a parallel gradient computing unit based on a parallel pipeline structure was designed. Secondly, the arctangent and square-root calculations were simplified. Finally, a histogram generator based on a parallel pipeline structure was designed to calculate the histogram of each sub-region. Experimental results showed that HOG extraction can be completed within one pixel period by these computing units.
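The arctangent simplification mentioned above is commonly realized by choosing the orientation bin through comparisons against precomputed boundary tangents, and the square root by an L1 approximation of the gradient magnitude; the Python sketch below illustrates that idea in software (the bin layout and approximations are assumptions for illustration, not the FPGA circuit itself):

import numpy as np

# Tangents of the orientation bin boundaries (unsigned gradients, 0-180 deg,
# nine 20-deg bins); precomputing them lets the bin be chosen by comparisons only.
EDGE_TAN = np.tan(np.deg2rad([20, 40, 60, 80]))

def hog_bin(gx, gy):
    """Orientation bin and magnitude without arctangent or square root.
    The bin is found by comparing |gy| against |gx|*tan(boundary); the magnitude
    uses |gx| + |gy| as a hardware-friendly approximation of sqrt(gx^2 + gy^2)."""
    ax, ay = abs(gx), abs(gy)
    if ay == 0:                                      # purely horizontal gradient
        return 0, ax
    b = sum(1 for t in EDGE_TAN if ay > ax * t)      # bin within the 0-90 deg quadrant
    same_sign = (gx >= 0) == (gy >= 0)
    angle_bin = b if same_sign else 8 - b            # fold back into the 0-180 deg range
    return angle_bin, ax + ay

Because only comparisons, additions, and constant multiplications remain, each pixel's bin and magnitude can be produced by combinational logic within a single pixel clock.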
Simplified Physics Based Models Research Topical Report on Task #2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mishra, Srikanta; Ganesh, Priya
We present a simplified-physics based approach, in which only the most important physical processes are modeled, to develop and validate simplified predictive models of CO2 sequestration in deep saline formations. The system of interest is a single vertical well injecting supercritical CO2 into a 2-D layered reservoir-caprock system with variable layer permeabilities. We use a set of well-designed full-physics compositional simulations to understand key processes and parameters affecting pressure propagation and buoyant plume migration. Based on these simulations, we have developed correlations for dimensionless injectivity as a function of the slope of the fractional-flow curve, the variance of layer permeability values, and the nature of the vertical permeability arrangement. The same variables, along with a modified gravity number, can be used to develop a correlation for the total storage efficiency within the CO2 plume footprint. Similar correlations are also developed to predict the average pressure within the injection reservoir and the pressure buildup within the caprock.
Phase retrieval using regularization method in intensity correlation imaging
NASA Astrophysics Data System (ADS)
Li, Xiyu; Gao, Xin; Tang, Jia; Lu, Changming; Wang, Jianli; Wang, Bin
2014-11-01
The intensity correlation imaging (ICI) method can obtain high-resolution images with ground-based, low-precision mirrors; in the imaging process, a phase retrieval algorithm must be used to reconstruct the object's image. However, the algorithms currently in use (such as the hybrid input-output algorithm) are sensitive to noise and prone to stagnation, while the signal-to-noise ratio of intensity interferometry is low, especially when imaging astronomical objects. In this paper, we build a mathematical model of phase retrieval and simplify it into a constrained optimization problem for a multi-dimensional function. A new error function is designed from the noise distribution and prior information using a regularization method. The simulation results show that the regularization method can improve the performance of the phase retrieval algorithm and yield better images, especially under low-SNR conditions.
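For context, the hybrid input-output iteration that the paper identifies as noise-sensitive can be sketched as follows; this is a generic textbook baseline with assumed parameters, not the regularized error function proposed here, which would instead fold the noise distribution and prior information into the objective being minimized:

import numpy as np

def hio(measured_modulus, support, beta=0.9, n_iter=500, seed=0):
    """Hybrid input-output phase retrieval from Fourier modulus data.

    measured_modulus : measured |F(object)| (e.g. derived from intensity correlations)
    support          : boolean array, True where the object may be non-zero
    """
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random(measured_modulus.shape))
    g = np.real(np.fft.ifft2(measured_modulus * phase))      # random initial guess
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = measured_modulus * np.exp(1j * np.angle(G))      # impose the Fourier modulus
        g_prime = np.real(np.fft.ifft2(G))
        violate = ~support | (g_prime < 0)                   # object-domain constraints
        g = np.where(violate, g - beta * g_prime, g_prime)   # HIO feedback update
    return g

A regularized formulation replaces the hard projection steps with gradient-based minimization of a data-fidelity term plus a penalty built from the noise statistics, which is what makes it more robust at low SNR.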