Adapting Covariance Propagation to Account for the Presence of Modeled and Unmodeled Maneuvers
NASA Technical Reports Server (NTRS)
Schiff, Conrad
2006-01-01
This paper explores techniques that can be used to adapt the standard linearized propagation of an orbital covariance matrix to the case where there is a maneuver and an associated execution uncertainty. A Monte Carlo technique is used to construct a final orbital covariance matrix for a 'prop-burn-prop' process that takes into account initial state uncertainty and execution uncertainties in the maneuver magnitude. This final orbital covariance matrix is regarded as 'truth' and comparisons are made with three methods using modified linearized covariance propagation. The first method accounts for the maneuver by modeling its nominal effect within the state transition matrix but excludes the execution uncertainty by omitting a process noise matrix from the computation. The second method does not model the maneuver but includes a process noise matrix to account for the uncertainty in its magnitude. The third method, which is essentially a hybrid of the first two, includes the nominal portion of the maneuver via the state transition matrix and uses a process noise matrix to account for the magnitude uncertainty. The first method is unable to produce the final orbit covariance except in the case of zero maneuver uncertainty. The second method yields good accuracy for the final covariance matrix but fails to model the final orbital state accurately. Agreement between the simulated covariance data produced by this method and the Monte Carlo truth data fell within 0.5-2.5 percent over a range of maneuver sizes that span two orders of magnitude (0.1-20 m/s). The third method, which yields a combination of good accuracy in the computation of the final covariance matrix and correct accounting for the presence of the maneuver in the nominal orbit, is the best method for applications involving the computation of times of closest approach and the corresponding probability of collision, PC. However, applications for the two other methods exist and are briefly discussed. 
Although the process model ("prop-burn-prop") that was studied is very simple - point-mass gravitational effects due to the Earth combined with an impulsive delta-V in the velocity direction for the maneuver - generalizations to more complex scenarios, including high fidelity force models, finite duration maneuvers, and maneuver pointing errors, are straightforward and are discussed in the conclusion.
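The prop-burn-prop covariance bookkeeping of the hybrid (third) method can be sketched in a few lines. The following is a hedged toy illustration using one-dimensional coast dynamics; the numbers, dynamics, and variable names are invented for illustration and are not the paper's orbital model:

```python
import numpy as np

# Toy "prop-burn-prop" covariance propagation in the style of method 3: the
# nominal burn is applied to the state, and its magnitude uncertainty enters
# as a process noise matrix added at the burn. 1-D coast dynamics, invented
# numbers; not the paper's orbital model.
dt = 10.0                               # coast time on each side of the burn [s]
Phi = np.array([[1.0, dt],              # state transition matrix for one coast
                [0.0, 1.0]])
dv, sigma_dv = 5.0, 0.05                # nominal delta-V and 1-sigma execution error

P0 = np.diag([100.0, 0.01])             # initial covariance (position, velocity)
Q_burn = np.diag([0.0, sigma_dv**2])    # process noise from burn magnitude error

# linearized propagation: prop -> burn (add Q) -> prop
P_final = Phi @ (Phi @ P0 @ Phi.T + Q_burn) @ Phi.T

# Monte Carlo "truth" for comparison
rng = np.random.default_rng(0)
x = rng.multivariate_normal([0.0, 0.0], P0, size=200_000)
x = x @ Phi.T                                            # first coast
x[:, 1] += dv + rng.normal(0.0, sigma_dv, size=len(x))   # imperfect burn
x = x @ Phi.T                                            # second coast
P_mc = np.cov(x.T)
```

In this linear toy the nominal burn only shifts the mean, so the process noise term is what reproduces the Monte Carlo spread, mirroring the paper's observation that omitting it (method 1) cannot recover the final covariance.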
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-30
... DEPARTMENT OF THE TREASURY Internal Revenue Service 26 CFR Part 1 [TD 9534] RIN 1545-BD81 Methods... describes corrections to final regulations (TD 9534) relating to the methods of accounting, including the inventory methods, to be used by corporations that acquire the assets of other corporations in certain...
Martin, Emma C; Aarons, Leon; Yates, James W T
2016-07-01
Xenograft studies are commonly used to assess the efficacy of new compounds and characterise their dose-response relationship. Analysis often involves comparing the final tumour sizes across dose groups. This can cause bias, as often in xenograft studies a tumour burden limit (TBL) is imposed for ethical reasons, leading to the animals with the largest tumours being excluded from the final analysis. This means the average tumour size, particularly in the control group, is underestimated, leading to an underestimate of the treatment effect. Four methods to account for dropout due to the TBL are proposed, which use all the available data instead of only final observations: modelling, pattern mixture models, treating dropouts as censored using the M3 method and joint modelling of tumour growth and dropout. The methods were applied to both a simulated data set and a real example. All four proposed methods led to an improvement in the estimate of treatment effect in the simulated data. The joint modelling method performed most strongly, with the censoring method also providing a good estimate of the treatment effect, but with higher uncertainty. In the real data example, the dose-response estimated using the censoring and joint modelling methods was higher than the very flat curve estimated from average final measurements. Accounting for dropout using the proposed censoring or joint modelling methods allows the treatment effect to be recovered in studies where it may have been obscured due to dropout caused by the TBL.
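The bias mechanism can be reproduced with a short simulation. This is a hedged sketch with invented numbers, not the authors' models or data: exponential tumour growth, a TBL that removes the fastest-growing animals, and a comparison of the naive final-size average against a fit that uses every pre-dropout observation:

```python
import numpy as np

# Invented illustration of TBL dropout bias: (a) the naive mean of final sizes
# among surviving animals vs. (b) a log-linear growth fit using all
# observations, including those from animals later removed at the TBL.
rng = np.random.default_rng(1)
n = 200
days = np.arange(0, 29, 7)                  # 5 visits over 4 weeks
k = rng.normal(0.10, 0.02, size=n)          # per-animal growth rate [1/day]
v0 = 100.0                                  # starting volume [mm^3]
vols = v0 * np.exp(np.outer(k, days))       # noise-free trajectories
TBL = 1500.0

# an animal leaves the study at the first visit its tumour exceeds the TBL
alive = np.cumprod(vols <= TBL, axis=1).astype(bool)

# (a) naive: average final observed size over animals still on study
naive_final = vols[alive[:, -1], -1].mean()

# (b) use all pre-dropout observations: fit log-linear growth per animal
k_hat = []
for i in range(n):
    m = alive[i]
    if m.sum() >= 2:
        k_hat.append(np.polyfit(days[m], np.log(vols[i, m]), 1)[0])
pred_final = v0 * np.exp(np.mean(k_hat) * days[-1])
```

The naive estimate sits well below the longitudinal prediction because the largest tumours are exactly the ones censored by the TBL.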
Double Photoionization of helium atom using Screening Potential Approach
NASA Astrophysics Data System (ADS)
Saha, Haripada
2014-05-01
The triple differential cross section for double photoionization of the helium atom will be investigated using our recently extended MCHF method. It is well known that electron correlation effects in both the initial and final states are very important. To incorporate these effects, we will use the multi-configuration Hartree-Fock method to account for electron correlation in the initial state. Electron correlation in the final state will be taken into account using the angle-dependent screening potential approximation. The triple differential cross section (TDCS) will be calculated at a photon energy of 20 eV, for which experimental results are available. Our results will be compared with the available experimental and theoretical observations.
Recent trends related to the use of formal methods in software engineering
NASA Technical Reports Server (NTRS)
Prehn, Soren
1986-01-01
An account is given of some recent developments and trends related to the development and use of formal methods in software engineering. Ongoing activities in Europe are focussed on, since there seems to be a notable difference in attitude towards industrial usage of formal methods in Europe and in the U.S. A more detailed account is given of the currently most widespread formal method in Europe: the Vienna Development Method. Finally, the use of Ada is discussed in relation to the application of formal methods, and the potential for constructing Ada-specific tools based on that method is considered.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-02
... study employed the same Activity Based Cost (ABC) accounting method detailed in the Final Rule establishing the process for setting fees (75 FR 24796 (May 6, 2010)). The ABC methodology is consistent with widely accepted accounting principles and complies with the provisions of 31 U.S.C. 9701 and other...
Computation of subsonic flow around airfoil systems with multiple separation
NASA Technical Reports Server (NTRS)
Jacob, K.
1982-01-01
A numerical method for computing the subsonic flow around multi-element airfoil systems was developed, allowing for flow separation at one or more elements. Besides multiple rear separation, short bubbles on the upper surface and cove bubbles can also be approximately taken into account, as can compressibility effects for purely subsonic flow. After its presentation, the method is applied to several examples and improved in some details. Finally, the present limitations and desirable extensions are discussed.
77 FR 28790 - Medical Loss Ratio Requirements Under the Patient Protection and Affordable Care Act
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-16
... information will be available on the HHS Web site, HealthCare.gov , providing an efficient method of public... Sources, Methods, and Limitations On December 1, 2010, we published an interim final rule (75 FR 74864... impacts of the MLR rule, the data contain certain limitations; we developed imputation methods to account...
20 CFR 404.1694 - Final accounting by the State.
Code of Federal Regulations, 2010 CFR
2010-04-01
... function. Disputes concerning final accounting issues which cannot be resolved between the State and us... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Final accounting by the State. 404.1694... DISABILITY INSURANCE (1950- ) Determinations of Disability Assumption of Disability Determination Function...
20 CFR 416.1094 - Final accounting by the State.
Code of Federal Regulations, 2010 CFR
2010-04-01
... function. Disputes concerning final accounting issues which cannot be resolved between the State and us... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Final accounting by the State. 416.1094... AGED, BLIND, AND DISABLED Determinations of Disability Assumption of Disability Determination Function...
Accounting for location and timing in NOx emission trading programs. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nichols, A.L.
1997-12-01
This report describes approaches to designing emission trading programs for nitrogen oxides (NOx) to account for the locations of emission sources. When a trading region is relatively small, program managers can assume that the location of the sources engaging in trades has little or no effect. However, if policy makers extend the program to larger regions, this assumption may be questioned. Therefore, EPRI has undertaken a survey of methods for incorporating location considerations into trading programs. Application of the best method may help to preserve, and even enhance, the flexibility and savings afforded utilities by emission trading.
NASA Astrophysics Data System (ADS)
Koryanov, V.; Kazakovtsev, V.; Harri, A.-M.; Heilimo, J.; Haukka, H.; Aleksashkin, S.
2015-10-01
This research work is devoted to the analysis of the angular motion of a landing vehicle (LV) with an inflatable braking device (IBD), taking into account the influence of wind load during the final stage of the descent. Methods for calculating the parameters of the angular motion of such a landing vehicle, allowing for the small asymmetries that can give rise to complex dynamic phenomena, are used to analyze the motion of the vehicle at the final stage of its flight through the atmosphere.
NASA Astrophysics Data System (ADS)
Sadeghipour, Negar; Davis, Scott C.; Tichauer, Kenneth M.
2018-02-01
Dynamic fluorescence imaging approaches can be used to estimate the concentration of cell surface receptors in vivo. Kinetic models are used to generate the final estimation by taking the targeted imaging agent concentration as a function of time. However, tissue absorption and scattering properties cause the final readout signal to be on a different scale than the real fluorescent agent concentration. In paired-agent imaging approaches, simultaneous injection of a suitable control imaging agent with a targeted one can account for non-specific uptake and retention of the targeted agent. Additionally, the signal from the control agent can be a normalizing factor to correct for tissue optical property differences. In this study, the kinetic model used for paired-agent imaging analysis (i.e., simplified reference tissue model) is modified and tested in simulation and experimental data in a way that accounts for the scaling correction within the kinetic model fit to the data to ultimately extract an estimate of the targeted biomarker concentration.
ERIC Educational Resources Information Center
VanderLaan, Ski R.
2010-01-01
This mixed methods study (Creswell, 2008) was designed to test the influence of collaborative testing on learning using a quasi-experimental approach. This study used a modified embedded mixed method design in which the qualitative and quantitative data, associated with the secondary questions, provided a supportive role in a study based primarily…
Self-efficacy is independently associated with brain volume in older women
Davis, Jennifer C.; Nagamatsu, Lindsay S.; Hsu, Chun Liang; Beattie, B. Lynn; Liu-Ambrose, Teresa
2015-01-01
Background Aging is highly associated with neurodegeneration and atrophy of the brain. Evidence suggests that personality variables are risk factors for reduced brain volume. We examine whether falls-related self-efficacy is independently associated with brain volume. Method A cross-sectional analysis of whether falls-related self-efficacy is independently associated with brain volumes (total, grey, and white matter). Three multivariate regression models were constructed. Covariates included in the models were age, global cognition, systolic blood pressure, functional comorbidity index, and current physical activity level. MRI scans were acquired from 79 community-dwelling senior women aged 65 to 75 years old. Falls-related self-efficacy was assessed by the Activities Specific Balance Confidence (ABC) Scale. Results After accounting for covariates, falls-related self-efficacy was independently associated with both total brain volume and total grey matter volume. The final model for total brain volume accounted for 17% of the variance, with the ABC score accounting for 8%. For total grey matter volume, the final model accounted for 24% of the variance, with the ABC score accounting for 10%. Conclusion We provide novel evidence that falls-related self-efficacy, a modifiable risk factor for healthy aging, is positively associated with total brain volume and total grey matter volume. Trial Registration ClinicalTrials.gov Identifier: NCT00426881. PMID:22436405
Error analysis in stereo vision for location measurement of 3D point
NASA Astrophysics Data System (ADS)
Li, Yunting; Zhang, Jun; Tian, Jinwen
2015-12-01
Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that computes the uncertainty region of the point location by intersecting the two pixel fields of view, which may produce loose bounds. Moreover, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method for estimating the location error that takes most sources of error into account. We sum up and simplify all the input errors into five parameters by a rotation transformation. We then use the fast midpoint-method algorithm to derive the mathematical relationship between the target point and these parameters. Thus the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we return to the error propagation of the primitive input errors in the stereo system, covering the whole analysis process from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our method.
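The underlying idea of propagating input errors to a 3D-point covariance can be sketched with generic first-order (Jacobian) propagation for a rectified stereo pair. This is not the authors' exact five-parameter midpoint formulation; the focal length, baseline, pixel coordinates, and noise level below are invented:

```python
import numpy as np

# First-order propagation of pixel noise to a 3-D point covariance for a
# rectified stereo pair, where Z = f*b/(xl - xr). Generic sketch with
# invented numbers, checked against Monte Carlo.
f, b = 800.0, 0.12              # focal length [px], baseline [m]
xl, xr, y = 100.0, 60.0, 20.0   # matched pixel coordinates (left x, right x, y)
sigma_px = 0.5                  # 1-sigma pixel localization error

d = xl - xr                     # disparity
Z = f * b / d
X = xl * Z / f
Y = y * Z / f

# Jacobian of (X, Y, Z) with respect to the measurements (xl, xr, y)
J = np.array([
    [Z / f - X / d, X / d, 0.0],
    [-Y / d,        Y / d, Z / f],
    [-Z / d,        Z / d, 0.0],
])
Sigma_px = sigma_px**2 * np.eye(3)
P = J @ Sigma_px @ J.T          # linearized covariance of the 3-D point

# Monte Carlo check of the linearization
rng = np.random.default_rng(2)
m = rng.normal([xl, xr, y], sigma_px, size=(200_000, 3))
Zm = f * b / (m[:, 0] - m[:, 1])
pts = np.column_stack([m[:, 0] * Zm / f, m[:, 2] * Zm / f, Zm])
P_mc = np.cov(pts.T)
```

For pixel noise that is small relative to the disparity, the linearized covariance agrees with the sampled one to within a few percent.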
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-24
... Accounting Oversight Board; Order Approving Proposed Board Funding Final Rules for Allocation of the Board's Accounting Support Fee Among Issuers, Brokers, and Dealers, and Other Amendments to the Board's Funding Rules August 18, 2011. I. Introduction On June 21, 2011, the Public Company Accounting Oversight Board (the...
Development of a practical costing method for hospitals.
Cao, Pengyu; Toyabe, Shin-Ichi; Akazawa, Kouhei
2006-03-01
To realize effective cost control, a practical and accurate cost accounting system is indispensable in hospitals. In traditional cost accounting systems, volume-based costing (VBC) is the most popular method. In this method, indirect costs are allocated to each cost object (services or units of a hospital) using a single indicator called a cost driver (e.g., labor hours, revenues, or the number of patients). However, this method often yields rough and inaccurate results. The activity-based costing (ABC) method, introduced in the mid-1990s, can provide more accurate results. With the ABC method, all events or transactions that cause costs are recognized as "activities", and a specific cost driver is prepared for each activity. Finally, the costs of activities are allocated to cost objects by the corresponding cost drivers. However, it is much more complex and costly than traditional cost accounting methods because the data collection for cost drivers is not always easy. In this study, we developed a simplified ABC (S-ABC) costing method that reduces the workload of ABC costing by reducing the number of cost drivers used. Using the S-ABC method, we estimated the cost of laboratory tests and obtained results similar in accuracy to the ABC method (the largest difference was 2.64%), while reducing the seven cost drivers used in the ABC method to four. Moreover, we performed an evaluation using other sample data from a physiological laboratory department to confirm the effectiveness of the new method. In conclusion, the S-ABC method provides two advantages over the VBC and ABC methods: (1) it obtains accurate results, and (2) it is simpler to perform. Once the number of cost drivers is reduced by applying the proposed S-ABC method to the data collected for the ABC method, cost accounting can easily be performed with few cost drivers from the second round of costing onward.
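The ABC allocation step itself is simple to state in code. The activities, drivers, and figures below are invented, and this sketch shows only the common allocation mechanics, not the paper's S-ABC driver-merging procedure:

```python
# Activity-based costing (ABC) allocation mechanics: each activity's cost pool
# is spread over cost objects in proportion to cost-driver consumption.
# Activities, drivers, and figures are invented for illustration.
activities = {  # activity -> (cost pool, driver units consumed per cost object)
    "specimen handling": (6000.0, {"test A": 300, "test B": 100}),
    "analysis":          (9000.0, {"test A": 100, "test B": 200}),
}

def abc_allocate(activities):
    costs = {}
    for pool, usage in activities.values():
        rate = pool / sum(usage.values())       # cost-driver rate for this activity
        for obj, units in usage.items():
            costs[obj] = costs.get(obj, 0.0) + rate * units
    return costs

costs = abc_allocate(activities)    # every cost pool is fully allocated
```

Reducing the number of drivers, as S-ABC does, shrinks the per-activity data collection in `usage` while keeping the same allocation arithmetic.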
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-12
... Accounting Oversight Board; Notice of Filing of Proposed Board Funding Final Rules for Allocation of the...'' means the portion of the accounting support fee established by the Board that is to be allocated among... support fee'' means the portion of the accounting support fee established by the Board that is to be...
Report #2003-1-00048, Jan 21, 2003. The Program’s financial statements are presented as an enterprise fund using the accrual method of accounting whereby revenues are recorded when earned and expenses are recorded when the related liability is incurred.
A Commentary on Literacy Narratives as Sponsors of Literacy
ERIC Educational Resources Information Center
Brandt, Deborah
2015-01-01
This brief commentary first clarifies Brandt's concept of sponsors of literacy in light of the way the concept has been taken up in writing studies. Then it treats Brandt's methods for handling accounts of literacy learning in comparison with other ways of analyzing biographical material. Finally it takes up Lawrence's argument about literacy…
Overload retardation due to plasticity-induced crack closure
NASA Technical Reports Server (NTRS)
Fleck, N. A.; Shercliff, H. R.
1989-01-01
Experiments are reported which show that plasticity-induced crack closure can account for crack growth retardation following an overload. The finite element method is used to provide evidence which supports the experimental observations of crack closure. Finally, a simple model is presented which predicts with limited success the retardation transient following an overload.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-19
...This document contains final regulations that provide guidance on the application of sections 162(a) and 263(a) of the Internal Revenue Code (Code) to amounts paid to acquire, produce, or improve tangible property. The final regulations clarify and expand the standards in the current regulations under sections 162(a) and 263(a). These final regulations replace and remove temporary regulations under sections 162(a) and 263(a) and withdraw proposed regulations that cross referenced the text of those temporary regulations. This document also contains final regulations under section 167 regarding accounting for and retirement of depreciable property and final regulations under section 168 regarding accounting for property under the Modified Accelerated Cost Recovery System (MACRS) other than general asset accounts. The final regulations will affect all taxpayers that acquire, produce, or improve tangible property. These final regulations do not finalize or remove the 2011 temporary regulations under section 168 regarding general asset accounts and disposition of property subject to section 168, which are addressed in the notice of proposed rulemaking on this subject in the Proposed Rules section in this issue of the Federal Register.
Code of Federal Regulations, 2010 CFR
2010-10-01
... unit general and administrative expenses to final cost objectives. 9904.410 Section 9904.410 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING...
Accounting for interim safety monitoring of an adverse event upon termination of a clinical trial.
Dallas, Michael J
2008-01-01
Upon termination of a clinical trial that uses interim evaluations to determine whether the trial can be stopped, a proper statistical analysis must account for the interim evaluations. For example, in a group-sequential design where the efficacy of a treatment regimen is evaluated at interim stages, and the opportunity to stop the trial based on positive efficacy findings exists, the terminal p-value, point estimate, and confidence limits of the outcome of interest must be adjusted to eliminate bias. While it is standard practice to adjust terminal statistical analyses for opportunities to stop on "positive" findings, adjusting for opportunities to stop on "negative" findings is also important. Stopping rules for negative findings are particularly useful when monitoring a specific rare serious adverse event in trials designed to show safety with respect to the event. In these settings, establishing conservative stopping rules is appropriate, and therefore accounting for the interim monitoring can have a substantial effect on the final results. Here I present a method to account for interim safety monitoring and illustrate its usefulness. The method is demonstrated to have advantages over methodology that does not account for interim monitoring.
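Why the interim look must be accounted for can be seen in a small exact calculation. This is a generic two-stage event-count design with invented numbers, not the paper's method:

```python
from math import comb

# Two-stage safety monitoring (invented design numbers): stop at the interim
# if the adverse-event count is high, otherwise continue to the final look.
# Under the null, the chance of ever crossing a boundary exceeds the naive
# single-look level, so an unadjusted terminal analysis is biased.
p0 = 0.10            # null adverse-event rate
n1, n2 = 50, 50      # interim and second-stage sample sizes
c1, c2 = 10, 16      # event-count thresholds at the interim and final looks

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# P(stop at interim): first-stage events >= c1
p_stop = sum(binom_pmf(k, n1, p0) for k in range(c1, n1 + 1))

# P(continue, then cross at the end): k1 < c1 and k1 + k2 >= c2
p_late = sum(
    binom_pmf(k1, n1, p0)
    * sum(binom_pmf(k2, n2, p0) for k2 in range(max(c2 - k1, 0), n2 + 1))
    for k1 in range(c1)
)
overall = p_stop + p_late

# naive: a single look at n1 + n2 subjects with threshold c2
p_naive = sum(binom_pmf(k, n1 + n2, p0) for k in range(c2, n1 + n2 + 1))
```

The overall boundary-crossing probability is strictly larger than the naive single-look probability, which is the inflation a terminal adjustment must remove.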
Application of an enriched FEM technique in thermo-mechanical contact problems
NASA Astrophysics Data System (ADS)
Khoei, A. R.; Bahmani, B.
2018-02-01
In this paper, an enriched FEM technique is employed for the thermo-mechanical contact problem based on the extended finite element method. A fully coupled thermo-mechanical contact formulation is presented in the framework of the X-FEM technique that takes into account deformable continuum mechanics and transient heat transfer analysis. The Coulomb frictional law is applied for the mechanical contact problem, and a pressure-dependent thermal contact model is employed through an explicit formulation in the weak form of the X-FEM method. The equilibrium equations are discretized by the Newmark time-splitting method, and the final set of non-linear equations is solved by the Newton-Raphson method using a staggered algorithm. Finally, in order to illustrate the capability of the proposed computational model, several numerical examples are solved and the results are compared with those reported in the literature.
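The Newmark time stepping named above can be illustrated on a single degree of freedom. This is a hedged 1-DOF sketch with invented parameters, not the paper's coupled thermo-mechanical system; for an undamped linear oscillator, average-acceleration Newmark (beta = 1/4, gamma = 1/2) preserves the mechanical energy:

```python
# Average-acceleration Newmark (beta = 1/4, gamma = 1/2) on m*u'' + k*u = 0.
# 1-DOF illustration with invented parameters; for this linear case the
# implicit solve reduces to a one-line division, and the scheme conserves
# the mechanical energy.
m, k = 1.0, 4.0
beta, gamma = 0.25, 0.5
dt, steps = 0.01, 1000
u, v = 1.0, 0.0            # initial displacement and velocity
a = -k * u / m             # consistent initial acceleration

for _ in range(steps):
    u_pred = u + dt * v + dt**2 * (0.5 - beta) * a   # Newmark predictors
    v_pred = v + dt * (1.0 - gamma) * a
    a = -k * u_pred / (m + k * beta * dt**2)         # implicit solve (linear case)
    u = u_pred + beta * dt**2 * a                    # correctors
    v = v_pred + gamma * dt * a

energy = 0.5 * m * v**2 + 0.5 * k * u**2             # stays at its initial value 2.0
```

In the nonlinear coupled setting of the paper, the one-line implicit solve is replaced by Newton-Raphson iterations at each step.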
Web-Based Honorarium Confirmation System Prototype
NASA Astrophysics Data System (ADS)
Wisswani, N. W.; Catur Bawa, I. G. N. B.
2018-01-01
Improving services in an academic environment can be achieved by streamlining the salary payment process for all employees. As a form of control to maintain financial transparency, employees should have information about the salary payment process. Currently, employees are notified of committee honorarium payments manually: the payment is credited to the employee's bank account, and to learn its details they must visit the accounting unit for further information. Even employees who do visit the accounting unit still find it difficult to obtain detailed information about the honoraria received in their accounts, owing to the large amount of data to be managed. To address this issue, this research designs a prototype of a web-based system for the accounting unit that provides detailed confirmation of the financial transactions credited to employee bank accounts, complementing the notifications delivered through the mobile banking system. The prototype is developed with the Waterfall method, tested on end users, and implemented in PHP with MySQL as the DBMS.
78 FR 13521 - Great Lakes Pilotage Rates-2013 Annual Review and Adjustment
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-28
... Reconciliation Act CPA Certified Public Accountant CPI Consumer Price Index E.O. Executive Order FR Federal... contract with independent accountants to assist in that review. This final rule is based on the review of... reports of the independent accountants, before the review is finalized. Comments by the pilots...
The Comprehension of Rapid Speech by the Blind: Part III. Final Report.
ERIC Educational Resources Information Center
Foulke, Emerson
Accounts of completed and ongoing research conducted from 1964 to 1968 are presented on the subject of accelerated speech as a substitute for the written word. Included are a review of the research on intelligibility and comprehension of accelerated speech, some methods for controlling the word rate of recorded speech, and a comparison of…
Thermal conduction and gravitational collapse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herrera, L.; Jimenez, J.; Esculpi, M.
1987-11-15
A method used to study the evolution of radiating spheres, reported some years ago by Herrera, Jimenez, and Ruggeri, is extended to the case in which thermal conduction within the sphere is taken into account. By means of an explicit example it is shown that heat flow, if present, may play an important role, affecting the final outcome of collapse.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-05
... heating exceeds the high-stage compressor capacity for cooling. Finally, the test procedure must account... test method to cover Hallowell's three-capacity compressor. The two (of three potential) active stages... pumps for the heating mode as follows: a. Conduct one Maximum Temperature Test (H0 1 ), two High...
4 CFR 22.3 - Appeals-How Taken [Rule 3].
Code of Federal Regulations, 2014 CFR
2014-01-01
... not issued a final decision within a reasonable time, taking into account such factors as the size and...., UPS or FedEx), facsimile, or e-mail, although e-mail is the preferred method of delivery in all Board matters. The use of first class or parcel post mail is strongly discouraged because the delivery delays...
4 CFR 22.3 - Appeals-How Taken [Rule 3].
Code of Federal Regulations, 2013 CFR
2013-01-01
... not issued a final decision within a reasonable time, taking into account such factors as the size and...., UPS or FedEx), facsimile, or e-mail, although e-mail is the preferred method of delivery in all Board matters. The use of first class or parcel post mail is strongly discouraged because the delivery delays...
4 CFR 22.3 - Appeals-How Taken [Rule 3].
Code of Federal Regulations, 2012 CFR
2012-01-01
... not issued a final decision within a reasonable time, taking into account such factors as the size and...., UPS or FedEx), facsimile, or e-mail, although e-mail is the preferred method of delivery in all Board matters. The use of first class or parcel post mail is strongly discouraged because the delivery delays...
4 CFR 22.3 - Appeals-How Taken [Rule 3].
Code of Federal Regulations, 2010 CFR
2010-01-01
... not issued a final decision within a reasonable time, taking into account such factors as the size and...., UPS or FedEx), facsimile, or e-mail, although e-mail is the preferred method of delivery in all Board matters. The use of first class or parcel post mail is strongly discouraged because the delivery delays...
4 CFR 22.3 - Appeals-How Taken [Rule 3].
Code of Federal Regulations, 2011 CFR
2011-01-01
... not issued a final decision within a reasonable time, taking into account such factors as the size and...., UPS or FedEx), facsimile, or e-mail, although e-mail is the preferred method of delivery in all Board matters. The use of first class or parcel post mail is strongly discouraged because the delivery delays...
Multi-criteria evaluation methods in the production scheduling
NASA Astrophysics Data System (ADS)
Kalinowski, K.; Krenczyk, D.; Paprocka, I.; Kempa, W.; Grabowik, C.
2016-08-01
The paper presents a discussion on the practical application of different methods of multi-criteria evaluation in the process of scheduling in manufacturing systems. Two main groups of methods are distinguished: methods based on a distance function (using a metacriterion) and methods that create a Pareto set of possible solutions. The basic criteria used for scheduling are also described, and the overall procedure of the evaluation process in production scheduling is presented. It takes into account the actions in the whole scheduling process and the participation of the human decision maker (HDM). The specified HDM decisions concern creating and editing the set of evaluation criteria, selecting the multi-criteria evaluation method, interacting in the search process, using informal criteria, and making final changes to the schedule before implementation. Depending on need, scheduling may be completely or partially automated: full automation is possible with a metacriterion-based objective function, whereas if a Pareto set is generated, the final decision has to be made by the HDM.
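The two groups of methods can be contrasted in a few lines. The schedules and scores below are invented, and both criteria are minimized:

```python
# Distance-function (metacriterion) evaluation vs. Pareto-set evaluation on
# invented schedules scored by (makespan, total tardiness), both minimized.
schedules = {"S1": (40, 12), "S2": (35, 20), "S3": (50, 5), "S4": (42, 13)}

def pareto_set(scored):
    """Schedules not dominated by any other (<= on every criterion, < on one)."""
    def dominates(b, a):
        return all(y <= x for x, y in zip(a, b)) and any(y < x for x, y in zip(a, b))
    return {k for k, v in scored.items()
            if not any(dominates(w, v) for j, w in scored.items() if j != k)}

def metacriterion(scored, weights=(0.5, 0.5)):
    """Pick the single schedule minimizing a weighted sum of the criteria."""
    return min(scored, key=lambda k: sum(w * x for w, x in zip(weights, scored[k])))
```

The metacriterion returns one schedule automatically, while the Pareto set leaves several non-dominated candidates for the HDM to choose among, matching the paper's division of labor.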
Objective estimates based on experimental data and initial and final knowledge
NASA Technical Reports Server (NTRS)
Rosenbaum, B. M.
1972-01-01
An extension of the method of Jaynes, whereby least biased probability estimates are obtained, permits such estimates to be made which account for experimental data on hand as well as prior and posterior knowledge. These estimates can be made for both discrete and continuous sample spaces. The method allows a simple interpretation of Laplace's two rules: the principle of insufficient reason and the rule of succession. Several examples are analyzed by way of illustration.
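Laplace's rule of succession, mentioned above, is compact enough to state directly: with a uniform prior over the unknown success rate, s successes in n trials give probability (s + 1)/(n + 2) for the next trial, and with no data this reduces to the principle of insufficient reason (1/2):

```python
from fractions import Fraction

# Rule of succession as the posterior mean of a uniform-prior binomial:
# after s successes in n trials, P(next success) = (s + 1) / (n + 2).
def rule_of_succession(s, n):
    return Fraction(s + 1, n + 2)

assert rule_of_succession(0, 0) == Fraction(1, 2)   # no data: insufficient reason
print(rule_of_succession(10, 10))                   # 11/12
```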
Conservation: Toward firmer ground
NASA Technical Reports Server (NTRS)
1975-01-01
The following aspects of energy conservation were reviewed in order to place the problems in proper perspective: history and goals, conservation accounting criteria, and a method for overcoming obstacles. The effects of changing prices and available supplies of energy sources, and their causes, on consumption levels during the last few decades were described. Some examples of attainable conservation goals were listed and justified, and a number of specific criteria applicable to conservation accounting were given. Finally, a discussion was presented relating the following aspects of energy conservation: widespread impact, involvement of government, industry, politics, moral and ethical aspects, urgency, and the time element.
Boundary element analysis of post-tensioned slabs
NASA Astrophysics Data System (ADS)
Rashed, Youssef F.
2015-06-01
In this paper, the boundary element method is applied to carry out the structural analysis of post-tensioned flat slabs. The shear-deformable plate-bending model is employed. The effect of the pre-stressing cables is taken into account via the equivalent load method. The formulation is automated using a computer program, which uses quadratic boundary elements. Verification samples are presented, and finally a practical application is analyzed where results are compared against those obtained from the finite element method. The proposed method is efficient in terms of computer storage and processing time as well as the ease in data input and modifications.
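For a parabolic tendon profile, the equivalent load method replaces the cable by the classical uniform upward load w = 8Pe/L^2 (P the prestress force, e the drape, L the span). A numeric sketch with invented values:

```python
# Equivalent load of a parabolic prestressing tendon: w = 8*P*e/L**2
# (standard result; the numbers here are invented for illustration).
P = 1200e3    # prestress force [N]
e = 0.10      # tendon drape (sag) [m]
L = 8.0       # span [m]
w_eq = 8 * P * e / L**2   # uniform upward load [N/m]
```

This upward load, together with the anchorage forces, is what the boundary element model applies in place of the cables themselves.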
Do reading and spelling share a lexicon?
Jones, Angela C; Rawson, Katherine A
2016-05-01
In the reading and spelling literature, an ongoing debate concerns whether reading and spelling share a single orthographic lexicon or rely upon independent lexica. Available evidence tends to support a single lexicon account over an independent lexica account, but evidence is mixed and open to alternative explanation. In the current work, we propose another, largely ignored account--separate-but-shared lexica--according to which reading and spelling have separate orthographic lexica, but information can be shared between them. We report three experiments designed to competitively evaluate these three theoretical accounts. In each experiment, participants learned new words via reading training and/or spelling training. The key manipulation concerned the amount of reading versus spelling practice a given item received. Following training, we assessed both response time and accuracy on final outcome measures of reading and spelling. According to the independent lexica account, final performance in one modality will not be influenced by the level of practice in the other modality. According to the single lexicon account, final performance will depend on the overall amount of practice regardless of modality. According to the separate-but-shared account, final performance will be influenced by the level of practice in both modalities but will benefit more from same-modality practice. Results support the separate-but-shared account, indicating that reading and spelling rely upon separate lexica, but information can be shared between them. Copyright © 2016 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Saat, Maisarah Mohamed; Yusoff, Rosman Md.; Panatik, Siti Aisyah
2014-01-01
Studies (for example, Dellaportas in Making a difference with a discrete course on accounting ethics. "J Bus Ethics" 65(4):391-404, 2006; Saat in "An investigation of the effects of a moral education program on the ethical development of Malaysian future accountants," 2010) on final year accounting students show that industrial…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1978-03-22
A grid-connected Integrated Community Energy System (ICES) with a coal-burning power plant located on the University of Minnesota campus is planned. The cost benefit analysis performed for this ICES, the cost accounting methods used, and a computer simulation of the operation of the power plant are described. (LCL)
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-26
... excludes (1) polyethylene bags that are not printed with logos or store names and that are closeable with... comparison methodology to TCI's targeted sales and the average-to-average comparison methodology to TCI's non... average-to-average comparison method does not account for such price differences and results in the...
Zhao, Xu; Yang, Hong; Yang, Zhifeng; Chen, Bin; Qin, Yan
2010-12-01
The virtual water strategy, which advocates importing water-intensive products and exporting products with low water intensity, is gradually being accepted as one of the options for solving the water crisis in severely water-scarce regions. However, if we count the virtual water embodied in imported products as water saved for a region, we might overestimate the saving by including virtual water that is later re-exported in association with processed products made from the originally imported products. This problem can be avoided by accounting for the saved water through calculating the water footprint (WF) of domestic final consumptive products. In this paper, a water footprint accounting framework based on input-output analysis (IOA) is built to account for the WF and virtual water trade of final consumptive products in the water-stressed Haihe River basin in China for the years 1997, 2000, and 2002. The input-output transaction tables for the three years are constructed. The results show WFs of 46.57, 44.52, and 42.71 billion m(3) for the three years, respectively. These volumes are higher than the water used directly in the corresponding years in the basin. A WF intensity (WFI) indicator is then used to assess whether the economic activities in the basin are consistent with the virtual water strategy. The temporal change of the WFI is also decomposed by the index number analysis method. The results show that the basin was silently importing virtual water through the trade of raw and processed food commodities in the context of the overall economic circulation.
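The core of an IOA-based water footprint calculation is the Leontief inverse applied to direct water intensities. A minimal sketch with a made-up two-sector economy (all numbers hypothetical, not the Haihe basin tables):

```python
import numpy as np

# Hypothetical two-sector economy (agriculture, industry)
Z = np.array([[20.0, 30.0],        # inter-sector transactions
              [10.0, 40.0]])
x = np.array([100.0, 200.0])       # total sectoral output
f = np.array([50.0, 150.0])        # final demand (rows balance: Z @ 1 + f = x)
w = np.array([60.0, 10.0])         # direct water use by sector (million m^3)

A = Z / x                          # technical coefficients a_ij = z_ij / x_j
L = np.linalg.inv(np.eye(2) - A)   # Leontief inverse
d = w / x                          # direct water intensity per unit output
wf = (d @ L) * f                   # water footprint of each final-demand sector

# In a balanced economy, total embodied water equals total direct use
print(wf, wf.sum())
```

The sum of the footprints recovers total direct water use (70 here), which is the balance check that makes this accounting free of the double-counting the abstract warns about.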
Multiple Testing of Gene Sets from Gene Ontology: Possibilities and Pitfalls.
Meijer, Rosa J; Goeman, Jelle J
2016-09-01
The use of multiple testing procedures in the context of gene-set testing is an important but relatively underexposed topic. If a multiple testing method is used, this is usually a standard familywise error rate (FWER) or false discovery rate (FDR) controlling procedure in which the logical relationships that exist between the different (self-contained) hypotheses are not taken into account. Taking those relationships into account, however, can lead to more powerful variants of existing multiple testing procedures and can make summarizing and interpreting the final results easier. We will show that, from the perspective of interpretation as well as from the perspective of power improvement, FWER controlling methods are more suitable than FDR controlling methods. As an example of a possible power improvement, we suggest a modified version of the popular method by Holm, which we also implemented in the R package cherry. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
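The unmodified Holm step-down procedure that the proposed method builds on can be stated in a few lines. The sketch below is plain Python illustrating baseline FWER control, not the relationship-aware variant implemented in the cherry package:

```python
def holm(pvalues, alpha=0.05):
    """Holm's step-down procedure: rejects hypotheses while controlling
    the familywise error rate (FWER) at level alpha.
    Returns reject/accept decisions in the original order."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        # the (rank+1)-th smallest p-value is tested at alpha / (m - rank)
        if pvalues[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # step down: once one test fails, all larger p-values fail
    return reject

print(holm([0.012, 0.2, 0.001, 0.04]))   # -> [True, False, True, False]
```

Exploiting logical relations between gene-set hypotheses, as the authors propose, lets some of these per-step thresholds be relaxed, yielding strictly more rejections at the same FWER.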
Sintering of viscous droplets under surface tension
NASA Astrophysics Data System (ADS)
Wadsworth, Fabian B.; Vasseur, Jérémie; Llewellin, Edward W.; Schauroth, Jenny; Dobson, Katherine J.; Scheu, Bettina; Dingwell, Donald B.
2016-04-01
We conduct experiments to investigate the sintering of high-viscosity liquid droplets. Free-standing cylinders of spherical glass beads are heated above their glass transition temperature, causing them to densify under surface tension. We determine the evolving volume of the bead pack at high spatial and temporal resolution. We use these data to test a range of existing models. We extend the models to account for the time-dependent droplet viscosity that results from non-isothermal conditions, and to account for non-zero final porosity. We also present a method to account for the initial distribution of radii of the pores interstitial to the liquid spheres, which allows the models to be used with no fitting parameters. We find a good agreement between the models and the data for times less than the capillary relaxation timescale. For longer times, we find an increasing discrepancy between the data and the model as the Darcy outgassing time-scale approaches the sintering timescale. We conclude that the decreasing permeability of the sintering system inhibits late-stage densification. Finally, we determine the residual, trapped gas volume fraction at equilibrium using X-ray computed tomography and compare this with theoretical values for the critical gas volume fraction in systems of overlapping spheres.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-06
...The Commodity Futures Trading Commission (the ``Commission'') is issuing final rules implementing new statutory provisions enacted by Title VII of the Dodd-Frank Wall Street Reform and Consumer Protection Act (the ``Dodd-Frank Act''). Specifically, the final rule contained herein imposes requirements on swap dealers (``SDs'') and major swap participants (``MSPs'') with respect to the treatment of collateral posted by their counterparties to margin, guarantee, or secure uncleared swaps. Additionally, the final rule includes revisions to ensure that, for purposes of subchapter IV of chapter 7 of the Bankruptcy Code, securities held in a portfolio margining account that is a futures account or a Cleared Swaps Customer Account constitute ``customer property''; and owners of such account constitute ``customers.''
76 FR 10234 - Amendment to the Bank Secrecy Act Regulations-Reports of Foreign Financial Accounts
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-24
...FinCEN is issuing this final rule to amend the Bank Secrecy Act (BSA) regulations regarding reports of foreign financial accounts. The rule addresses the scope of the persons that are required to file reports of foreign financial accounts. The rule further specifies the types of accounts that are reportable, and provides filing relief in the form of exemptions for certain persons with signature or other authority over foreign financial accounts. Finally, the rule adopts provisions intended to prevent persons subject to the rule from avoiding their reporting requirement.
77 FR 181 - Federal Acquisition Regulation; Federal Acquisition Circular 2005-55; Introduction
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-03
... System. VI Updated Financial 2010-005 Chambers. Accounting Standards Board Accounting References. VII... Financial Accounting Standards Board Accounting References (FAR Case 2010-005) This final rule amends the... authoritative accounting standards owing to the Financial Accounting Standards Board's Accounting Standards...
An Algorithm for the Weighted Earliness-Tardiness Unconstrained Project Scheduling Problem
NASA Astrophysics Data System (ADS)
Afshar Nadjafi, Behrouz; Shadrokh, Shahram
This research considers a project scheduling problem with the objective of minimizing weighted earliness-tardiness penalty costs, taking into account a deadline for the project and precedence relations among the activities. An exact recursive method has been proposed for solving the basic form of this problem. We present a new depth-first branch-and-bound algorithm for an extended form of the problem, in which the time value of money is taken into account by discounting the cash flows. The algorithm is extended with two bounding rules in order to reduce the size of the branch-and-bound tree. Finally, some test problems are solved and computational results are reported.
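A brute-force baseline for the basic (undiscounted) problem makes the objective concrete; the toy chain of activities below uses hypothetical data and does not reproduce the paper's recursion or branch-and-bound:

```python
from itertools import product

# Toy instance: three activities in a precedence chain, each with a
# duration, due date, earliness weight and tardiness weight.
dur = [2, 3, 1]
due = [3, 6, 8]
w_e = [1, 2, 1]
w_t = [3, 4, 2]
deadline = 10

def cost(starts):
    total = 0
    for s, d, dd, we, wt in zip(starts, dur, due, w_e, w_t):
        finish = s + d
        total += we * max(dd - finish, 0) + wt * max(finish - dd, 0)
    return total

best = None
# Enumerate integer start times respecting the chain and the deadline
for s in product(range(deadline + 1), repeat=3):
    feasible = (all(s[i] + dur[i] <= s[i + 1] for i in range(2))
                and s[2] + dur[2] <= deadline)
    if feasible:
        c = cost(s)
        if best is None or c < best[0]:
            best = (c, s)

print(best)   # in this instance every activity can finish on its due date
```

Enumeration is exponential in the number of activities, which is exactly why bounding rules to prune the branch-and-bound tree matter.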
Brain Imaging, Forward Inference, and Theories of Reasoning
Heit, Evan
2015-01-01
This review focuses on the issue of how neuroimaging studies address theoretical accounts of reasoning, through the lens of the method of forward inference (Henson, 2005, 2006). After theories of deductive and inductive reasoning are briefly presented, the method of forward inference for distinguishing between psychological theories based on brain imaging evidence is critically reviewed. Brain imaging studies of reasoning, comparing deductive and inductive arguments, comparing meaningful versus non-meaningful material, investigating hemispheric localization, and comparing conditional and relational arguments, are assessed in light of the method of forward inference. Finally, conclusions are drawn with regard to future research opportunities. PMID:25620926
Health care evaluation, utilitarianism and distortionary taxes.
Calcott, P
2000-09-01
Cost Utility Analysis (CUA) and Cost Benefit Analysis (CBA) are methods to evaluate allocations of health care resources. Problems are raised for both methods when income taxes do not meet the first-best optimum. This paper explores the implications of three ways in which taxes may fall short of this ideal. First, taxes may be distortionary. Second, they may be designed and administered without reference to information that is used by providers of health care. Finally, the share of tax revenue that is devoted to health care may be suboptimal. The two methods are amended to account for these factors.
Brain imaging, forward inference, and theories of reasoning.
Heit, Evan
2014-01-01
This review focuses on the issue of how neuroimaging studies address theoretical accounts of reasoning, through the lens of the method of forward inference (Henson, 2005, 2006). After theories of deductive and inductive reasoning are briefly presented, the method of forward inference for distinguishing between psychological theories based on brain imaging evidence is critically reviewed. Brain imaging studies of reasoning, comparing deductive and inductive arguments, comparing meaningful versus non-meaningful material, investigating hemispheric localization, and comparing conditional and relational arguments, are assessed in light of the method of forward inference. Finally, conclusions are drawn with regard to future research opportunities.
Bujkiewicz, Sylwia; Thompson, John R; Riley, Richard D; Abrams, Keith R
2016-03-30
A number of meta-analytical methods have been proposed that aim to evaluate surrogate endpoints. Bivariate meta-analytical methods can be used to predict the treatment effect for the final outcome from the treatment effect estimate measured on the surrogate endpoint while taking into account the uncertainty around the effect estimate for the surrogate endpoint. In this paper, extensions to multivariate models are developed aiming to include multiple surrogate endpoints with the potential benefit of reducing the uncertainty when making predictions. In this Bayesian multivariate meta-analytic framework, the between-study variability is modelled in a formulation of a product of normal univariate distributions. This formulation is particularly convenient for including multiple surrogate endpoints and flexible for modelling the outcomes, which can be surrogate endpoints to the final outcome and potentially to one another. Two models are proposed: the first uses an unstructured between-study covariance matrix, assuming the treatment effects on all outcomes are correlated; the second uses a structured between-study covariance matrix, assuming treatment effects on some of the outcomes are conditionally independent. While the two models are developed for summary data at the study level, the individual-level association is taken into account by the use of Prentice's criteria (obtained from individual patient data) to inform the within-study correlations in the models. The modelling techniques are investigated using an example in relapsing remitting multiple sclerosis, where disability worsening is the final outcome, while relapse rate and MRI lesions are potential surrogates for disability progression. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Symposium on Dissertations on Chemical Oceanography, March 5-9, 1984. Abstracts.
1984-03-09
polysaccharides; to determine their chemical structures by the application of various chemical and physical methods; and, finally, to clarify the distri...conducted to determine linkage types of monosaccharide constituents of oligo- and polysaccharides from seawater samples. The following results were...coastal water. Mono-, oligo- and polysaccharides accounted for 7-9%, 16-26%, and 31-43% of the dissolved carbohydrates, respectively. The polysaccharide
Marginalized zero-altered models for longitudinal count data.
Tabb, Loni Philip; Tchetgen, Eric J Tchetgen; Wellenius, Greg A; Coull, Brent A
2016-10-01
Count data often exhibit more zeros than predicted by common count distributions like the Poisson or negative binomial. In recent years, there has been considerable interest in methods for analyzing zero-inflated count data in longitudinal or other correlated data settings. A common approach has been to extend zero-inflated Poisson models to include random effects that account for correlation among observations. However, these models have been shown to have a few drawbacks, including interpretability of regression coefficients and numerical instability of fitting algorithms even when the data arise from the assumed model. To address these issues, we propose a model that parameterizes the marginal associations between the count outcome and the covariates as easily interpretable log relative rates, while including random effects to account for correlation among observations. One of the main advantages of this marginal model is that it allows a basis upon which we can directly compare the performance of standard methods that ignore zero inflation with that of a method that explicitly takes zero inflation into account. We present simulations of these various model formulations in terms of bias and variance estimation. Finally, we apply the proposed approach to analyze toxicological data of the effect of emissions on cardiac arrhythmias.
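As background for why zero inflation distorts naive count summaries, a small simulation (toy parameters, not the authors' model or the toxicological data) shows the marginal mean of a zero-inflated Poisson falling below the Poisson rate:

```python
import math
import random

random.seed(1)

def sample_zip(lam, pi_zero, n):
    """Draw n values from a zero-inflated Poisson: with probability
    pi_zero emit a structural zero, otherwise draw from Poisson(lam)."""
    out = []
    for _ in range(n):
        if random.random() < pi_zero:
            out.append(0)
            continue
        # Knuth's multiplicative Poisson sampler (fine for small lam)
        threshold, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= random.random()
            if p <= threshold:
                break
            k += 1
        out.append(k)
    return out

ys = sample_zip(lam=3.0, pi_zero=0.3, n=20000)
mean = sum(ys) / len(ys)
# Marginal mean is (1 - pi_zero) * lam = 0.7 * 3.0 = 2.1, not lam = 3.0
print(round(mean, 1))
```

A marginalized model parameterizes this overall mean directly, which is why its coefficients read as population-level log relative rates rather than rates conditional on the latent "not a structural zero" class.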
Marginalized zero-altered models for longitudinal count data
Tabb, Loni Philip; Tchetgen, Eric J. Tchetgen; Wellenius, Greg A.; Coull, Brent A.
2015-01-01
Count data often exhibit more zeros than predicted by common count distributions like the Poisson or negative binomial. In recent years, there has been considerable interest in methods for analyzing zero-inflated count data in longitudinal or other correlated data settings. A common approach has been to extend zero-inflated Poisson models to include random effects that account for correlation among observations. However, these models have been shown to have a few drawbacks, including interpretability of regression coefficients and numerical instability of fitting algorithms even when the data arise from the assumed model. To address these issues, we propose a model that parameterizes the marginal associations between the count outcome and the covariates as easily interpretable log relative rates, while including random effects to account for correlation among observations. One of the main advantages of this marginal model is that it allows a basis upon which we can directly compare the performance of standard methods that ignore zero inflation with that of a method that explicitly takes zero inflation into account. We present simulations of these various model formulations in terms of bias and variance estimation. Finally, we apply the proposed approach to analyze toxicological data of the effect of emissions on cardiac arrhythmias. PMID:27867423
Accounting for partiality in serial crystallography using ray-tracing principles.
Kroon-Batenburg, Loes M J; Schreurs, Antoine M M; Ravelli, Raimond B G; Gros, Piet
2015-09-01
Serial crystallography generates `still' diffraction data sets that are composed of single diffraction images obtained from a large number of crystals arbitrarily oriented in the X-ray beam. Estimation of the reflection partialities, which accounts for the expected observed fractions of diffraction intensities, has so far been problematic. In this paper, a method is derived for modelling the partialities by making use of the ray-tracing diffraction-integration method EVAL. The method estimates partialities based on crystal mosaicity, beam divergence, wavelength dispersion, crystal size and the interference function, accounting for crystallite size. It is shown that modelling of each reflection by a distribution of interference-function weighted rays yields a `still' Lorentz factor. Still data are compared with a conventional rotation data set collected from a single lysozyme crystal. Overall, the presented still integration method improves the data quality markedly. The R factor of the still data compared with the rotation data decreases from 26% using a Monte Carlo approach to 12% after applying the Lorentz correction, to 5.3% when estimating partialities by EVAL and finally to 4.7% after post-refinement. The merging R(int) factor of the still data improves from 105 to 56% but remains high. This suggests that the accuracy of the model parameters could be further improved. However, with a multiplicity of around 40 and an R(int) of ∼50% the merged still data approximate the quality of the rotation data. The presented integration method suitably accounts for the partiality of the observed intensities in still diffraction data, which is a critical step to improve data quality in serial crystallography.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-02
... DEPARTMENT OF VETERANS AFFAIRS [OMB Control No. 2900-0017] Proposed Information Collection (Annual-Final Report and Account) Activity: Comment Request AGENCY: Veterans Benefits Administration, Department... of certain information by the agency. Under the Paperwork Reduction Act (PRA) of 1995, Federal...
Li, Xiangzhu; Paldus, Josef
2009-09-21
The automerization of cyclobutadiene (CBD) is employed to test the performance of the reduced multireference (RMR) coupled-cluster (CC) method with singles and doubles (RMR CCSD) that employs a modest-size MR CISD wave function as an external source for the most important (primary) triples and quadruples in order to account for the nondynamic correlation effects in the presence of quasidegeneracy, as well as of its perturbatively corrected version accounting for the remaining (secondary) triples [RMR CCSD(T)]. The experimental results are compared with those obtained by the standard CCSD and CCSD(T) methods, by the state universal (SU) MR CCSD and its state selective or state specific (SS) version as formulated by Mukherjee et al. (SS MRCC or MkMRCC) and, wherever available, by the Brillouin-Wigner MRCC [MR BWCCSD(T)] method. Both restricted Hartree-Fock (RHF) and multiconfigurational self-consistent field (MCSCF) molecular orbitals are employed. For a smaller STO-3G basis set we also make a comparison with the exact full configuration interaction (FCI) results. Both the fundamental vibrational energies, obtained via the integral averaging method (IAM), which can handle anomalous potentials and automatically accounts for anharmonicity, and the CBD automerization barrier for the interconversion of the two rectangular structures are considered. It is shown that the RMR CCSD(T) potential has the smallest nonparallelism error relative to the FCI potential, and the corresponding fundamental vibrational frequencies compare reasonably well with the experimental ones and are very close to those recently obtained by other authors. The effect of anharmonicity is assessed using second-order perturbation theory (MP2). Finally, the invariance of the RMR CC methods with respect to orbital rotations is also examined.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-10
... Accounting Standards: Elimination of the Exemption From Cost Accounting Standards for Contracts and...: Office of Management and Budget (OMB), Office of Federal Procurement Policy (OFPP), Cost Accounting... Accounting Standards (CAS) Board, is publishing a final rule to eliminate the exemption from regulations...
EPA announced the availability of the final report, Implications of Climate Change for State Bioassessment Programs and Approaches to Account for Effects. This report uses biological data collected by four states in wadeable rivers and streams to examine the components ...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-13
... DEPARTMENT OF VETERANS AFFAIRS [OMB Control No. 2900-0017] Agency Information Collection (Annual-Final Report and Account) Activities Under OMB Review AGENCY: Veterans Benefits Administration...), Department of Veterans Affairs, will submit the collection of information abstracted below to the Office of...
75 FR 63823 - Final Guidance, “Federal Greenhouse Gas Accounting and Reporting”
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-18
..., 2009. The purpose of the Executive Order is to establish an integrated strategy toward sustainability... Federal agencies. Among other provisions, E.O. 13514 requires agencies to measure, report, and reduce.../sustainability/fed-ghg . DATES: The Final Guidance, ``Federal Greenhouse Gas Accounting and Reporting'' is...
Control of electromagnetic stirring by power focusing in large induction crucible furnaces
NASA Astrophysics Data System (ADS)
Frizen, V. E.; Sarapulov, F. N.
2011-12-01
An approach is proposed for the calculation of the operating conditions of an induction crucible furnace at the final stage of melting with the power focused in various regions of melted metal. The calculation is performed using a model based on the method of detailed magnetic equivalent circuits. The combination of the furnace and a thyristor frequency converter is taken into account in modeling.
Sintering of viscous droplets under surface tension
Vasseur, Jérémie; Llewellin, Edward W.; Schauroth, Jenny; Dobson, Katherine J.; Scheu, Bettina; Dingwell, Donald B.
2016-01-01
We conduct experiments to investigate the sintering of high-viscosity liquid droplets. Free-standing cylinders of spherical glass beads are heated above their glass transition temperature, causing them to densify under surface tension. We determine the evolving volume of the bead pack at high spatial and temporal resolution. We use these data to test a range of existing models. We extend the models to account for the time-dependent droplet viscosity that results from non-isothermal conditions, and to account for non-zero final porosity. We also present a method to account for the initial distribution of radii of the pores interstitial to the liquid spheres, which allows the models to be used with no fitting parameters. We find a good agreement between the models and the data for times less than the capillary relaxation timescale. For longer times, we find an increasing discrepancy between the data and the model as the Darcy outgassing time-scale approaches the sintering timescale. We conclude that the decreasing permeability of the sintering system inhibits late-stage densification. Finally, we determine the residual, trapped gas volume fraction at equilibrium using X-ray computed tomography and compare this with theoretical values for the critical gas volume fraction in systems of overlapping spheres. PMID:27274687
NASA Technical Reports Server (NTRS)
Salazar, Giovanni; Droba, Justin C.; Oliver, Brandon; Amar, Adam J.
2016-01-01
With the recent development of multi-dimensional thermal protection system (TPS) material response codes, the capability to account for surface-to-surface radiation exchange in complex geometries is critical. This paper presents recent efforts to implement such capabilities in the CHarring Ablator Response (CHAR) code developed at NASA's Johnson Space Center. This work also describes the different numerical methods implemented in the code to compute geometric view factors for radiation problems involving multiple surfaces. Verification of the code's radiation capabilities and results of a code-to-code comparison are presented. Finally, a demonstration case of a two-dimensional ablating cavity with enclosure radiation accounting for a changing geometry is shown.
A generalization of random matrix theory and its application to statistical physics.
Wang, Duan; Zhang, Xin; Horvatic, Davor; Podobnik, Boris; Eugene Stanley, H
2017-02-01
To study the statistical structure of cross-correlations in empirical data, we generalize random matrix theory and propose a new method of cross-correlation analysis, autoregressive random matrix theory (ARRMT). ARRMT takes into account the influence of auto-correlations in the study of cross-correlations in multiple time series. We first determine, analytically and numerically, how auto-correlations affect the eigenvalue distribution of the correlation matrix. We then introduce ARRMT with a detailed procedure for implementing the method. Finally, we illustrate the method using two examples: inflation rates and air pressure data for 95 US cities.
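The baseline that ARRMT corrects is standard random matrix theory: for i.i.d. series, eigenvalues of the empirical correlation matrix stay within the Marchenko-Pastur bounds. A minimal numpy check of that null (this is the uncorrected benchmark, not the ARRMT adjustment itself):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 2000, 50                          # series length, number of series
X = rng.standard_normal((T, N))          # i.i.d. data: no auto-correlation
eig = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))

# Marchenko-Pastur support for the null of uncorrelated series
q = N / T
lam_min = (1 - np.sqrt(q)) ** 2
lam_max = (1 + np.sqrt(q)) ** 2
print(eig.min() > lam_min - 0.1, eig.max() < lam_max + 0.1)   # prints: True True
```

Auto-correlated series widen this eigenvalue support, so comparing empirical eigenvalues against the plain Marchenko-Pastur band would mislabel auto-correlation effects as genuine cross-correlation; adjusting the null for auto-correlation is the point of ARRMT.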
Task Force on Education Funding Equity, Accountability, and Partnerships. Final Report.
ERIC Educational Resources Information Center
Maryland State Dept. of Legislative Services, Annapolis.
In 1997, Maryland formed the Task Force on Education Funding Equity, Accountability, and Partnerships. The group made a comprehensive review of education funding and programs in grades K-12 to ensure that students throughout Maryland have an equal opportunity for academic success. The task force's final report features the membership roster, the…
NASA Technical Reports Server (NTRS)
Cunningham, A. M., Jr.
1976-01-01
The feasibility of calculating steady mean flow solutions for nonlinear transonic flow over finite wings with a linear theory aerodynamic computer program is studied. The methodology is based on independent solutions for upper and lower surface pressures that are coupled through the external flow fields. Two approaches for coupling the solutions are investigated which include the diaphragm and the edge singularity method. The final method is a combination of both where a line source along the wing leading edge is used to account for blunt nose airfoil effects; and the upper and lower surface flow fields are coupled through a diaphragm in the plane of the wing. An iterative solution is used to arrive at the nonuniform flow solution for both nonlifting and lifting cases. Final results for a swept tapered wing in subcritical flow show that the method converges in three iterations and gives excellent agreement with experiment at alpha = 0 deg and 2 deg. Recommendations are made for development of a procedure for routine application.
78 FR 32099 - Garnishment of Accounts Containing Federal Benefit Payments
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-29
... current balance of the account, whichever is lower. DATES: This final rule is effective June 28, 2013. FOR... to the account during the lookback period or the balance of the account on the date of the account... requirement to send a notice if the balance in the account is zero or negative on the date of account review...
Accounting for partiality in serial crystallography using ray-tracing principles
Kroon-Batenburg, Loes M. J.; Schreurs, Antoine M. M.; Ravelli, Raimond B. G.; Gros, Piet
2015-01-01
Serial crystallography generates ‘still’ diffraction data sets that are composed of single diffraction images obtained from a large number of crystals arbitrarily oriented in the X-ray beam. Estimation of the reflection partialities, which accounts for the expected observed fractions of diffraction intensities, has so far been problematic. In this paper, a method is derived for modelling the partialities by making use of the ray-tracing diffraction-integration method EVAL. The method estimates partialities based on crystal mosaicity, beam divergence, wavelength dispersion, crystal size and the interference function, accounting for crystallite size. It is shown that modelling of each reflection by a distribution of interference-function weighted rays yields a ‘still’ Lorentz factor. Still data are compared with a conventional rotation data set collected from a single lysozyme crystal. Overall, the presented still integration method improves the data quality markedly. The R factor of the still data compared with the rotation data decreases from 26% using a Monte Carlo approach to 12% after applying the Lorentz correction, to 5.3% when estimating partialities by EVAL and finally to 4.7% after post-refinement. The merging R int factor of the still data improves from 105 to 56% but remains high. This suggests that the accuracy of the model parameters could be further improved. However, with a multiplicity of around 40 and an R int of ∼50% the merged still data approximate the quality of the rotation data. The presented integration method suitably accounts for the partiality of the observed intensities in still diffraction data, which is a critical step to improve data quality in serial crystallography. PMID:26327370
Analyzing Propensity Matched Zero-Inflated Count Outcomes in Observational Studies
DeSantis, Stacia M.; Lazaridis, Christos; Ji, Shuang; Spinale, Francis G.
2013-01-01
Determining the effectiveness of different treatments from observational data, which are characterized by imbalance between groups due to lack of randomization, is challenging. Propensity matching is often used to rectify imbalances among prognostic variables. However, there are no guidelines on how appropriately to analyze group matched data when the outcome is a zero inflated count. In addition, there is debate over whether to account for correlation of responses induced by matching, and/or whether to adjust for variables used in generating the propensity score in the final analysis. The aim of this research is to compare covariate unadjusted and adjusted zero-inflated Poisson models that do and do not account for the correlation. A simulation study is conducted, demonstrating that it is necessary to adjust for potential residual confounding, but that accounting for correlation is less important. The methods are applied to a biomedical research data set. PMID:24298197
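The zero-inflated Poisson model compared in this study can be sketched in a few lines. Below is a minimal illustration of fitting a ZIP likelihood by maximum likelihood on simulated data; it omits covariates, propensity matching and the correlation structure discussed in the abstract, and all parameter values are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def zip_negloglik(params, y):
    # params = [logit of structural-zero probability pi, log of Poisson mean lam]
    pi = 1.0 / (1.0 + np.exp(-params[0]))
    lam = np.exp(params[1])
    loglik = np.where(
        y == 0,
        np.log(pi + (1.0 - pi) * np.exp(-lam)),                    # zero from either process
        np.log1p(-pi) - lam + y * np.log(lam) - gammaln(y + 1.0),  # Poisson count part
    )
    return -loglik.sum()

# Simulated outcome: 30% structural zeros, Poisson mean 2.5 elsewhere.
rng = np.random.default_rng(0)
n = 2000
structural_zero = rng.random(n) < 0.3
y = np.where(structural_zero, 0, rng.poisson(2.5, n))

res = minimize(zip_negloglik, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
pi_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
lam_hat = np.exp(res.x[1])
```

With this sample size the estimates should land close to the generating values; a covariate-adjusted analysis, as the paper recommends, would replace the scalar mean with a regression link on the matched covariates.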
48 CFR 9905.505-50 - Techniques for application.
Code of Federal Regulations, 2010 CFR
2010-10-01
... this cost accounting principle does not require that allocation of unallowable costs to final cost.... 9905.505-50 Section 9905.505-50 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD... ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS FOR EDUCATIONAL INSTITUTIONS 9905.505-50 Techniques for...
A Modified Monte Carlo Method for Carrier Transport in Germanium, Free of Isotropic Rates
NASA Astrophysics Data System (ADS)
Sundqvist, Kyle
2010-03-01
We present a new method for carrier transport simulation, relevant for high-purity germanium ⟨100⟩ at a temperature of 40 mK. In this system, the scattering of electrons and holes is dominated by spontaneous phonon emission. Free carriers are always out of equilibrium with the lattice. We must also properly account for directional effects due to band structure, but there are many cautions in the literature about treating germanium in particular. These objections arise because the germanium electron system is anisotropic to an extreme degree, while standard Monte Carlo algorithms maintain a reliance on isotropic, integrated rates. We re-examine Fermi's Golden Rule to produce a Monte Carlo method free of isotropic rates. Traditional Monte Carlo codes implement particle scattering based on an isotropically averaged rate, followed by a separate selection of the particle's final state via a momentum-dependent probability. In our method, the kernel of Fermi's Golden Rule produces analytical, bivariate rates which allow for the simultaneous choice of scatter and final state selection. Energy and momentum are automatically conserved. We compare our results to experimental data.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-13
... currently somewhere in post-approval, (4) those who have had all their funds disbursed and final accounting is not yet complete, and (5) those who have had all of their funds disbursed and final accounting is... integrated, comprehensive Voice of the Veteran (VOV) measurement program for their lines of business. This...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-12
... FEDERAL ACCOUNTING STANDARDS ADVISORY BOARD Notice of Issuance of Statement of Federal Financial Accounting Standards 41, Deferral of the Effective Date of SFFAS 38, Accounting for Federal Oil and Gas Resources, and Issuance of Final Technical Bulletin 2011-1, Accounting for Federal Natural Resources Other...
Transient liquid phase diffusion bonding of Udimet 720 for Stirling power converter applications
NASA Technical Reports Server (NTRS)
Mittendorf, Donald L.; Baggenstoss, William G.
1992-01-01
Udimet 720 has been selected for use on Stirling power converters for space applications. Because Udimet 720 is generally considered susceptible to strain age cracking if traditional fusion welding is used, other joining methods are being considered. A process for transient liquid phase diffusion bonding of Udimet 720 has been theoretically developed in an effort to eliminate the strain age crack concern. This development has taken into account such variables as final grain size, joint homogenization, joint efficiency related to bonding aid material, bonding aid material application method, and thermal cycle.
Lessing, P.; Messina, C.P.; Fonner, R.F.
1983-01-01
Landslide risk can be assessed by evaluating geological conditions associated with past events. A sample of 2,416 slides from urban areas in West Virginia, each with 12 associated geological factors, has been analyzed using SAS computer methods. In addition, selected data have been normalized to account for areal distribution of rock formations, soil series, and slope percents. Final calculations yield landslide risk assessments of 1.50 = high risk. The simplicity of the method provides for a rapid, initial assessment prior to financial investment. However, it does not replace on-site investigations, nor excuse poor construction. © 1983 Springer-Verlag New York Inc.
Pull-out fibers from composite materials at high rate of loading
NASA Technical Reports Server (NTRS)
Amijima, S.; Fujii, T.
1981-01-01
Numerical and experimental results are presented on the pullout phenomenon in composite materials at a high rate of loading. The finite element method was used, taking into account the existence of a virtual shear deformation layer as the interface between fiber and matrix. Experimental results agree well with those obtained by the finite element method. Numerical results show that the interlaminar shear stress is time dependent; in addition, it depends on the applied load time history. Under step pulse loading, the interlaminar shear stress fluctuates, finally decaying to its value under static loading.
Neurophenomenology revisited: second-person methods for the study of human consciousness
Olivares, Francisco A.; Vargas, Esteban; Fuentes, Claudio; Martínez-Pernía, David; Canales-Johnson, Andrés
2015-01-01
In the study of consciousness, neurophenomenology was originally established as a novel research program attempting to reconcile two apparently irreconcilable methodologies in psychology: qualitative and quantitative methods. Its potential relies on Francisco Varela’s idea of reciprocal constraints, in which first-person accounts and neurophysiological data mutually inform each other. However, since its first conceptualization, neurophenomenology has encountered methodological problems. These problems have emerged mainly because of the difficulty of obtaining and analyzing subjective reports in a systematic manner. However, more recently, several interview techniques for describing subjective accounts have been developed, collectively known as “second-person methods.” Second-person methods refer to interview techniques that solicit both verbal and non-verbal information from participants in order to obtain systematic and detailed subjective reports. Here, we examine the potential for employing second-person methodologies in the neurophenomenological study of consciousness and we propose three practical ideas for developing a second-person neurophenomenological method. Thus, we first describe second-person methodologies available in the literature for analyzing subjective reports, identifying specific constraints on the status of the first-, second- and third-person methods. Second, we analyze two experimental studies that explicitly incorporate second-person methods for traversing the “gap” between phenomenology and neuroscience. Third, we analyze the challenges that second-person accounts face in establishing an objective methodology for comparing results across different participants and interviewers: this is the “validation” problem. Finally, we synthesize the common aspects of the interview methods described above.
In conclusion, our arguments emphasize that second-person methods represent a powerful approach for closing the gap between the experiential and the neurobiological levels of description in the study of human consciousness. PMID:26074839
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-30
... conditions for sale accounting treatment under generally accepted accounting principles (``GAAP''). The rule... securitization participants. Modifications to GAAP Accounting Standards On June 12, 2009, the Financial Accounting Standards Board (``FASB'') finalized modifications to GAAP through Statement of Financial...
12 CFR 19.245 - Notice of removal, suspension or debarment.
Code of Federal Regulations, 2011 CFR
2011-01-01
... PRACTICE AND PROCEDURE Removal, Suspension, and Debarment of Accountants From Performing Audit Services... final order for removal, suspension, or debarment of an independent public accountant or accounting firm... notice of the order to the other Federal banking agencies. (b) Notice to the Comptroller by accountants...
12 CFR 19.245 - Notice of removal, suspension or debarment.
Code of Federal Regulations, 2013 CFR
2013-01-01
... PRACTICE AND PROCEDURE Removal, Suspension, and Debarment of Accountants From Performing Audit Services... final order for removal, suspension, or debarment of an independent public accountant or accounting firm... notice of the order to the other Federal banking agencies. (b) Notice to the Comptroller by accountants...
12 CFR 19.245 - Notice of removal, suspension or debarment.
Code of Federal Regulations, 2014 CFR
2014-01-01
... PRACTICE AND PROCEDURE Removal, Suspension, and Debarment of Accountants From Performing Audit Services... final order for removal, suspension, or debarment of an independent public accountant or accounting firm... notice of the order to the other Federal banking agencies. (b) Notice to the Comptroller by accountants...
12 CFR 263.403 - Automatic removal, suspension, and debarment.
Code of Federal Regulations, 2011 CFR
2011-01-01
... FEDERAL RESERVE SYSTEM RULES OF PRACTICE FOR HEARINGS Removal, Suspension, and Debarment of Accountants... independent public accountant or accounting firm may not perform audit services for banking organizations if the accountant or firm: (1) Is subject to a final order of removal, suspension, or debarment (other...
12 CFR 19.245 - Notice of removal, suspension or debarment.
Code of Federal Regulations, 2012 CFR
2012-01-01
... PRACTICE AND PROCEDURE Removal, Suspension, and Debarment of Accountants From Performing Audit Services... final order for removal, suspension, or debarment of an independent public accountant or accounting firm... notice of the order to the other Federal banking agencies. (b) Notice to the Comptroller by accountants...
Karagiannidis, Avraam; Xirogiannopoulou, Anna; Tchobanoglous, George
2008-12-01
In the present paper, implementation scenarios of a Pay-As-You-Throw program were developed and analyzed for the first time in Greece. Firstly, the necessary steps for implementing a Pay-As-You-Throw program were determined. A database was developed for the needs of the full cost accounting method, where all financial and waste-production data were inserted, in order to calculate the unit price of charging for four different implementation scenarios of the "polluter-pays" principle. For each scenario, the input in waste management cost was estimated, as well as the total waste charges for households. Finally, a comparative analysis of the results was performed.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-02
... [Flattened paperwork-burden table; recoverable items and preparers: Tax Opinion (Attorney for Sponsor), Transfer Affidavit (Attorney for...), Final Data Statements attached to closing letter (Accountant for Sponsor), Accountants' Closing Letter (Accountant), Accountants' OCS Letter (Accountant); per-item and total burden-hour figures omitted.] ...
Kim, Jae Hwan Eric; Chrostowski, Lukas; Bisaillon, Eric; Plant, David V
2007-08-06
We demonstrate a Finite-Difference Time-Domain (FDTD) phase methodology to estimate resonant wavelengths in Fabry-Perot (FP) cavity structures. We validate the phase method in a conventional Vertical-Cavity Surface-Emitting Laser (VCSEL) structure using a transfer-matrix method, and compare results with a FDTD reflectance method. We extend this approach to a Sub-Wavelength Grating (SWG) and a Photonic Crystal (Phc) slab, either of which may replace one of the Distributed Bragg Reflectors (DBRs) in the VCSEL, and predict resonant conditions with varying lithographic parameters. Finally, we compare the resonant tunabilities of three different VCSEL structures, taking quality factors into account.
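The transfer-matrix validation mentioned above can be illustrated for a simple quarter-wave DBR stack at normal incidence. This is a sketch under assumed, illustrative refractive indices and design wavelength, not the paper's VCSEL parameters:

```python
import numpy as np

def stack_reflectance(ns, ds, wavelength, n_in=1.0, n_sub=3.5):
    # Characteristic-matrix (transfer-matrix) method, normal incidence:
    # each layer contributes a 2x2 matrix; B, C give the field amplitudes.
    M = np.eye(2, dtype=complex)
    for n, d in zip(ns, ds):
        delta = 2.0 * np.pi * n * d / wavelength   # layer phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

lam0 = 850e-9                       # design wavelength (hypothetical)
nH, nL = 3.5, 2.9                   # illustrative high/low indices
layer_indices = [nH, nL] * 20       # 20-pair quarter-wave Bragg mirror
layer_thicknesses = [lam0 / (4.0 * n) for n in layer_indices]
R = stack_reflectance(layer_indices, layer_thicknesses, lam0)
```

At the design wavelength every layer is a quarter wave thick, so the stack reflectance approaches unity; off resonance the same function traces out the DBR stop band.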
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-22
... Cost Accounting Standards: Change to the CAS Applicability Threshold for the Inflation Adjustment to... Federal Procurement Policy, Cost Accounting Standards Board. ACTION: Final rule. SUMMARY: The Office of Federal Procurement Policy (OFPP), Cost Accounting Standards (CAS) Board (Board), has adopted, without...
12 CFR 263.403 - Automatic removal, suspension, and debarment.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Accountants From Performing Audit Services § 263.403 Automatic removal, suspension, and debarment. (a) An independent public accountant or accounting firm may not perform audit services for banking organizations if the accountant or firm: (1) Is subject to a final order of removal, suspension, or debarment (other...
12 CFR 263.403 - Automatic removal, suspension, and debarment.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Accountants From Performing Audit Services § 263.403 Automatic removal, suspension, and debarment. (a) An independent public accountant or accounting firm may not perform audit services for banking organizations if the accountant or firm: (1) Is subject to a final order of removal, suspension, or debarment (other...
12 CFR 263.403 - Automatic removal, suspension, and debarment.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Accountants From Performing Audit Services § 263.403 Automatic removal, suspension, and debarment. (a) An independent public accountant or accounting firm may not perform audit services for banking organizations if the accountant or firm: (1) Is subject to a final order of removal, suspension, or debarment (other...
Magnetization of InAs parabolic quantum dot: An exact diagonalization approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aswathy, K. M., E-mail: aswathykm20@gmail.com; Sanjeev Kumar, D.
2016-04-13
The magnetization of a two-electron InAs quantum dot has been studied as a function of magnetic field. The electron-electron interaction has been taken into account numerically using the exact diagonalization method. The magnetization at zero external magnetic field is zero and increases in the negative direction with increasing field. There is also a paramagnetic peak where the energy levels cross from the singlet state to the triplet state. Finally, the magnetization falls back to negative values and saturates.
Ethnography in qualitative educational research: AMEE Guide No. 80.
Reeves, Scott; Peller, Jennifer; Goldman, Joanne; Kitto, Simon
2013-08-01
Ethnography is a type of qualitative research that gathers observations, interviews and documentary data to produce detailed and comprehensive accounts of different social phenomena. The use of ethnographic research in medical education has produced a number of insightful accounts into its role, functions and difficulties in the preparation of medical students for clinical practice. This AMEE Guide offers an introduction to ethnography - its history, its differing forms, its role in medical education and its practical application. Specifically, the Guide initially outlines the main characteristics of ethnography: describing its origins, outlining its varying forms and discussing its use of theory. It also explores the role, contribution and limitations of ethnographic work undertaken in a medical education context. In addition, the Guide goes on to offer a range of ideas, methods, tools and techniques needed to undertake an ethnographic study. In doing so it discusses its conceptual, methodological, ethical and practice challenges (e.g. demands of recording the complexity of social action, the unpredictability of data collection activities). Finally, the Guide provides a series of final thoughts and ideas for future engagement with ethnography in medical education. This Guide is aimed for those interested in understanding ethnography to develop their evaluative skills when reading such work. It is also aimed at those interested in considering the use of ethnographic methods in their own research work.
Earnshaw, Valerie A.; Jin, Harry; Wickersham, Jeffrey; Kamarulzaman, Adeeba; John, Jacob; Altice, Frederick L.
2015-01-01
OBJECTIVES Stigma towards people living with HIV/AIDS (PLWHA) is strong in Malaysia. Although stigma has been understudied, it may be a barrier to treating the approximately 81 000 Malaysian PLWHA. The current study explores correlates of intentions to discriminate against PLWHA among medical and dental students, the future healthcare providers of Malaysia. METHODS An online, cross-sectional survey of 1296 medical and dental students was conducted in 2012 at seven Malaysian universities; 1165 (89.9%) completed the survey and were analysed. Sociodemographic characteristics, stigma-related constructs and intentions to discriminate against PLWHA were measured. Linear mixed models were conducted, controlling for clustering by university. RESULTS The final multivariate model demonstrated that students who intended to discriminate more against PLWHA were female, less advanced in their training, and studying dentistry. They further endorsed more negative attitudes towards PLWHA, internalised greater HIV-related shame, reported more HIV-related fear and disagreed more strongly that PLWHA deserve good care. The final model accounted for 38% of the variance in discrimination intent, with 10% accounted for by sociodemographic characteristics and 28% accounted for by stigma-related constructs. CONCLUSIONS It is critical to reduce stigma among medical and dental students to eliminate intentions to discriminate and achieve equitable care for Malaysian PLWHA. Stigma-reduction interventions should be multipronged, addressing attitudes, internalised shame, fear and perceptions of deservingness of care. PMID:24666546
ERIC Educational Resources Information Center
Holland, Leigh
2004-01-01
This paper investigates how one course--a final year undergraduate module--has been developed and implemented to inform students about corporate social responsibility from an accounting perspective. It takes as its core the notion of accounting and accountability, and is delivered by accounting lecturers to business students following a range of…
NASA Astrophysics Data System (ADS)
Moissinac, Henri; Maitre, Henri; Bloch, Isabelle
1995-11-01
An image interpretation method is presented for the automatic processing of aerial pictures of an urban landscape. In order to improve the picture analysis, some a priori knowledge extracted from a geographic map is introduced. A coherent graph-based model of the city is built, starting with the road network. A global uncertainty management scheme has been designed in order to evaluate the confidence we can have in the final results. This model and the uncertainty management scheme tend to reflect the hierarchy of the available data and the interpretation levels. The symbolic relationships linking the different kinds of elements are taken into account while propagating and combining the confidence measures along the interpretation process.
Assessment Methods of an Undergraduate Psychiatry Course at a Saudi University
Amr, Mostafa; Amin, Tarek
2012-01-01
Objectives: In Arab countries there are few studies on assessment methods in the field of psychiatry. The objective of this study was to assess the outcome of different forms of psychiatric course assessment among fifth year medical students at King Faisal University, Saudi Arabia. Methods: We examined the performance of 110 fifth-year medical students through objective structured clinical examinations (OSCE), traditional oral clinical examinations (TOCE), portfolios, multiple choice questions (MCQ), and a written examination. Results: The score ranges in TOCE, OSCE, portfolio, and MCQ were 32–50, 7–15, 5–10 and 22–45, respectively. In regression analysis, there was a significant correlation between OSCE and all forms of psychiatry examinations, except for the MCQ marks. OSCE accounted for 65.1% of the variance in total clinical marks and 31.5% of the final marks (P = 0.001), while TOCE alone accounted for 74.5% of the variance in the clinical scores. Conclusions: This study demonstrates a consistency among the students’ assessment methods used in the psychiatry course, particularly the clinical component, in an integrated manner. This information would be useful for future developments in undergraduate teaching. PMID:22548141
49 CFR 520.28 - Preparation of final environmental impact statements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 6 2010-10-01 2010-10-01 false Preparation of final environmental impact... ENVIRONMENTAL IMPACTS Procedures § 520.28 Preparation of final environmental impact statements. (a) If the... for the action shall prepare a final environmental impact statement (FEIS), taking into account all...
78 FR 18795 - Truth in Lending (Regulation Z)
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-28
... applies prior to account opening and during the first year after account opening. This final rule amends Regulation Z to apply the limitation only during the first year after account opening. DATES: This rule is... during the first year after account opening. Id. This rule became effective on February 22, 2010. On...
Articulation Activity for Accounting Programs: Project Results and Descriptive Report. Final Report.
ERIC Educational Resources Information Center
Adams, Esther
In response to the need for a basic articulated accounting curriculum providing for a smooth transition from the secondary to the postsecondary level, Blackhawk Technical Institute (BTI) conducted a project to develop a master list of accounting competencies as the basis of a core accounting curriculum; to determine competency standards; to…
Integrating Computers into the Accounting Curriculum Using an IBM PC Network. Final Report.
ERIC Educational Resources Information Center
Shaoul, Jean
Noting the increased use of microcomputers in commerce and the accounting profession, the Department of Accounting and Finance at the University of Manchester recognized the importance of integrating microcomputers into the accounting curriculum and requested and received a grant to develop an integrated study environment in which students would…
Morin, Fanny; Courtecuisse, Hadrien; Reinertsen, Ingerid; Le Lann, Florian; Palombi, Olivier; Payan, Yohan; Chabanas, Matthieu
2017-08-01
During brain tumor surgery, planning and guidance are based on preoperative images which do not account for brain-shift. However, this deformation is a major source of error in image-guided neurosurgery and affects the accuracy of the procedure. In this paper, we present a constraint-based biomechanical simulation method to compensate for craniotomy-induced brain-shift that integrates the deformations of the blood vessels and cortical surface, using a single intraoperative ultrasound acquisition. Prior to surgery, a patient-specific biomechanical model is built from preoperative images, accounting for the vascular tree in the tumor region and brain soft tissues. Intraoperatively, a navigated ultrasound acquisition is performed directly in contact with the organ. Doppler and B-mode images are recorded simultaneously, enabling the extraction of the blood vessels and probe footprint, respectively. A constraint-based simulation is then executed to register the pre- and intraoperative vascular trees as well as the cortical surface with the probe footprint. Finally, preoperative images are updated to provide the surgeon with images corresponding to the current brain shape for navigation. The robustness of our method is first assessed using sparse and noisy synthetic data. In addition, quantitative results for five clinical cases are provided, first using landmarks set on blood vessels, then based on anatomical structures delineated in medical images. The average distances between paired vessels landmarks ranged from 3.51 to 7.32 (in mm) before compensation. With our method, on average 67% of the brain-shift is corrected (range [1.26; 2.33]) against 57% using one of the closest existing works (range [1.71; 2.84]). Finally, our method is proven to be fully compatible with a surgical workflow in terms of execution times and user interactions. In this paper, a new constraint-based biomechanical simulation method is proposed to compensate for craniotomy-induced brain-shift. 
While being efficient to correct this deformation, the method is fully integrable in a clinical process. Copyright © 2017 Elsevier B.V. All rights reserved.
A novel description of FDG excretion in the renal system: application to metformin-treated models
NASA Astrophysics Data System (ADS)
Garbarino, S.; Caviglia, G.; Sambuceti, G.; Benvenuto, F.; Piana, M.
2014-05-01
This paper introduces a novel compartmental model describing the excretion of 18F-fluoro-deoxyglucose (FDG) in the renal system and a numerical method based on the maximum likelihood for its reduction. This approach accounts for variations in FDG concentration due to water re-absorption in renal tubules and the increase of the bladder’s volume during the FDG excretion process. From the computational viewpoint, the reconstruction of the tracer kinetic parameters is obtained by solving the maximum likelihood problem iteratively, using a non-stationary, steepest descent approach that explicitly accounts for the Poisson nature of nuclear medicine data. The reliability of the method is validated against two sets of synthetic data realized according to realistic conditions. Finally we applied this model to describe FDG excretion in the case of animal models treated with metformin. In particular we show that our approach allows the quantitative estimation of the reduction of FDG de-phosphorylation induced by metformin.
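As an illustration of maximum-likelihood fitting under a Poisson noise model, the sketch below uses the classical multiplicative ML-EM update rather than the authors' non-stationary steepest descent; the system matrix and parameter values are toy stand-ins, not the renal compartmental model:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(500, 3))    # toy "system" matrix (hypothetical)
x_true = np.array([2.0, 5.0, 1.0])          # toy kinetic parameters
y = rng.poisson(A @ x_true)                 # Poisson-distributed measurements

def poisson_nll(x):
    # Negative Poisson log-likelihood up to a constant in x.
    mu = A @ x
    return float(np.sum(mu - y * np.log(np.maximum(mu, 1e-12))))

x = np.ones(3)                              # positive starting point
nll_start = poisson_nll(x)
col_sums = A.sum(axis=0)                    # A^T 1, the EM normalisation
for _ in range(500):
    # Multiplicative ML-EM step: preserves positivity, raises the likelihood.
    x *= A.T @ (y / np.maximum(A @ x, 1e-12)) / col_sums
nll_end = poisson_nll(x)
```

The multiplicative form is a common alternative to gradient descent for Poisson data because it needs no step-size tuning and keeps the parameters non-negative, which matters for tracer concentrations.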
Political incentives towards replacing animal testing in nanotechnology?
Sauer, Ursula G
2009-01-01
The Treaty of Lisbon requests the European Union and the Member States to pay full regard to animal welfare issues when implementing new policies. The present article discusses how these provisions are met in the emerging area of nanotechnology. Political action plans in Europe take animal welfare issues into account to some extent. Funding programmes promote the development of non-animal test methods, but only in the area of nanotoxicology, and even there not sufficiently to "pay full regard" to preventing animal testing, let alone to bring about a paradigm change in toxicology or in biomedical research as such. Ethical deliberations on nanotechnology, which influence future policies, so far do not address animal welfare at all. Considering that risk assessment of nanoproducts is conceived as a key element to protect human dignity, ethical deliberations should address the choice of the underlying testing methods and call for basing nanomaterial safety testing upon the latest scientific (and ethically acceptable) technologies. Finally, public involvement in the debate on nanotechnology should take into account information on resulting animal experiments.
Characterizing the response of a scintillator-based detector to single electrons.
Sang, Xiahan; LeBeau, James M
2016-02-01
Here we report the response of a high angle annular dark field scintillator-based detector to single electrons. We demonstrate that care must be taken when determining the single electron intensity as significant discrepancies can occur when quantifying STEM images with different methods. To account for the detector response, we first image the detector using very low beam currents (∼8fA), and subsequently model the interval between consecutive single electrons events. We find that single electrons striking the detector present a wide distribution of intensities, which we show is not described by a simple function. Further, we present a method to accurately account for the electrons within the incident probe when conducting quantitative imaging. The role detector settings play on determining the single electron intensity is also explored. Finally, we extend our analysis to describe the response of the detector to multiple electron events within the dwell interval of each pixel. Copyright © 2015 Elsevier B.V. All rights reserved.
Estimating linear effects in ANOVA designs: the easy way.
Pinhas, Michal; Tzelgov, Joseph; Ganor-Stern, Dana
2012-09-01
Research in cognitive science has documented numerous phenomena that are approximated by linear relationships. In the domain of numerical cognition, the use of linear regression for estimating linear effects (e.g., distance and SNARC effects) became common following Fias, Brysbaert, Geypens, and d'Ydewalle's (1996) study on the SNARC effect. While their work has become the model for analyzing linear effects in the field, it requires statistical analysis of individual participants and does not provide measures of the proportions of variability accounted for (cf. Lorch & Myers, 1990). In the present methodological note, using both the distance and SNARC effects as examples, we demonstrate how linear effects can be estimated in a simple way within the framework of repeated measures analysis of variance. This method allows for estimating effect sizes in terms of both slope and proportions of variability accounted for. Finally, we show that our method can easily be extended to estimate linear interaction effects, not just linear effects calculated as main effects.
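The repeated-measures estimation of a linear effect described above can be sketched directly: compute a least-squares slope per participant from centered contrast weights, then test the mean slope against zero. All simulated values below are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_subj = 20
levels = np.array([1.0, 2.0, 3.0, 4.0])     # e.g. four numerical-distance conditions
# Simulated condition means per participant: RT drops 20 ms per level + noise.
data = 500.0 - 20.0 * levels + rng.normal(0.0, 15.0, size=(n_subj, levels.size))

w = levels - levels.mean()                  # centered linear contrast weights
slopes = data @ w / (w @ w)                 # least-squares slope per participant
res = stats.ttest_1samp(slopes, 0.0)        # linear effect: mean slope vs. zero
eta_sq = res.statistic**2 / (res.statistic**2 + n_subj - 1)  # variance accounted for
```

This reproduces, in miniature, the note's point: the per-participant slopes give an effect size in slope units, while eta_sq expresses the same linear effect as a proportion of variability accounted for.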
Numerical analysis of laser ablation using the axisymmetric two-temperature model
NASA Astrophysics Data System (ADS)
Dziatkiewicz, Jolanta; Majchrzak, Ewa
2018-01-01
Laser ablation of the axisymmetric micro-domain is analyzed. To describe the thermal processes occurring in the micro-domain the two-temperature hyperbolic model supplemented by the boundary and initial conditions is used. This model takes into account the phase changes of material (solid-liquid and liquid-vapour) and the ablation process. At the stage of numerical computations the finite difference method with staggered grid is used. In the final part the results of computations are shown.
Numerical Simulation of Particle Motion in a Curved Channel
NASA Astrophysics Data System (ADS)
Liu, Yi; Nie, Deming
2018-01-01
In this work the lattice Boltzmann method (LBM) is used to numerically study the motion of a circular particle in a curved channel at intermediate Reynolds numbers (Re). The effects of the Reynolds number and the initial particle position are taken into account. Numerical results include the streamlines, particle trajectories and final equilibrium positions. It has been found that the particle is likely to migrate to a similar equilibrium position irrespective of its initial position when Re is large.
Neighboring extremal optimal control design including model mismatch errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, T.J.; Hull, D.G.
1994-11-01
The mismatch control technique that is used to simplify model equations of motion in order to determine analytic optimal control laws is extended using neighboring extremal theory. The first variation optimal control equations are linearized about the extremal path to account for perturbations in the initial state and the final constraint manifold. A numerical example demonstrates that the tuning procedure inherent in the mismatch control method increases the performance of the controls to the level of a numerically-determined piecewise-linear controller.
1992-02-16
Report fragment (OCR): B. Cost Accounting Standard 418. 1. Definitions. A "cost objective" is "an activity for which a separate measurement of cost is desired." C. Horngren, Cost Accounting: A Managerial Emphasis 21 (5th ed. 1982). Title: "...Segments and Business Unit General and Administrative Expenses to Final Cost Objectives." Author: Stephen Thomas Lynch, Major.
Sánchez, Ariel G.; Grieb, Jan Niklas; Salazar-Albornoz, Salvador; ...
2016-09-30
The cosmological information contained in anisotropic galaxy clustering measurements can often be compressed into a small number of parameters whose posterior distribution is well described by a Gaussian. Here, we present a general methodology to combine these estimates into a single set of consensus constraints that encode the total information of the individual measurements, taking into account the full covariance between the different methods. We also illustrate this technique by applying it to combine the results obtained from different clustering analyses, including measurements of the signature of baryon acoustic oscillations and redshift-space distortions, based on a set of mock catalogues of the final SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS). Our results show that the region of the parameter space allowed by the consensus constraints is smaller than that of the individual methods, highlighting the importance of performing multiple analyses on galaxy surveys even when the measurements are highly correlated. Our paper is part of a set that analyses the final galaxy clustering data set from BOSS. The methodology presented here is used in Alam et al. to produce the final cosmological constraints from BOSS.
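A consensus combination of this kind can be sketched as a best linear unbiased (generalized least squares) estimate: stack the individual Gaussian estimates, weight by the inverse of their full joint covariance, and solve. This generic sketch is not the paper's exact scheme; all names are illustrative.

```python
import numpy as np

def consensus(estimates, joint_cov):
    """Combine m correlated estimates of the same p-dimensional parameter
    vector into consensus constraints:
        x_c = (H^T C^-1 H)^-1 H^T C^-1 x,
    where H stacks m identity blocks and C is the full joint covariance."""
    m, p = len(estimates), len(estimates[0])
    H = np.tile(np.eye(p), (m, 1))
    x = np.concatenate(estimates)
    Cinv = np.linalg.inv(joint_cov)
    cov_c = np.linalg.inv(H.T @ Cinv @ H)
    return cov_c @ H.T @ Cinv @ x, cov_c

# two correlated measurements of one parameter
x_c, cov_c = consensus([np.array([1.0]), np.array([3.0])],
                       joint_cov=np.array([[1.0, 0.5], [0.5, 1.0]]))
```

With correlation 0.5 the consensus variance is 0.75, larger than the 0.5 two independent measurements would give: correlated measurements carry less total information, consistent with the point made in the abstract.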
Fuel Optimal, Finite Thrust Guidance Methods to Circumnavigate with Lighting Constraints
NASA Astrophysics Data System (ADS)
Prince, E. R.; Carr, R. W.; Cobb, R. G.
This paper details improvements made to the authors' most recent work on finding fuel-optimal, finite-thrust guidance to inject an inspector satellite into a prescribed natural motion circumnavigation (NMC) orbit about a resident space object (RSO) in geosynchronous orbit (GEO). Better initial guess methodologies are developed for the low-fidelity-model nonlinear programming problem (NLP) solver, including Clohessy-Wiltshire (CW) targeting, a modified particle swarm optimization (PSO), and MATLAB's genetic algorithm (GA). These solutions may then be fed as initial guesses into a different NLP solver, IPOPT. Celestial lighting constraints are taken into account in addition to the sunlight constraint, ensuring that the resulting NMC also adheres to Moon and Earth lighting constraints. The guidance is initially calculated for a fixed final time, and solutions are then also calculated for fixed final times before and after the original one, allowing mission planners to choose the lowest-cost solution in the resulting range that satisfies all constraints. The developed algorithms provide computationally fast and highly reliable methods for determining fuel-optimal guidance for NMC injections while adhering to multiple lighting constraints.
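Clohessy-Wiltshire targeting of the kind used for the initial guesses can be sketched directly from the CW state-transition matrix: given a relative start and desired end position, solve the position block for the required initial velocity. A minimal sketch (x radial, y along-track, z cross-track; illustrative numbers, not the authors' implementation):

```python
import numpy as np

def cw_matrices(n, t):
    """Position sub-blocks of the Clohessy-Wiltshire state-transition
    matrix for mean motion n and transfer time t."""
    s, c = np.sin(n * t), np.cos(n * t)
    Prr = np.array([[4 - 3*c,       0, 0],
                    [6*(s - n*t),   1, 0],
                    [0,             0, c]])
    Prv = np.array([[s/n,            2*(1 - c)/n,      0],
                    [-2*(1 - c)/n,   (4*s - 3*n*t)/n,  0],
                    [0,              0,                s/n]])
    return Prr, Prv

def cw_target(r0, rf, n, t):
    """Relative velocity at t=0 that carries r0 to rf in time t."""
    Prr, Prv = cw_matrices(n, t)
    return np.linalg.solve(Prv, rf - Prr @ r0)

n_geo = 7.2922e-5                      # GEO mean motion, rad/s
r0 = np.array([100.0, 0.0, 50.0])      # m, relative to the RSO
rf = np.array([0.0, 200.0, 0.0])
v0 = cw_target(r0, rf, n_geo, t=3600.0)
```

Propagating the returned velocity through the same matrices reproduces the target position, which is the standard check on a CW targeter.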
NASA Astrophysics Data System (ADS)
Nasser Eddine, Achraf; Huard, Benoît; Gabano, Jean-Denis; Poinot, Thierry
2018-06-01
This paper deals with the initialization of a nonlinear identification algorithm used to accurately estimate the physical parameters of a lithium-ion battery. A Randles electric equivalent circuit is used to describe the internal impedance of the battery. The diffusion phenomenon related to this modeling is represented using a fractional-order method. The battery model is thus reformulated into a transfer function which can be identified through the Levenberg-Marquardt algorithm so as to ensure convergence to the physical parameters. An initialization method is proposed in this paper that takes into account previously acquired information about the static and dynamic system behavior. The method is validated using a noisy voltage response, while the precision of the final identification results is evaluated using a Monte Carlo method.
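The identification step can be sketched with a simplified Randles circuit (series resistance plus one RC branch; the fractional-order diffusion element of the paper is omitted) fitted to a noisy current-step response by Levenberg-Marquardt. All parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def step_response(theta, t, i_amp):
    """Voltage response of a simplified Randles circuit (R0 in series with
    an R1 || C1 branch) to a current step of amplitude i_amp."""
    r0, r1, c1 = theta
    return i_amp * (r0 + r1 * (1 - np.exp(-t / (r1 * c1))))

rng = np.random.default_rng(0)
t = np.linspace(0, 50, 200)
true = (0.05, 0.02, 400.0)                         # R0 [ohm], R1 [ohm], C1 [F]
v = step_response(true, t, i_amp=2.0) + rng.normal(0, 1e-4, t.size)

# Levenberg-Marquardt fit from a physically informed initial guess
fit = least_squares(lambda th: step_response(th, t, 2.0) - v,
                    x0=[0.04, 0.03, 300.0], method='lm')
```

As the abstract stresses, convergence of such a fit depends on the initial guess `x0`; starting far from physically plausible values can strand the algorithm in a poor local minimum.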
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-27
... FEDERAL DEPOSIT INSURANCE CORPORATION 12 CFR Part 330 RIN 3064-AD37 Deposit Insurance Regulations; Unlimited Coverage for Noninterest-Bearing Transaction Accounts; Inclusion of Interest on Lawyers Trust Accounts AGENCY: Federal Deposit Insurance Corporation (FDIC). ACTION: Final rule.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olander, Jonathan; Myers, Corey
2013-07-01
Studsvik's Processing Facility Erwin (SPFE) has been treating Low-Level Radioactive Waste using its patented THOR process for over 13 years. Studsvik has been mixing and processing wastes of the same waste classification but different chemical and isotopic characteristics for the full extent of this period as a general matter of operations. Studsvik utilizes the accountability method to track the movement of radionuclides from acceptance of waste, through processing, and finally in the classification of waste for disposal. Recently the NRC has proposed to revise the 1995 Branch Technical Position on Concentration Averaging and Encapsulation (1995 BTP on CA) with additional clarification (draft BTP on CA). The draft BTP on CA has paved the way for large-scale blending of higher-activity and lower-activity waste to produce a single waste for the purpose of classification. With the onset of blending in the waste treatment industry, there is concern from the public and state regulators as to the robustness of the accountability method and the ability of processors to prevent the inclusion of hot spots in waste. To address these concerns and verify the accountability method as applied by the SPFE, as well as the SPFE's ability to control waste package classification, testing of actual waste packages was performed. Testing consisted of a comprehensive dose rate survey of a container of processed waste. Separately, the waste package was modeled chemically and radiologically. Comparing the observed and theoretical data demonstrated that actual dose rates were lower than, but consistent with, modeled dose rates. Moreover, the distribution of radioactivity confirms that the SPFE can produce a radiologically homogeneous waste form.
The results of the study demonstrate: 1) the accountability method as applied by the SPFE is valid and produces expected results; 2) the SPFE can produce a radiologically homogeneous waste; and 3) the SPFE can effectively control the waste package classification. (authors)
Potential, velocity, and density fields from sparse and noisy redshift-distance samples - Method
NASA Technical Reports Server (NTRS)
Dekel, Avishai; Bertschinger, Edmund; Faber, Sandra M.
1990-01-01
A method for recovering the three-dimensional potential, velocity, and density fields from large-scale redshift-distance samples is described. Galaxies are taken as tracers of the velocity field, not of the mass. The density field and the initial conditions are calculated using an iterative procedure that applies the no-vorticity assumption at an initial time and uses the Zel'dovich approximation to relate initial and final positions of particles on a grid. The method is tested using a cosmological N-body simulation 'observed' at the positions of real galaxies in a redshift-distance sample, taking into account their distance measurement errors. Malmquist bias and other systematic and statistical errors are extensively explored using both analytical techniques and Monte Carlo simulations.
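The Zel'dovich step that relates initial and final positions can be sketched in one dimension: each particle moves from its initial (Lagrangian) position q by a displacement field scaled with the linear growth factor D. A toy sketch of that single step, not the paper's full iterative procedure:

```python
import numpy as np

def zeldovich_1d(q, psi, D):
    """Zel'dovich approximation in 1D: final (Eulerian) position
    x = q + D * psi(q), with psi the displacement field evaluated at the
    initial (Lagrangian) positions q and D the linear growth factor."""
    return q + D * psi(q)

# toy example: sinusoidal displacement on a unit periodic box
q = np.linspace(0, 1, 64, endpoint=False)
psi = lambda q: 0.05 * np.sin(2 * np.pi * q)
x = zeldovich_1d(q, psi, D=1.0)
```

Particles converge where the displacement gradient is negative, the Zel'dovich picture of density growth; iterating between such final positions and the assumed irrotational initial field is the spirit of the reconstruction described above.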
Calcium phosphate-based coatings on titanium and its alloys.
Narayanan, R; Seshadri, S K; Kwon, T Y; Kim, K H
2008-04-01
Use of titanium as a biomaterial is possible because of its very favorable biocompatibility with living tissue. Titanium implants with calcium phosphate coatings on their surface show good fixation to the bone. This review briefly covers the requirements of typical biomaterials and focuses narrowly on the work on titanium. Calcium phosphate ceramics for use in implants are introduced, and various methods of producing calcium phosphate coatings on titanium substrates are elaborated. Advantages and disadvantages of each type of coating from the viewpoint of process simplicity, cost-effectiveness, stability of the coatings, coating integration with the bone, cell behavior, and so forth are highlighted. Taking all these factors into account, the most efficient method(s) of producing these coatings are finally indicated.
Redding, David W; Lucas, Tim C D; Blackburn, Tim M; Jones, Kate E
2017-01-01
Statistical approaches for inferring the spatial distribution of taxa (species distribution models, SDMs) commonly rely on available occurrence data, which are often clumped and geographically restricted. Although available SDM methods address some of these factors, they could be modelled more directly and accurately using a spatially explicit approach. Software to fit SDMs with spatial autocorrelation parameters is now widely available, but whether such approaches improve predictions compared with other methodologies is unknown. Here, within a simulated environment using 1000 generated species' ranges, we compared the performance of two commonly used non-spatial SDM methods (maximum entropy modelling, MAXENT, and boosted regression trees, BRT) with a spatial Bayesian SDM method (fitted using R-INLA) when the underlying data exhibit varying combinations of clumping and geographic restriction. Finally, we tested how the recommended methodological settings designed to account for spatially non-random patterns in the data impact inference. The spatial Bayesian SDM method was the most consistently accurate, ranking among the top two methods in 7 out of 8 data-sampling scenarios. Within high-coverage sample datasets, all methods performed fairly similarly. When sampling points were randomly spread, BRT had a 1-3% greater accuracy than the other methods, and when samples were clumped, the spatial Bayesian SDM method had a 4-8% better AUC score. When sampling points were restricted to a small section of the true range, all methods were on average 10-12% less accurate, with greater variation among the methods. Model inference under the recommended settings to account for autocorrelation was not impacted by clumping or restriction of data, except for the complexity of the spatial regression term in the spatial Bayesian model.
Methods, such as those made available by R-INLA, can be successfully used to account for spatial autocorrelation in an SDM context and, by taking account of random effects, produce outputs that can better elucidate the role of covariates in predicting species occurrence. Given that it is often unclear what the drivers are behind data clumping in an empirical occurrence dataset, or indeed how geographically restricted these data are, spatially-explicit Bayesian SDMs may be the better choice when modelling the spatial distribution of target species.
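The AUC score used above to compare methods has a simple rank-based reading: the probability that a randomly chosen presence site receives a higher predicted score than a randomly chosen absence site. A direct (O(n·m)) sketch of that computation:

```python
import numpy as np

def auc(scores_presence, scores_absence):
    """Mann-Whitney form of AUC: P(presence score > absence score),
    counting ties as one half.  Compares every presence/absence pair."""
    pos = np.asarray(scores_presence, float)[:, None]
    neg = np.asarray(scores_absence, float)[None, :]
    return float(np.mean(pos > neg) + 0.5 * np.mean(pos == neg))
```

A perfectly separating model scores 1.0, a random one about 0.5, so the 4-8% AUC differences reported above are differences on this 0.5-1.0 scale.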
76 FR 4348 - Sunshine Act Notices; Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-25
... of Minutes for December 16, 2010 Proposed Final Audit Report on the Tennessee Democratic Party Proposed Final Audit Report on the Tennessee Republican Party Federal Election Account Proposed Final Audit... Independent Expenditures and Electioneering Communications by Corporations and Labor Organizations Management...
Accounting for partiality in serial crystallography using ray-tracing principles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kroon-Batenburg, Loes M. J., E-mail: l.m.j.kroon-batenburg@uu.nl; Schreurs, Antoine M. M.; Ravelli, Raimond B. G.
Serial crystallography generates partial reflections from still diffraction images. Partialities are estimated with EVAL ray-tracing simulations, thereby improving merged reflection data to a quality similar to that of conventional rotation data. Serial crystallography generates 'still' diffraction data sets that are composed of single diffraction images obtained from a large number of crystals arbitrarily oriented in the X-ray beam. Estimation of the reflection partialities, which account for the expected observed fractions of the diffraction intensities, has so far been problematic. In this paper, a method is derived for modelling the partialities by making use of the ray-tracing diffraction-integration method EVAL. The method estimates partialities based on crystal mosaicity, beam divergence, wavelength dispersion, crystal size and the interference function, accounting for crystallite size. It is shown that modelling of each reflection by a distribution of interference-function-weighted rays yields a 'still' Lorentz factor. Still data are compared with a conventional rotation data set collected from a single lysozyme crystal. Overall, the presented still integration method improves the data quality markedly. The R factor of the still data compared with the rotation data decreases from 26% using a Monte Carlo approach to 12% after applying the Lorentz correction, to 5.3% when estimating partialities by EVAL and finally to 4.7% after post-refinement. The merging R_int factor of the still data improves from 105 to 56% but remains high. This suggests that the accuracy of the model parameters could be further improved. However, with a multiplicity of around 40 and an R_int of ∼50% the merged still data approximate the quality of the rotation data. The presented integration method suitably accounts for the partiality of the observed intensities in still diffraction data, which is a critical step towards improving data quality in serial crystallography.
MCDF calculations of Auger cascade processes
NASA Astrophysics Data System (ADS)
Beerwerth, Randolf; Fritzsche, Stephan
2017-10-01
We model the multiple ionization of near-neutral core-excited atoms, where a cascade of Auger processes leads to the emission of several electrons. We utilize the multiconfiguration Dirac-Fock (MCDF) method to generate approximate wave functions for all fine-structure levels and to account for all decays between them. This approach allows us to compute electron spectra, the populations of final states, and ion yields that are accessible in many experiments. Furthermore, our approach is based on the configuration interaction method. A careful treatment of correlation between electronic configurations enables one to model three-electron processes, such as an Auger decay that is accompanied by an additional shake-up transition. Here, this model is applied to the triple ionization of atomic cadmium, where we show that the decay of inner-shell 4p holes to triply-charged final states is purely due to the shake-up transition of valence 5s electrons. Contribution to the Topical Issue "Atomic and Molecular Data and their Applications", edited by Gordon W.F. Drake, Jung-Sik Yoon, Daiji Kato, Grzegorz Karwasz.
76 FR 67801 - Medicare Program; Medicare Shared Savings Program: Accountable Care Organizations
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-02
... Community Care Network NP Nurse Practitioner NPI National Provider Identifier NQF National Quality Forum OIG...: Accountable Care Organizations; Final Rule #0;#0;Federal Register / Vol. 76 , No. 212 / Wednesday, November 2... Savings Program: Accountable Care Organizations AGENCY: Centers for Medicare & Medicaid Services (CMS...
46 CFR 280.6 - Calendar year accounting.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 8 2010-10-01 2010-10-01 false Calendar year accounting. 280.6 Section 280.6 Shipping MARITIME ADMINISTRATION, DEPARTMENT OF TRANSPORTATION REGULATIONS AFFECTING SUBSIDIZED VESSELS AND... Calendar year accounting. Except as provided in § 280.9 (relating to the final year of an ODS agreement...
Defining Responsibility in Maintaining Financial Accounting Systems.
1995-01-01
Report fragment (OCR): "...acquisition and issuance of materials, original cost, location, etc., KAR 2 (Property and Inventory Accounting)." (3:10) Professional Military Comptroller School idea paper: "Defining Responsibility in Maintaining Financial Accounting Systems."
ERIC Educational Resources Information Center
Mbawuni, Joseph; Nimako, Simon Gyasi
2015-01-01
This study principally investigates job-related and personality factors that determine Ghanaian accounting students' intentions to pursue careers in accounting. It draws on a rich body of existing literature to develop a research model. Primary data were collected from a cross-sectional survey of 516 final year accounting students in a Ghanaian…
[Ecosystem services valuation of Qinghai Lake].
Jiang, Bo; Zhang, Lu; Ouyang, Zhi-yun
2015-10-01
Qinghai Lake is the largest inland salt-water lake in China and provides important ecosystem services to beneficiaries. Economic valuation of wetland ecosystem services from Qinghai Lake can reveal the direct contribution of lake ecosystems to beneficiaries using economic data, which can advance the incorporation of wetland protection of Qinghai Lake into economic tradeoffs and decision analyses. In this paper, we established a final ecosystem services valuation system based on the underlying ecological mechanisms and regional socio-economic conditions. We then evaluated the eco-economic value provided by the wetlands at Qinghai Lake to beneficiaries in 2012 using the market value method, replacement cost method, zonal travel cost method, and contingent valuation method. According to the valuation result, the total economic value of the final ecosystem services provided by the wetlands at Qinghai Lake was estimated to be 6749.08 × 10^8 yuan RMB in 2012, among which the values of the water storage service and climate regulation service were 4797.57 × 10^8 and 1929.34 × 10^8 yuan RMB, accounting for 71.1% and 28.6% of the total value, respectively. The economic value of the 8 final ecosystem services was ranked from greatest to lowest as: water storage service > climate regulation service > recreation and tourism service > non-use value > oxygen release service > raw material production service > carbon sequestration service > food production service. The evaluation result of this paper reflects in monetary terms the substantial value that the wetlands of Qinghai Lake provide to beneficiaries, which has the potential to help increase wetland protection awareness among the public and decision-makers, and inform managers about ways to create ecological compensation incentives.
The final ecosystem service evaluation system presented in this paper will offer guidance on separating intermediate services and final services, and establishing monitoring programs for dynamic ecosystem services valuation with the aim of helping improve management outcomes.
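The reported shares can be checked directly from the abstract's own totals (all values in 10^8 yuan RMB):

```python
total = 6749.08                 # all 8 final services, 2012
values = {"water storage": 4797.57, "climate regulation": 1929.34}

# percentage share of each service in the total economic value
shares = {name: 100 * v / total for name, v in values.items()}
```

Rounded to one decimal place the shares come out to 71.1% and 28.6%, matching the figures quoted in the abstract.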
Eigenvalue sensitivity analysis of planar frames with variable joint and support locations
NASA Technical Reports Server (NTRS)
Chuang, Ching H.; Hou, Gene J. W.
1991-01-01
Two sensitivity equations are derived in this study based upon the continuum approach for eigenvalue sensitivity analysis of planar frame structures with variable joint and support locations. A variational form of an eigenvalue equation is first derived in which all of the quantities are expressed in the local coordinate system attached to each member. The material derivative of this variational equation is then taken to account for changes in each member's length and orientation resulting from the perturbation of joint and support locations. Finally, eigenvalue sensitivity equations are formulated in either domain quantities (by the domain method) or boundary quantities (by the boundary method). It is concluded that the sensitivity equation derived by the boundary method is more efficient in computation but less accurate than that of the domain method. Nevertheless, both of them are superior in computational efficiency to the conventional direct differentiation method and the finite difference method.
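The two baseline approaches mentioned at the end (direct differentiation and finite differences) can be illustrated on a toy system: for a symmetric generalized eigenproblem K x = λ M x, direct differentiation gives dλ/dp = xᵀ(∂K/∂p)x / (xᵀM x) when M does not depend on p. The two-spring matrices below are illustrative, not the frame model of the paper.

```python
import numpy as np
from scipy.linalg import eigh

def k_mat(p):
    """Stiffness of a two-spring chain; p is the first (design) spring."""
    k2 = 1.0
    return np.array([[p + k2, -k2], [-k2, k2]])

M = np.diag([1.0, 2.0])                           # mass matrix, p-independent

def min_eigpair(p):
    lam, X = eigh(k_mat(p), M)                    # generalized symmetric solve
    return lam[0], X[:, 0]

p = 3.0
lam, x = min_eigpair(p)
dK = np.array([[1.0, 0.0], [0.0, 0.0]])           # dK/dp
dlam_analytic = x @ dK @ x / (x @ M @ x)          # direct differentiation
h = 1e-6
dlam_fd = (min_eigpair(p + h)[0] - min_eigpair(p - h)[0]) / (2 * h)
```

The central-difference estimate agrees with the analytic derivative to many digits, but needs two extra eigen-solves per design variable, which is the cost the continuum sensitivity equations above are designed to avoid.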
The role of prominence in determining the scope of boundary-related lengthening in Greek.
Katsika, Argyro
2016-03-01
This study aims at examining and accounting for the scope of the temporal effect of phrase boundaries. Previous research has indicated that there is an interaction between boundary-related lengthening and prominence, such that the former extends towards the nearby prominent syllable. However, it is unclear whether this interaction is due to lexical stress and/or phrasal prominence (marked by pitch accent), and how far towards the prominent syllable the effect extends. Here, we use an electromagnetic articulography (EMA) study of Greek to examine the scope of boundary-related lengthening as a function of lexical stress and pitch accent separately. Boundaries are elicited by means of a variety of syntactic constructions. The results show an effect of lexical stress. Phrase-final lengthening affects the articulatory gestures of the phrase-final syllable that are immediately adjacent to the boundary in words with final stress, but is initiated earlier within phrase-final words with non-final stress. Similarly, the articulatory configurations during inter-phrasal pauses reach their point of achievement later in words with final stress than in words with non-final stress. These effects of stress hold regardless of whether the phrase-final word is accented or de-accented. Phrase-initial lengthening, on the other hand, is consistently detected on the phrase-initial constriction, independently of where the stress is within the preceding, phrase-final, word. These results indicate that the lexical aspect of prominence plays a role in determining the scope of boundary-related lengthening in Greek. Based on these results, a gestural account of prosodic boundaries in Greek is proposed in which lexical and phrasal prosody interact in a systematic and coordinated fashion. The cross-linguistic dimensions of this account and its implications for prosodic structure are discussed.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-18
... be affected by recent changes to generally accepted accounting principles. In effect, the Final Rule... complied with the preexisting requirements under generally accepted accounting principles in effect prior... accounting principles (``GAAP''). The rule was a clarification, rather than a limitation, of the repudiation...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-07
... accounting principles (``GAAP''). The rule was a clarification, rather than a limitation, of the repudiation... has created uncertainty for securitization participants. On June 12, 2009, the Financial Accounting Standards Board (``FASB'') finalized modifications to GAAP through Statement of Financial Accounting...
78 FR 43843 - Clarification of Appeal Procedures
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-22
... definition of an Order, the timing of appeals of orders to perform restructured accounting, and the finality... appeal an order to perform a restructured accounting involving only Federal oil and gas leases under the...). Generally, under the proposed rule, you would appeal an Order to Perform a Restructured Accounting to the...
77 FR 27545 - Federal Acquisition Regulation; Federal Acquisition Circular 2005-59; Introduction
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-10
... Revision of Cost 2012-003 Chambers. Accounting Standards Threshold. SUPPLEMENTARY INFORMATION: Summaries... substantial number of small entities. Item III--Revision of Cost Accounting Standards Threshold (FAR Case 2012-003) This final rule revises the cost accounting standards (CAS) threshold in order to implement in...
77 FR 45539 - Great Lakes Pilotage Rates-2013 Annual Review and Adjustment
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-01
... financial information, and we contract with independent accountants to assist in that review. We have now completed our review of the independent accountant's 2010 financial reports. The comments by the pilot associations on those reports and the independent accountant's final findings are discussed in our document...
12 CFR 308.604 - Notice of removal, suspension, or debarment.
Code of Federal Regulations, 2010 CFR
2010-01-01
... issuance of a final order for removal, suspension, or debarment of an independent public accountant or..., whichever date is earlier. The written notice must be filed by the independent public accountant or... PRACTICE RULES OF PRACTICE AND PROCEDURE Removal, Suspension, and Debarment of Accountants From Performing...
Course of Study for Secondary Level Bookkeeping/Accounting. Final Report.
ERIC Educational Resources Information Center
Brower, Edward B.
The present project was designed to continue the preparation of a course of study useful for developing secondary level bookkeeping/accounting instruction. The course of study is intended to (1) derive vocational instruction for students with varying career goals, (2) develop accounting-oriented career exploration units for Introduction to…
76 FR 14541 - Federal Acquisition Regulation; Federal Acquisition Circular 2005-50; Introduction
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-16
... measurement of pension cost,'' and 415 ``Accounting for the cost of deferred compensation.'' Formerly, the... Chambers. Consistency of Cost Accounting Practices for Contracts Awarded to Foreign Concerns. IX... of Cost Accounting Practices for Contracts Awarded to Foreign Concerns (FAR Case 2009-025) This final...
Accounting Cluster Demonstration Program at Aloha High School. Final Report.
ERIC Educational Resources Information Center
Beaverton School District 48, OR.
A model high school accounting cluster program was planned, developed, implemented, and evaluated in the Beaverton, Oregon, school district. The curriculum was developed with the help of representatives from the accounting occupations in the Portland metropolitan area. Through management interviews, identification of on-the job requirements, and…
12 CFR 308.604 - Notice of removal, suspension, or debarment.
Code of Federal Regulations, 2011 CFR
2011-01-01
... PRACTICE RULES OF PRACTICE AND PROCEDURE Removal, Suspension, and Debarment of Accountants From Performing... issuance of a final order for removal, suspension, or debarment of an independent public accountant or... notice of the order to the other Federal banking agencies. (b) Notice to the FDIC by accountants and...
12 CFR 308.604 - Notice of removal, suspension, or debarment.
Code of Federal Regulations, 2013 CFR
2013-01-01
... PRACTICE RULES OF PRACTICE AND PROCEDURE Removal, Suspension, and Debarment of Accountants From Performing... issuance of a final order for removal, suspension, or debarment of an independent public accountant or... notice of the order to the other Federal banking agencies. (b) Notice to the FDIC by accountants and...
12 CFR 308.604 - Notice of removal, suspension, or debarment.
Code of Federal Regulations, 2014 CFR
2014-01-01
... PRACTICE RULES OF PRACTICE AND PROCEDURE Removal, Suspension, and Debarment of Accountants From Performing... issuance of a final order for removal, suspension, or debarment of an independent public accountant or... notice of the order to the other Federal banking agencies. (b) Notice to the FDIC by accountants and...
12 CFR 308.604 - Notice of removal, suspension, or debarment.
Code of Federal Regulations, 2012 CFR
2012-01-01
... PRACTICE RULES OF PRACTICE AND PROCEDURE Removal, Suspension, and Debarment of Accountants From Performing... issuance of a final order for removal, suspension, or debarment of an independent public accountant or... notice of the order to the other Federal banking agencies. (b) Notice to the FDIC by accountants and...
75 FR 75676 - Sunshine Act Notices
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-06
...). Status: This meeting will be open to the public. Items To Be Discussed: Proposed Final Audit Report on Biden for President, Inc. Proposed Final Audit Report on the Washington State Democratic Central Committee. Proposed Final Audit Report on the Tennessee Republican Party Federal Election Account. Proposed...
Mantone, Joseph
2005-01-31
With the trial of Richard Scrushy finally under way in Alabama, it isn't just the former HealthSouth executive being scrutinized. It's the first courtroom test for the Sarbanes-Oxley Act of 2002, which holds CEOs accountable for false financial statements. Scrushy's defense attorneys have already begun laying the groundwork to blame underlings for the 2.64 billion dollar accounting fraud.
Graves, T.A.; Kendall, Katherine C.; Royle, J. Andrew; Stetz, J.B.; Macleod, A.C.
2011-01-01
Few studies link habitat to grizzly bear (Ursus arctos) abundance, and those that do have not accounted for variation in detection or spatial autocorrelation. We collected and genotyped bear hair in and around Glacier National Park in northwestern Montana during the summer of 2000. We developed a hierarchical Markov chain Monte Carlo model that extends the existing occupancy and count models by accounting for (1) spatially explicit variables that we hypothesized might influence abundance; (2) separate sub-models of detection probability for two distinct sampling methods (hair traps and rub trees) targeting different segments of the population; (3) covariates to explain variation in each sub-model of detection; (4) a conditional autoregressive term to account for spatial autocorrelation; and (5) weights to identify the most important variables. Road density and per cent mesic habitat best explained variation in female grizzly bear abundance; spatial autocorrelation was not supported. More female bears were predicted in places with lower road density and more mesic habitat. Detection rates of females increased with rub tree sampling effort. Road density best explained variation in male grizzly bear abundance, and spatial autocorrelation was supported. More male bears were predicted in areas of low road density. Detection rates of males increased with rub tree and hair trap sampling effort and decreased over the sampling period. We provide a new method to (1) incorporate multiple detection methods into hierarchical models of abundance and (2) determine whether spatial autocorrelation should be included in final models. Our results suggest that the influence of landscape variables is consistent between habitat selection and abundance in this system.
Boosting Higgs pair production in the [Formula: see text] final state with multivariate techniques.
Behr, J Katharina; Bortoletto, Daniela; Frost, James A; Hartland, Nathan P; Issever, Cigdem; Rojo, Juan
2016-01-01
The measurement of Higgs pair production will be a cornerstone of the LHC program in the coming years. Double Higgs production provides a crucial window upon the mechanism of electroweak symmetry breaking and has a unique sensitivity to the Higgs trilinear coupling. We study the feasibility of a measurement of Higgs pair production in the [Formula: see text] final state at the LHC. Our analysis is based on a combination of traditional cut-based methods with state-of-the-art multivariate techniques. We account for all relevant backgrounds, including the contributions from light and charm jet mis-identification, which are ultimately comparable in size to the irreducible 4 b QCD background. We demonstrate the robustness of our analysis strategy in a high pileup environment. For an integrated luminosity of [Formula: see text] ab[Formula: see text], a signal significance of [Formula: see text] is obtained, indicating that the [Formula: see text] final state alone could allow for the observation of double Higgs production at the High Luminosity LHC.
Differences between time domain and Fourier domain optical coherence tomography in imaging tissues.
Gao, W; Wu, X
2017-11-01
It has been amply demonstrated that both time domain and Fourier domain optical coherence tomography (OCT) can generate high-resolution, depth-resolved images of living tissues and cells. In this work, we compare the common points and differences between the two methods when the continuous and random properties of live tissue are taken into account. It is found that, when the relationships between the scattered light and tissue structures are taken into account, spectral interference measurement in Fourier domain OCT (FDOCT) is more advantageous than interference fringe envelope measurement in time domain OCT (TDOCT) in the cases where the continuous property of tissue is considered. It is also demonstrated that when the random property of tissue is taken into account, FDOCT measures the Fourier transform of the spatial correlation function of the refractive index, and speckle phenomena limit the effective imaging resolution in both TDOCT and FDOCT. Finally, the effective limiting resolution of both TDOCT and FDOCT is given, which can be used to estimate the effective limiting resolution in various practical applications. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
Evaluation of design ventilation requirements for enclosed parking facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ayari, A.; Krarti, M.
2000-07-01
This paper proposes a new design approach to determine the ventilation requirements for enclosed parking garages. The design approach accounts for various factors that affect the indoor air quality within a parking facility, including the average CO emission rate, the average travel time, the number of cars, and the acceptable CO level within the parking garage. This paper first describes the results of a parametric analysis based on the design method that was developed. Then the design method is presented to explain how the ventilation flow rate can be determined for any enclosed parking facility. Finally, some suggestions are proposed to save fan energy for ventilating parking garages using demand ventilation control strategies.
Model for determining vapor equilibrium rates in the hanging drop method for protein crystal growth
NASA Technical Reports Server (NTRS)
Baird, James K.; Frieden, Richard W.; Meehan, E. J., Jr.; Twigg, Pamela J.; Howard, Sandra B.; Fowlis, William A.
1987-01-01
An engineering analysis of the rate of evaporation of solvent in the hanging drop method of protein crystal growth is presented. Results are applied to 18 drop and well arrangements commonly encountered in the laboratory. The chemical nature of the salt, drop size and shape, drop concentration, well size, well concentration, and temperature are taken into account. The rate of evaporation increases with temperature, drop size, and the salt concentration difference between the drop and the well. The evaporation in this model possesses no unique half-life. Once the salt in the drop achieves 80 percent of its final concentration, further evaporation suffers from the law of diminishing returns.
NASA Astrophysics Data System (ADS)
Neumann, Martin; Dostál, Tomáš; Krása, Josef; Kavka, Petr; Davidová, Tereza; Brant, Václav; Kroulík, Milan; Mistr, Martin; Novotný, Ivan
2016-04-01
The presentation will introduce a methodology for determining the crop and cover management factor (C-factor) for the universal soil loss equation (USLE) using a field rainfall simulator. The aim of the project is to determine the C-factor value for the different phenophases of the main crops of the central European region, while also taking into account the different agrotechnical methods. By using the field rainfall simulator, it is possible to perform the measurements in specific phenophases, which is otherwise difficult to execute due to the variability and fortuity of natural rainfall. Due to the number of measurements needed, two identical simulators will be used, operated by two independent teams, with a coordinated methodology. The methodology will mainly specify the length of simulation, the rainfall intensity, and the sampling technique. The presentation includes a more detailed account of the methods selected. Due to the wide range of variable crops and soils, it is not possible to execute the measurements for all possible combinations. We therefore decided to perform the measurements for previously selected combinations of soils, crops and agrotechnologies that are the most common in the Czech Republic. During the experiments, the volume of the surface runoff and the amount of sediment will be measured in their temporal distribution, as well as several other important parameters. The key values of the 3D matrix of combinations of crop, agrotechnique and soil will be determined experimentally. The remaining values will be determined by interpolation or by model analogy. There are several methods used for C-factor calculation from measured experimental data. Some of these are not suitable for the type of data gathered. The presentation will discuss the benefits and drawbacks of these methods, as well as the final design of the method used.
The problems concerning the selection of a relevant measurement method as well as the final method of simulation and C-factor determination for the gathered data will be discussed in more detail. The presentation was supported by research projects QJ1530181 and SGS14/180/OHK1/3T/11.
ERIC Educational Resources Information Center
Lane Community Coll., Eugene, OR.
A final report and final evaluation report of Phase III are provided for a project to establish a national clearinghouse for apprenticeship-related instructional materials. The final report provides a summary and a narrative account of these project activities: identification of materials; identification of apprenticeship curriculum needs;…
NASA Astrophysics Data System (ADS)
Qi, Youzheng; Huang, Ling; Wu, Xin; Zhu, Wanhua; Fang, Guangyou; Yu, Gang
2017-07-01
Quantitative modeling of the transient electromagnetic (TEM) response requires consideration of the full transmitter waveform, i.e., not only the specific current waveform in a half cycle but also the bipolar repetition. In this paper, we present a novel temporal interpolation and convolution (TIC) method to facilitate accurate TEM modeling. We first calculate the temporal basis response on a logarithmic scale using fast digital-filter-based methods. Then, we introduce a function named hamlogsinc, in the framework of discrete signal processing theory, to reconstruct the basis function and convolve it with the positive half of the waveform. Finally, a superposition procedure is used to take account of the effect of previous bipolar waveforms. Comparisons with the established fast Fourier transform method demonstrate that our TIC method achieves the same accuracy in a shorter computing time.
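As a rough illustration of the convolve-then-superpose idea (not the paper's logarithmic-grid digital-filter implementation), the sketch below uses a purely hypothetical exponential basis response and trapezoidal current waveform; all numbers are illustrative:

```python
import numpy as np

# Hypothetical basis (impulse) response of the ground, sampled on a
# uniform grid for simplicity (the paper works on a logarithmic scale).
dt = 1e-5                          # time step [s]
t = np.arange(500) * dt            # 5 ms time axis
basis = np.exp(-t / 1e-3)          # assumed exponential decay

# Assumed trapezoidal current waveform for the positive half cycle.
ramp, on = 20, 100                 # samples for the ramps and on-time
waveform = np.concatenate([
    np.linspace(0.0, 1.0, ramp),   # turn-on ramp
    np.ones(on),                   # constant on-time
    np.linspace(1.0, 0.0, ramp),   # turn-off ramp
])

# Convolve the basis response with the waveform derivative and truncate.
dwave = np.diff(waveform, prepend=0.0) / dt
response = np.convolve(basis, dwave)[:t.size] * dt

# Superpose the sign-flipped contribution of the previous bipolar half
# cycle, delayed by half the repetition period.
half_period = 250                  # samples (2.5 ms, hypothetical)
total = response.copy()
total[half_period:] -= response[:t.size - half_period]
```

In practice the superposition step would be repeated over several earlier half cycles until their contribution becomes negligible.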
Solving Graph Laplacian Systems Through Recursive Bisections and Two-Grid Preconditioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ponce, Colin; Vassilevski, Panayot S.
2016-02-18
We present a parallelizable direct method for computing the solution to graph Laplacian-based linear systems derived from graphs that can be hierarchically bipartitioned with small edge cuts. For a graph of size n with constant-size edge cuts, our method decomposes a graph Laplacian in time O(n log n), and then uses that decomposition to perform a linear solve in time O(n log n). We then use the developed technique to design a preconditioner for graph Laplacians that do not have this property. Finally, we augment this preconditioner with a two-grid method that accounts for much of the preconditioner's weaknesses. We present an analysis of this method, as well as a general theorem for the condition number of a general class of two-grid support graph-based preconditioners. Numerical experiments illustrate the performance of the studied methods.
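For orientation, a graph Laplacian is singular (its nullspace is the constant vector), so a solve requires grounding one vertex. The toy sketch below builds the Laplacian of a path graph (trivially bipartitioned by single-edge cuts) and solves it directly with SciPy; it is not the paper's O(n log n) recursive-bisection factorization:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 8
ones = np.ones(n - 1)
A = sp.diags([ones, ones], [-1, 1], format="csr")   # path-graph adjacency
deg = np.asarray(A.sum(axis=1)).ravel()
L = sp.diags(deg) - A                               # Laplacian L = D - A

b = np.zeros(n)
b[0], b[-1] = 1.0, -1.0                             # net injection sums to zero

# Ground vertex 0 (fix x[0] = 0) to make the system nonsingular.
L_g = sp.csr_matrix(L)[1:, 1:]
x = np.zeros(n)
x[1:] = spsolve(L_g.tocsc(), b[1:])
```

Because b sums to zero and the Laplacian's columns sum to zero, the grounded solution satisfies the full system L x = b exactly.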
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-10
.... Colombia. III Revision of Cost 2012-003 Chambers. Accounting Standards Threshold. SUPPLEMENTARY INFORMATION... economic impact on a substantial number of small entities. Item III--Revision of Cost Accounting Standards Threshold (FAR Case 2012-003) This final rule revises the cost accounting standards (CAS) threshold in order...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-04
... 1690 Roth Feature to the Thrift Savings Plan and Miscellaneous Uniformed Services Account Amendments... Plan. This final rule also reorganizes regulatory provisions pertaining to uniformed services accounts. DATES: This rule is effective May 7, 2012. FOR FURTHER INFORMATION CONTACT: Laurissa Stokes at (202) 942...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-16
... Standards (CAS) Board standards 412 ``Cost Accounting Standard for composition and measurement of pension... Disclosure and 2009-025 Chambers. Consistency of Cost Accounting Practices for Contracts Awarded to Foreign... Accounting Practices for Contracts Awarded to Foreign Concerns (FAR Case 2009-025) This final rule adopts...
64 Years Later, Korean War Vet Finally Comes Home | DoDLive
ERIC Educational Resources Information Center
Kehoe, Margaret; Stoel-Gammon, Carol
1997-01-01
Examines different approaches to prosodic acquisition: Gerken's S(W) production template; Fikkert's and Archibald's theories of stress acquisition and Demuth and Fee's prosodic hierarchy account. Results reveal that current approaches cannot account for findings in the data such as the increased preservation of final over nonfinal unstressed…
Computer Aided Drug Design: Success and Limitations.
Baig, Mohammad Hassan; Ahmad, Khurshid; Roy, Sudeep; Ashraf, Jalaluddin Mohammad; Adil, Mohd; Siddiqui, Mohammad Haris; Khan, Saif; Kamal, Mohammad Amjad; Provazník, Ivo; Choi, Inho
2016-01-01
Over the last few decades, computer-aided drug design has emerged as a powerful technique playing a crucial role in the development of new drug molecules. Structure-based drug design and ligand-based drug design are the two methods commonly used in computer-aided drug design. In this article, we discuss the theory behind both methods, as well as their successful applications and limitations. To accomplish this, we reviewed structure-based and ligand-based virtual screening processes. Molecular dynamics simulation, which has become one of the most influential tools for predicting the conformation of small molecules and changes in their conformation within the biological target, has also been taken into account. Finally, we discuss the principles and concepts of molecular docking, pharmacophores and other methods used in computer-aided drug design.
Bonfiglio, Paolo; Pompoli, Francesco; Lionti, Riccardo
2016-04-01
The transfer matrix method is a well-established prediction tool for the simulation of sound transmission loss and the sound absorption coefficient of flat multilayer systems. Much research has been dedicated to enhancing the accuracy of the method by introducing a finite size effect of the structure to be simulated. The aim of this paper is to present a reduced-order integral formulation to predict radiation efficiency and radiation impedance for a panel with equal lateral dimensions. The results are presented and discussed for different materials in terms of radiation efficiency, sound transmission loss, and the sound absorption coefficient. Finally, the application of the proposed methodology for rectangular multilayer systems is also investigated and validated against experimental data.
The Trojan Horse method for nuclear astrophysics: Recent results for direct reactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumino, A.; Gulino, M.; Spitaleri, C.
2014-05-09
The Trojan Horse method is a powerful indirect technique to determine the astrophysical factor for binary rearrangement processes A+x→b+B at astrophysical energies by measuring the cross section for the Trojan Horse (TH) reaction A+a→B+b+s in quasi-free kinematics. The Trojan Horse Method has been successfully applied to many reactions of astrophysical interest, both direct and resonant. In this paper, we will focus on direct sub-processes. The theory of the THM for direct binary reactions will be shortly presented based on a few-body approach that takes into account the off-energy-shell effects and initial and final state interactions. Examples of recent results will be presented to demonstrate how THM works experimentally.
Commercial Applications of Metal Foams: Their Properties and Production
García-Moreno, Francisco
2016-01-01
This work gives an overview of the production, properties and industrial applications of metal foams. First, it classifies the most relevant manufacturing routes and methods. Then, it reviews the most important properties, with special interest in the mechanical and functional aspects, but also taking into account cost and feasibility considerations. These properties are the motivation and basis of the related applications. Finally, a summary of the most relevant applications, showing a large number of actual examples, is presented. In conclusion, we forecast a slow but continuous growth of this industrial sector. PMID:28787887
Survey of simulation methods for modeling pulsed sieve-plate extraction columns
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burkhart, L.
1979-03-01
The report first briefly considers the use of liquid-liquid extraction in nuclear fuel reprocessing and then describes the operation of the pulse column. Currently available simulation models of the column are reviewed, followed by an analysis of the information presently available from which the necessary parameters can be obtained for use in a model of the column. Finally, overall conclusions are given regarding the information needed to develop an accurate model of the column for materials accountability in fuel reprocessing plants. 156 references.
Numerically evaluating the bispectrum in curved field-space— with PyTransport 2.0
NASA Astrophysics Data System (ADS)
Ronayne, John W.; Mulryne, David J.
2018-01-01
We extend the transport framework for numerically evaluating the power spectrum and bispectrum in multi-field inflation to the case of a curved field-space metric. This method naturally accounts for all sub- and super-horizon tree level effects, including those induced by the curvature of the field-space. We present an open source implementation of our equations in an extension of the publicly available PyTransport code. Finally we illustrate how our technique is applied to examples of inflationary models with a non-trivial field-space metric.
Searching for and characterising extrasolar Earth-like planets and moons
NASA Astrophysics Data System (ADS)
Schneider, Jean
2002-10-01
The physical bases of the detection and characterisation of extrasolar Earth-like planets and moons in the reflected light and thermal emission regimes are reviewed. Both regimes have their advantages and disadvantages, including artefacts, in the determination of planet physical parameters (mass, size, albedo, surface and atmospheric conditions, etc.). After a short panorama of detection methods and the first findings, new perspectives for these different aspects are also presented. Finally, a brief account of the ground-based programmes and space-based projects, and their potential for Earth-like planets, is given and discussed.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Accounting System (49 CFR part 1201). (c) Final payment of financial assistance. (1) When a financial... a system to collect at branch level the data necessary to compute the base year data and the final...
A New Equivalence Theory Method for Treating Doubly Heterogeneous Fuel - I. Theory
Williams, Mark L.; Lee, Deokjung; Choi, Sooyoung
2015-03-04
A new methodology has been developed to treat resonance self-shielding in doubly heterogeneous very high temperature gas-cooled reactor systems in which the fuel compact region of a reactor lattice consists of small fuel grains dispersed in a graphite matrix. This new method first homogenizes the fuel grain and matrix materials using an analytically derived disadvantage factor from a two-region problem with equivalence theory and intermediate resonance method. This disadvantage factor accounts for spatial self-shielding effects inside each grain within the framework of an infinite array of grains. Then the homogenized fuel compact is self-shielded using a Bondarenko method to account for interactions between the fuel compact regions in the fuel lattice. In the final form of the equations for actual implementations, the double-heterogeneity effects are accounted for by simply using a modified definition of a background cross section, which includes geometry parameters and cross sections for both the grain and fuel compact regions. With the new method, the doubly heterogeneous resonance self-shielding effect can be treated easily even with legacy codes programmed only for a singly heterogeneous system by simple modifications in the background cross section for resonance integral interpolations. This paper presents a detailed derivation of the new method and a sensitivity study of double-heterogeneity parameters introduced during the derivation. The implementation of the method and verification results for various test cases are presented in the companion paper.
26 CFR 1.446-2 - Method of accounting for interest.
Code of Federal Regulations, 2010 CFR
2010-04-01
... account by a taxpayer under the taxpayer's regular method of accounting (e.g., an accrual method or the... 26 Internal Revenue 6 2010-04-01 2010-04-01 false Method of accounting for interest. 1.446-2... TAX (CONTINUED) INCOME TAXES Methods of Accounting § 1.446-2 Method of accounting for interest. (a...
Fu, Liya; Wang, You-Gan
2011-02-15
Environmental data usually include measurements, such as water quality data, which fall below detection limits, because of limitations of the instruments or of certain analytical methods used. The fact that some responses are not detected needs to be properly taken into account in statistical analysis of such data. However, it is well known that it is challenging to analyze a data set with detection limits, and we often have to rely on traditional parametric methods or simple imputation methods. Distributional assumptions can lead to biased inference, and justification of distributions is often not possible when the data are correlated and there is a large proportion of data below detection limits. The extent of bias is usually unknown. To draw valid conclusions and hence provide useful advice for environmental management authorities, it is essential to develop and apply an appropriate statistical methodology. This paper proposes rank-based procedures for analyzing non-normally distributed data collected at different sites over a period of time in the presence of multiple detection limits. To take account of temporal correlations within each site, we propose an optimal linear combination of estimating functions and apply the induced smoothing method to reduce the computational burden. Finally, we apply the proposed method to water quality data collected in the Susquehanna River Basin in the United States of America, which clearly demonstrates the advantages of the rank regression models.
NASA Astrophysics Data System (ADS)
Zakaria, Zarabizan; Ismail, Syuhaida; Yusof, Aminah Md
2013-04-01
Federal road maintenance needs a systematic and effective mechanism to ensure that the roads are in good condition and provide comfort to the road user. In implementing effective maintenance, budget is the main factor limiting this endeavour. Thus, the Public Works Department (PWD) Malaysia uses the Highway Development and Management (HDM-4) System to help the management of PWD Malaysia determine the location and length of the roads to be repaired according to priority based on its analysis. For that purpose, PWD Malaysia has applied a Pavement Management System (PMS) which utilizes HDM-4 as the analysis engine to conduct technical and economic analyses in generating annual work programs for pavement maintenance. As a result, a lot of feedback and comments have been received from the Supervisory and Roads Maintenance Unit (UPPJ) Zonal on the accuracy of the system output and problems that arise in the closing of final accounts. Therefore, the objective of this paper is to evaluate the current system's accuracy in generating the annual work program for periodic pavement maintenance, to identify factors contributing to the system's inaccuracy in selecting the location and length of roads that require treatment, and to propose improvement measures for the system's accuracy. The factors affecting the closing of final accounts caused by results received from the pavement management system are also defined. The scope of this paper is the existing HDM-4 System covering four states, specifically Perlis, Selangor, Kelantan and Johor, which is analysed via the work program output data for the purpose of evaluating the system's accuracy. The methods used in this paper include case study, interview, discussion and analysis of the HDM-4 System output data. This paper has identified un-updated work histories and analyses that do not use current data as factors contributing to the system's inaccuracy.
From the results of this paper, it is found that the accuracy of the HDM-4 system used by PWD Malaysia averages only 65 per cent, short of the 80 per cent level set by PWD Malaysia. Hence, this paper reveals the causes of these occurrences in the pavement management system in construction projects in Malaysia and investigates the consequences of the late payments and final account problems confronted by contractors in Malaysia, eventually proposing strategic actions that could be taken by contractors in securing their payments.
Newton, Joanna; Taylor, Rachel M; Crighton, Liz
2017-10-01
To investigate the current practice and experience of sign-off mentors in one NHS trust. In the UK, sign-off mentors support nursing students in their last clinical placement and are accountable for the final assessment of fitness to practise as a registered nurse. Mixed-methods study. The focus was on two key Nursing and Midwifery Council standards: the requirement for students to work at least 40% of their time on clinical placement with a sign-off mentor/mentor; and the requirement that the sign-off mentor have one hour per week of protected time to meet the final placement student. Data were collected through two audits of clinical and university documents and an experience survey administered to all sign-off mentors in one trust. The audits showed that only 22/42 (52%) of students were supervised by their sign-off mentor/mentor at least 40% of the time, whilst 10/42 (24%) students never worked a shift with their sign-off mentor. Only one student met their sign-off mentor every week. Complete data were available for 31/64 (47%) sign-off mentors, of whom 21/30 (70%) rarely or never had reduced clinical commitment to mentor final placement students. Furthermore, 19/28 (68%) met their student after their shift had ended, with 24/30 (80%) reporting not getting any protected time. Sign-off mentors have inadequate time and resources to undertake their role, yet are accountable for confirming that the student has the required knowledge and skills to practise safely. The current model needs urgent review to improve mentoring standards. Understanding how the role of the sign-off mentor is working in practice is critical to ensuring that the Nursing and Midwifery Council standards are met, ensuring students are well supported and appropriately assessed in practice, and that mentoring is given the high profile it deserves to guarantee high-quality care and protect the public. © 2016 John Wiley & Sons Ltd.
Dipnall, Joanna F.
2016-01-01
Background Atheoretical large-scale data mining techniques using machine learning algorithms have promise in the analysis of large epidemiological datasets. This study illustrates the use of a hybrid methodology for variable selection that took account of missing data and complex survey design to identify key biomarkers associated with depression from a large epidemiological study. Methods The study used a three-step methodology amalgamating multiple imputation, a machine learning boosted regression algorithm and logistic regression to identify key biomarkers associated with depression in the National Health and Nutrition Examination Survey (2009–2010). Depression was measured using the Patient Health Questionnaire-9 and 67 biomarkers were analysed. Covariates in this study included gender, age, race, smoking, food security, Poverty Income Ratio, Body Mass Index, physical activity, alcohol use, medical conditions and medications. The final imputed weighted multiple logistic regression model included possible confounders and moderators. Results After the creation of 20 imputation data sets from multiple chained regression sequences, machine learning boosted regression initially identified 21 biomarkers associated with depression. Using traditional logistic regression methods, including controlling for possible confounders and moderators, a final set of three biomarkers was selected. The final three biomarkers from the novel hybrid variable selection methodology were red cell distribution width (OR 1.15; 95% CI 1.01, 1.30), serum glucose (OR 1.01; 95% CI 1.00, 1.01) and total bilirubin (OR 0.12; 95% CI 0.05, 0.28). Significant interactions were found between total bilirubin and the Mexican American/Hispanic group (p = 0.016), and current smokers (p<0.001).
Conclusion The systematic use of a hybrid methodology for variable selection, fusing data mining techniques using a machine learning algorithm with traditional statistical modelling, accounted for missing data and complex survey sampling methodology and was demonstrated to be a useful tool for detecting three biomarkers associated with depression for future hypothesis generation: red cell distribution width, serum glucose and total bilirubin. PMID:26848571
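A minimal sketch of the screen-then-model idea on synthetic data, omitting the multiple imputation and survey weighting the study performed; the scikit-learn estimators, dataset sizes, and the top-21 cutoff are used here purely for illustration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the survey data (67 candidate biomarkers,
# binary depression outcome); real use would start from imputed data.
X, y = make_classification(n_samples=500, n_features=67, n_informative=5,
                           random_state=0)

# Step 1: boosted-regression screening to rank candidate biomarkers
# by importance and keep a short list (21, as in the study).
booster = GradientBoostingClassifier(random_state=0).fit(X, y)
ranked = np.argsort(booster.feature_importances_)[::-1]
screened = ranked[:21]

# Step 2: traditional logistic regression on the screened subset;
# confounders and moderators would enter as additional columns here.
model = LogisticRegression(max_iter=1000).fit(X[:, screened], y)
odds_ratios = np.exp(model.coef_.ravel())   # per-biomarker odds ratios
```

The appeal of the hybrid design is that the flexible learner handles variable screening while the final inference (odds ratios, confidence intervals) stays in a familiar regression framework.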
49 CFR 92.9 - Exceptions to notice, hearing, written response, and final decision.
Code of Federal Regulations, 2010 CFR
2010-10-01
... collection by notifying his or her accounting or finance officer; or (2) Due to a normal ministerial... accounting or finance officer. (c) Limitation on exceptions. The exceptions described in paragraph (a) of...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogle, Kiona; Pendall, Elise
Isotopic methods offer great potential for partitioning trace gas fluxes such as soil respiration into their different source contributions. Traditional partitioning methods face challenges due to variability introduced by different measurement methods, fractionation effects, and end-member uncertainty. To address these challenges, we describe in this paper a hierarchical Bayesian (HB) approach for isotopic partitioning of soil respiration that directly accommodates such variability. We apply our HB method to data from an experiment conducted in a shortgrass steppe ecosystem, where decomposition was previously shown to be stimulated by elevated CO2. Our approach simultaneously fits Keeling plot (KP) models to observations of soil or soil-respired δ13C and [CO2] obtained via chambers and gas wells, corrects the KP intercepts for apparent fractionation (Δ) due to isotope-specific diffusion rates and/or method artifacts, estimates method- and treatment-specific values for Δ, propagates end-member uncertainty, and calculates proportional contributions from two distinct respiration sources ("old" and "new" carbon). The chamber KP intercepts were estimated with greater confidence than the well intercepts and, compared to the theoretical value of 4.4‰, our results suggest that Δ varies between 2 and 5.2‰ depending on method (chambers versus wells) and CO2 treatment. Because elevated CO2 plots were fumigated with 13C-depleted CO2, the source contributions were tightly constrained, and new C accounted for 64% (range = 55-73%) of soil respiration. The contributions were less constrained for the ambient CO2 treatments, but new C accounted for significantly less (47%, range = 15-82%) of soil respiration. Finally, our new HB partitioning approach contrasts with our original analysis (higher contribution of old C under elevated CO2) because it uses additional data sources, accounts for end-member bias, and estimates apparent fractionation effects.
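The Keeling-plot intercept and two-source mixing calculation at the core of such partitioning can be sketched as follows, using synthetic data and illustrative end-member signatures; it omits the fractionation correction and uncertainty propagation that the HB approach adds:

```python
import numpy as np

# Synthetic chamber data: observed delta13C varies linearly with
# 1/[CO2] (the Keeling relationship); source signature is -20 permil.
rng = np.random.default_rng(0)
delta_source = -20.0                          # assumed source delta13C [permil]
co2 = np.linspace(400.0, 1200.0, 30)          # [CO2] in ppm
delta = delta_source + 4000.0 / co2 + rng.normal(0.0, 0.05, co2.size)

# Keeling plot: regress delta on 1/[CO2]; the intercept estimates
# the delta13C of the respiration source.
slope, intercept = np.polyfit(1.0 / co2, delta, 1)

# Two-source mixing: fraction of "new" carbon given assumed
# end-member signatures (values here are illustrative only).
delta_new, delta_old = -35.0, -15.0
f_new = (intercept - delta_old) / (delta_new - delta_old)
```

The HB formulation wraps this same algebra in a probability model so that intercept uncertainty, apparent fractionation, and end-member uncertainty all propagate into the estimated source fractions.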
Code of Federal Regulations, 2012 CFR
2012-04-01
... utilizing T.D. 66-16 (see § 146.92(h)), and which takes into account any volumetric loss or gain. (d) Final... been separated into two or more final products. (k) Weighted average. “Weighted average” means the...
Code of Federal Regulations, 2011 CFR
2011-04-01
... utilizing T.D. 66-16 (see § 146.92(h)), and which takes into account any volumetric loss or gain. (d) Final... been separated into two or more final products. (k) Weighted average. “Weighted average” means the...
Code of Federal Regulations, 2010 CFR
2010-04-01
... utilizing T.D. 66-16 (see § 146.92(h)), and which takes into account any volumetric loss or gain. (d) Final... been separated into two or more final products. (k) Weighted average. “Weighted average” means the...
Code of Federal Regulations, 2014 CFR
2014-04-01
... utilizing T.D. 66-16 (see § 146.92(h)), and which takes into account any volumetric loss or gain. (d) Final... been separated into two or more final products. (k) Weighted average. “Weighted average” means the...
Code of Federal Regulations, 2013 CFR
2013-04-01
... utilizing T.D. 66-16 (see § 146.92(h)), and which takes into account any volumetric loss or gain. (d) Final... been separated into two or more final products. (k) Weighted average. “Weighted average” means the...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-31
... rule amends our regulations regarding Performance Accountability for title V of the Older Americans Act... on September 1, 2010. 75 FR 53786. Previously, an interim final rule (IFR) on performance measures... performance through regulation. OAA Sec. 513(b)(3). As established in the SCSEP Final Rule published September...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-05
... mortality cap allocation for 2013. Final Research Set-Aside (RSA) allocations for a given year are typically not available until final specifications, and the exclusion of the final RSA allocation results in... to account for allocated butterfish RSA. The proposed rule included the 13-percent reduction to the...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-05
... SOCIAL SECURITY ADMINISTRATION 20 CFR Part 416 [Docket No. SSA-2008-0050] RIN 0960-AE59... Payments for Certain Past- Due SSI Benefits AGENCY: Social Security Administration (SSA). ACTION: Final rules. SUMMARY: These final rules adopt, with some minor changes, the interim final rules with request...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-30
...The Federal Energy Regulatory Commission (Commission) is revising its regulations to foster competition and transparency in ancillary services markets. The Commission is revising certain aspects of its current market-based rate regulations, ancillary services requirements under the pro forma open-access transmission tariff (OATT), and accounting and reporting requirements. Specifically, the Commission is revising its regulations to reflect reforms to its Avista policy governing the sale of ancillary services at market-based rates to public utility transmission providers. The Commission is also requiring each public utility transmission provider to add to its OATT Schedule 3 a statement that it will take into account the speed and accuracy of regulation resources in its determination of reserve requirements for Regulation and Frequency Response service, including as it reviews whether a self-supplying customer has made ``alternative comparable arrangements'' as required by the Schedule. The final rule also requires each public utility transmission provider to post certain Area Control Error data as described in the final rule. Finally, the Commission is revising the accounting and reporting requirements under its Uniform System of Accounts for public utilities and licensees and its forms, statements, and reports, contained in FERC Form No. 1, Annual Report of Major Electric Utilities, Licensees and Others, FERC Form No. 1-F, Annual Report for Nonmajor Public Utilities and Licensees, and FERC Form No. 3-Q, Quarterly Financial Report of Electric Utilities, Licensees, and Natural Gas Companies, to better account for and report transactions associated with the use of energy storage devices in public utility operations.
NASA Astrophysics Data System (ADS)
Gilles, Antonin; Gioia, Patrick; Cozot, Rémi; Morin, Luce
2015-09-01
The hybrid point-source/wave-field method is a newly proposed approach for Computer-Generated Hologram (CGH) calculation, based on the slicing of the scene into several depth layers parallel to the hologram plane. The complex wave scattered by each depth layer is then computed using either a wave-field or a point-source approach according to a threshold criterion on the number of points within the layer. Finally, the complex waves scattered by all the depth layers are summed up in order to obtain the final CGH. Although outperforming both point-source and wave-field methods without producing any visible artifact, this approach has not yet been used for animated holograms, and the possible exploitation of temporal redundancies has not been studied. In this paper, we propose a fast computation of video holograms by taking into account those redundancies. Our algorithm consists of three steps. First, intensity and depth data of the current 3D video frame are extracted and compared with those of the previous frame in order to remove temporally redundant data. Then the CGH pattern for this compressed frame is generated using the hybrid point-source/wave-field approach. The resulting CGH pattern is finally transmitted to the video output and stored in the previous frame buffer. Experimental results reveal that our proposed method is able to produce video holograms at interactive rates without producing any visible artifact.
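The first step of the algorithm above, removing temporally redundant data, can be illustrated with a minimal sketch: consecutive frames are compared point by point and only changed points are passed on for CGH re-computation. The data layout, function name, and tolerance below are illustrative assumptions, not taken from the paper.

```python
def remove_redundant_points(curr, prev, tol=1e-6):
    """Return only the scene points whose intensity or depth changed
    since the previous frame; unchanged points need no CGH update.

    curr, prev: lists of (intensity, depth) tuples in the same order.
    """
    changed = []
    for i, (c, p) in enumerate(zip(curr, prev)):
        if abs(c[0] - p[0]) > tol or abs(c[1] - p[1]) > tol:
            changed.append((i, c))
    return changed

prev_frame = [(0.5, 1.0), (0.2, 2.0), (0.9, 1.5)]
curr_frame = [(0.5, 1.0), (0.3, 2.0), (0.9, 1.5)]
updates = remove_redundant_points(curr_frame, prev_frame)
# only the second point changed, so only it is re-computed
```

The smaller the list of changed points, the less work the hybrid point-source/wave-field stage has to do for each video frame.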
Bringing English to Order: A Personal Account of the NCC English Evaluation Project.
ERIC Educational Resources Information Center
Clark, Urszula
1994-01-01
Provides a personal account of Great Britain's National Curriculum Committee's English Evaluation Project based at Warwick University. Describes the way the interim and final results of the committee were used by higher powers. (HB)
Calibration of decadal ensemble predictions
NASA Astrophysics Data System (ADS)
Pasternack, Alexander; Rust, Henning W.; Bhend, Jonas; Liniger, Mark; Grieger, Jens; Müller, Wolfgang; Ulbrich, Uwe
2017-04-01
Decadal climate predictions are of great socio-economic interest because they match the planning horizons of many political and economic decisions. Because of the uncertainties inherent in weather and climate forecasts (e.g., initial condition uncertainty), they are issued in a probabilistic way. One issue frequently observed for probabilistic forecasts is that they tend not to be reliable, i.e., the forecast probabilities are not consistent with the relative frequencies of the associated observed events. Such forecasts therefore need to be re-calibrated. While re-calibration methods for seasonal time scales are available and frequently applied, these methods still have to be adapted to decadal time scales and their characteristic problems, such as climate trend and lead-time dependent bias. We therefore propose a method to re-calibrate decadal ensemble predictions that takes these characteristics into account. Finally, the method is applied to, and validated against, decadal forecasts from the MiKlip system (Germany's initiative for decadal prediction).
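One basic ingredient of such a re-calibration, a lead-time dependent mean-bias correction estimated from hindcasts, can be sketched as follows. The per-lead-time mean-bias model and all names are illustrative assumptions, not the MiKlip re-calibration method itself.

```python
def lead_time_bias(forecasts, observations):
    """Estimate a mean bias per lead time from hindcast pairs.

    forecasts, observations: dict mapping lead time -> list of values.
    """
    return {
        lead: sum(f - o for f, o in zip(forecasts[lead], observations[lead]))
        / len(forecasts[lead])
        for lead in forecasts
    }

def recalibrate(forecast, lead, bias):
    """Remove the lead-time dependent bias from a new forecast."""
    return forecast - bias[lead]

# toy hindcasts: the bias grows with lead time
hindcasts = {1: [1.2, 1.4], 2: [2.6, 2.8]}
obs = {1: [1.0, 1.2], 2: [2.1, 2.3]}
bias = lead_time_bias(hindcasts, obs)
```

A full re-calibration would additionally adjust the ensemble spread and account for the climate trend; this sketch only shows why the correction must be indexed by lead time.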
Information Filtering via Heterogeneous Diffusion in Online Bipartite Networks
Zhang, Fu-Guo; Zeng, An
2015-01-01
The rapid expansion of the Internet brings us overwhelming amounts of online information, more than any individual can go through. Recommender systems were therefore created to help people dig through this abundance of information. In networks composed of users and objects, recommender algorithms based on diffusion have proven to be among the best performing methods. Previous works considered the diffusion process from user to object and from object to user to be equivalent. We show in this work that this is not the case, and we improve the quality of the recommendation by taking into account the asymmetrical nature of this process. We apply this idea to modify state-of-the-art recommendation methods. The simulation results show that the new methods outperform the existing ones in both recommendation accuracy and diversity. Finally, the modification is shown to improve recommendation in a realistic case. PMID:26125631
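The diffusion process the abstract refers to can be sketched with a minimal mass-diffusion (ProbS-style) recommender on a user-object bipartite network, with an exponent `theta` on the object-to-user step so the two directions can be made asymmetric in the spirit of the paper. The exponent parameterization and the toy data are illustrative assumptions.

```python
def diffuse(adj, target_user, theta=1.0):
    """adj[u] = set of objects collected by user u. Returns recommendation
    scores for objects the target user has not yet collected."""
    users = list(adj)
    obj_deg = {}
    for u in users:
        for o in adj[u]:
            obj_deg[o] = obj_deg.get(o, 0) + 1
    # object -> user step: each object collected by the target spreads one
    # unit of resource to its collectors, down-weighted by degree ** theta
    user_res = {}
    for o in adj[target_user]:
        for u in users:
            if o in adj[u]:
                user_res[u] = user_res.get(u, 0.0) + obj_deg[o] ** -theta
    # user -> object step: each user spreads its resource evenly
    scores = {}
    for u, r in user_res.items():
        share = r / len(adj[u])
        for o in adj[u]:
            if o not in adj[target_user]:
                scores[o] = scores.get(o, 0.0) + share
    return scores

adj = {"u1": {"a", "b"}, "u2": {"b", "c"}, "u3": {"c"}}
scores = diffuse(adj, "u1")  # object "c" is recommended via user u2
```

Setting `theta` away from 1 makes the object-to-user step weighted differently from the user-to-object step, which is the kind of asymmetry the paper exploits.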
Fault detection of Tennessee Eastman process based on topological features and SVM
NASA Astrophysics Data System (ADS)
Zhao, Huiyang; Hu, Yanzhu; Ai, Xinbo; Hu, Yu; Meng, Zhen
2018-03-01
Fault detection in industrial processes is a popular research topic. Although distributed control systems (DCS) have been introduced to monitor the state of industrial processes, they still cannot satisfy all the requirements for fault detection across industrial systems. In this paper, we propose a novel method for fault detection of industrial processes based on topological features and a support vector machine (SVM). The proposed method takes global information about the measured variables into account through a complex-network model and uses an SVM to predict whether the system has developed a fault. The method consists of four steps: network construction, network analysis, model training, and model testing. Finally, we apply the model to the Tennessee Eastman process (TEP). The results show that the method works well and can be a useful supplement for fault detection of industrial processes.
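The first two steps, network construction and network analysis, can be sketched as follows: build an edge between two measured variables when their correlation exceeds a threshold, then summarize the graph by simple topological features that a library SVM (e.g., scikit-learn's `SVC`) would consume as a feature vector. The threshold, feature choice, and toy data are illustrative assumptions, not the paper's configuration.

```python
def correlation(x, y):
    """Pearson correlation of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def network_features(variables, threshold=0.8):
    """Link two variables when |corr| > threshold; return topological
    features of the resulting graph."""
    names = list(variables)
    deg = {v: 0 for v in names}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if abs(correlation(variables[a], variables[b])) > threshold:
                deg[a] += 1
                deg[b] += 1
    return {"edges": sum(deg.values()) // 2, "max_degree": max(deg.values())}

vars_ = {
    "t1": [1.0, 2.0, 3.0, 4.0],
    "t2": [2.0, 4.1, 5.9, 8.0],   # strongly correlated with t1
    "t3": [5.0, 1.0, 4.0, 2.0],   # weakly related to the others
}
feats = network_features(vars_)  # one edge: t1-t2
```

A fault typically changes the correlation structure, so the feature vector shifts, which is what the trained SVM detects.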
A revised version of the transfer matrix method to analyze one-dimensional structures
NASA Technical Reports Server (NTRS)
Nitzsche, F.
1983-01-01
A new and general method to analyze both free and forced vibration characteristics of one-dimensional structures is discussed in this paper. This scheme links for the first time the classical transfer matrix method with the recently developed integrating matrix technique for integrating systems of differential equations. Two alternative approaches to the problem are presented. The first is based on a lumped-parameter model to account for the inertia properties of the structure. The second releases that constraint, allowing a more precise description of the physical system. The free vibration of a straight uniform beam under different support conditions is analyzed to test the accuracy of the two models. Finally, results for the free vibration of a 12th-order system representing a curved, rotating beam show that the present method extends conveniently to more complicated structural dynamics problems.
NASA Technical Reports Server (NTRS)
Bartlett, E. P.; Morse, H. L.; Tong, H.
1971-01-01
Procedures and methods for predicting aerothermodynamic heating to delta orbiter shuttle vehicles were reviewed. A number of approximate methods were found to be adequate for large scale parameter studies, but are considered inadequate for final design calculations. It is recommended that final design calculations be based on a computer code which accounts for nonequilibrium chemistry, streamline spreading, entropy swallowing, and turbulence. It is further recommended that this code be developed with the intent that it can be directly coupled with an exact inviscid flow field calculation when the latter becomes available. A nonsimilar, equilibrium chemistry computer code (BLIMP) was used to evaluate the effects of entropy swallowing, turbulence, and various three dimensional approximations. These solutions were compared with available wind tunnel data. It was found in this study that, for wind tunnel conditions, the effects of entropy swallowing and three-dimensionality are small for laminar boundary layers, but entropy swallowing causes a significant increase in turbulent heat transfer. However, it is noted that even small effects (say, 10-20%) may be important for the shuttle reusability concept.
Automated segmentation of three-dimensional MR brain images
NASA Astrophysics Data System (ADS)
Park, Jonggeun; Baek, Byungjun; Ahn, Choong-Il; Ku, Kyo Bum; Jeong, Dong Kyun; Lee, Chulhee
2006-03-01
Brain segmentation is a challenging problem due to the complexity of the brain. In this paper, we propose an automated segmentation method for 3D magnetic resonance (MR) brain images, which are represented as sequences of 2D brain images. The proposed method consists of three steps: pre-processing, removal of non-brain regions (e.g., the skull, meninges, and other organs), and spinal cord restoration. In pre-processing, we perform adaptive thresholding, which takes into account the variable intensities of MR brain images arising from different image acquisition conditions. In the segmentation step, we iteratively apply 2D morphological operations and masking to the sequences of 2D sagittal, coronal, and axial planes in order to remove non-brain tissues. Next, the final 3D brain region is obtained by applying an OR operation to the segmentation results of the three planes. Finally, we reconstruct the spinal cord, which was truncated during the previous steps. Experiments were performed with fifteen 8-bit gray-scale 3D MR brain image sets. The results show that the proposed algorithm is fast and provides robust and satisfactory results.
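The adaptive-thresholding idea from the pre-processing step can be sketched simply: derive the threshold from each slice's own intensity statistics rather than using a fixed value, so the binarization adapts to the acquisition conditions. The mean-plus-fraction rule and the toy slices below are illustrative assumptions, not the paper's exact formula.

```python
def adaptive_threshold(slice_pixels, k=0.5):
    """Binarize one 2D slice (list of rows of 8-bit intensities) at a
    threshold placed a fraction k of the way from the mean to the max."""
    flat = [p for row in slice_pixels for p in row]
    mean = sum(flat) / len(flat)
    thr = mean + k * (max(flat) - mean)
    return [[1 if p >= thr else 0 for p in row] for row in slice_pixels]

dark_slice = [[10, 12], [11, 90]]        # low-intensity acquisition
bright_slice = [[100, 120], [110, 240]]  # same anatomy, brighter scan
mask_dark = adaptive_threshold(dark_slice)
mask_bright = adaptive_threshold(bright_slice)
# both acquisitions yield the same mask despite very different intensities
```

Because the threshold tracks each slice's statistics, the same tissue is segmented consistently across scans acquired under different conditions.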
Berges, Jürgen; Reygers, Klaus; Tanji, Naoto; ...
2017-05-09
Recent classical-statistical numerical simulations have established the “bottom-up” thermalization scenario of Baier et al. [Phys. Lett. B 502, 51 (2001)] as the correct weak coupling effective theory for thermalization in ultrarelativistic heavy-ion collisions. In this paper, we perform a parametric study of photon production in the various stages of this bottom-up framework to ascertain the relative contribution of the off-equilibrium “glasma” relative to that of a thermalized quark-gluon plasma. Taking into account the constraints imposed by the measured charged hadron multiplicities at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC), we find that glasma contributions are important, especially for large values of the saturation scale at both energies. Finally, these nonequilibrium effects should therefore be taken into account in studies where weak coupling methods are employed to compute photon yields.
Medicare program; clarification of Medicare's accrual basis of accounting policy--HCFA. Final rule.
1995-06-27
This final rule revises the Medicare regulations to clarify the concept of "accrual basis of accounting" to indicate that expenses must be incurred by a provider of health care services before Medicare will pay its share of those expenses. This rule does not signify a change in policy but, rather, incorporates into the regulations Medicare's longstanding policy regarding the circumstances under which we recognize, for the purposes of program payment, a provider's claim for costs for which it has not actually expended funds during the current cost reporting period.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-06
..., with manganese accounting for not more than 3.0 percent of total materials by weight. The subject... number 6 contains magnesium and silicon as the major alloying elements, with magnesium accounting for at least 0.1 percent but not more than 2.0 percent of total materials by weight, and silicon accounting for...
26 CFR 1.6655-6 - Methods of accounting.
Code of Federal Regulations, 2010 CFR
2010-04-01
... of accounting method. Corporation ABC, a calendar year taxpayer, uses an accrual method of accounting... 26 Internal Revenue 13 2010-04-01 2010-04-01 false Methods of accounting. 1.6655-6 Section 1.6655... Methods of accounting. (a) In general. In computing any required installment, a corporation must use the...
Simultaneous fault detection and control design for switched systems with two quantized signals.
Li, Jian; Park, Ju H; Ye, Dan
2017-01-01
The problem of simultaneous fault detection and control design for switched systems with two quantized signals is presented in this paper. Dynamic quantizers are employed before the output is passed to the fault detector and before the control input is transmitted to the switched system. Taking the quantization errors into account, the robust performance for this kind of system is given. Furthermore, sufficient conditions for the existence of the fault detector/controller are presented in the framework of linear matrix inequalities, and the fault detector/controller gains and the supremum of the quantizer range are derived by a convex optimization method. Finally, two illustrative examples demonstrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
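A dynamic quantizer of the kind placed on the two channels above can be sketched as a uniform quantizer with a fixed number of levels whose range parameter `mu` is adjusted online; inside the range, the quantization error stays bounded by half a step. The parameter names and level count are illustrative assumptions, not the paper's design.

```python
def quantize(v, mu, levels=16):
    """Uniform quantizer with adjustable range [-mu, mu].

    Inside the range, the error is at most step/2 = mu/levels;
    outside, the output saturates at +/- mu.
    """
    step = 2 * mu / levels
    if v >= mu:
        return mu
    if v <= -mu:
        return -mu
    return round(v / step) * step  # snap to the nearest level
```

The "dynamic" aspect is that a supervisor can enlarge `mu` to avoid saturation or shrink it to refine resolution, which is what the derived supremum of the quantizer range bounds.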
Evaporation kinetics in the hanging drop method of protein crystal growth
NASA Technical Reports Server (NTRS)
Baird, James K.; Frieden, Richard W.; Meehan, E. J., Jr.; Twigg, Pamela J.; Howard, Sandra B.; Fowlis, William A.
1987-01-01
An engineering analysis of the rate of evaporation of solvent in the hanging drop method of protein crystal growth is presented; these results are applied to 18 different drop and well arrangements commonly encountered in the laboratory, taking into account the chemical nature of the salt, the drop size and shape, the drop concentration, the well size, the well concentration, and the temperature. It is found that the rate of evaporation increases with temperature, drop size, and with the salt concentration difference between the drop and the well. The evaporation possesses no unique half-life. Once the salt in the drop achieves about 80 percent of its final concentration, further evaporation suffers from the law of diminishing returns.
NASA Astrophysics Data System (ADS)
Rahman, P. A.
2018-05-01
This paper deals with two-level backbone computer networks of arbitrary topology. The author presents a specialized method for calculating the stationary availability factor of such networks, based on Markov reliability models for a set of independent repairable elements with given failure and repair rates, together with methods from discrete mathematics. A specialized algorithm for analyzing network connectivity, which takes into account different kinds of network equipment failures, is also described. Finally, the paper presents an example calculation of the stationary availability factor for a backbone computer network with a given topology.
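The element-level building block of such a calculation is standard: each repairable element is a two-state Markov process with failure rate `lam` and repair rate `mu`, whose stationary availability is mu / (lam + mu), and independent elements along a series path multiply. The rates below are illustrative assumptions; the paper's method additionally combines these factors over the full network connectivity structure.

```python
def element_availability(lam, mu):
    """Stationary availability of one repairable element with failure
    rate lam and repair rate mu (two-state Markov model)."""
    return mu / (lam + mu)

def series_availability(elements):
    """A series path is up only when every element is up, so the
    availabilities of independent elements multiply."""
    a = 1.0
    for lam, mu in elements:
        a *= element_availability(lam, mu)
    return a

# e.g. two links on a backbone path (failures/hour, repairs/hour)
path = [(0.001, 0.1), (0.002, 0.1)]
a = series_availability(path)
```

Parallel redundant paths would instead combine as 1 - (1 - a1)(1 - a2); the arbitrary-topology case is what requires the discrete-mathematics connectivity analysis the abstract mentions.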
Toward a Measure of Accountability in Nursing: A Three-Stage Validation Study.
Drach-Zahavy, Anat; Leonenko, Marina; Srulovici, Einav
2018-06-04
To develop and psychometrically evaluate a three-dimensional questionnaire suitable for evaluating personal and organizational accountability in nurses. Accountability is defined as a three-dimensional value, directing professionals to take responsibility for their decisions and actions, to be willing to explain them (transparency) and to be judged according to society's accepted values (answerability). Despite the relatively clear definition, measurement of accountability lags well behind. Existing self-report questionnaires do not fully capture the complexity of the concept; nor do they capture the different sources of accountability (e.g., personal accountability, organizational accountability). A three-stage measure development. Data were collected during 2015-2016. In Phase 1, an initial database of items (N = 74) was developed, based on literature review and qualitative study, establishing face and content validity. In Phase 2, the face, content, construct and criterion-related validity of the initial questionnaires (19 items for personal and organizational accountability questionnaire) was established with a sample of 229 nurses. In Phase 3, the final questionnaires (19 items each) were validated with a new sample of 329 nurses and established construct validity. The final version of the instruments comprised 19 items, suitable for assessing personal and organizational accountability. The questionnaire referred to the dimensions of responsibility, transparency and answerability. The findings established the instrument's content, construct and criterion-related validity, as well as good internal reliability. The questionnaire portrays accountability in nursing, by capturing nurses' subjective perceptions of accountability dimensions (responsibility, transparency, answerability), as demonstrated by personal and organizational values. This article is protected by copyright. All rights reserved.
48 CFR 1830.7001-4 - Postaward FCCOM applications.
Code of Federal Regulations, 2011 CFR
2011-10-01
... SPACE ADMINISTRATION GENERAL CONTRACTING REQUIREMENTS COST ACCOUNTING STANDARDS ADMINISTRATION... cost of money factors are finalized, use the new factors to calculate FCCOM for the next accounting... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false Postaward FCCOM...
Compressive strength of human openwedges: a selection method
NASA Astrophysics Data System (ADS)
Follet, H.; Gotteland, M.; Bardonnet, R.; Sfarghiu, A. M.; Peyrot, J.; Rumelhart, C.
2004-02-01
A series of 44 samples of bone wedges of human origin, intended for allograft openwedge osteotomy and obtained without particular precautions during hip arthroplasty, were re-examined. After viral-inactivation chemical treatment, lyophilisation and radio-sterilisation (intended to produce optimal health safety), the compressive strength, independent of age, sex and the height of the sample (or angle of cut), proved to be too widely dispersed [10-158 MPa] in the first study. We propose a method for selecting samples which takes into account their geometry (width, length, thicknesses, cortical surface area). Statistical methods (Principal Components Analysis (PCA), Hierarchical Cluster Analysis, multilinear regression) allowed final selection of 29 samples having a mean compressive strength σ_max = 103 ± 26 MPa, with a range of [61-158 MPa]. These results are equivalent to or greater than those of average materials currently used in openwedge osteotomy.
ERIC Educational Resources Information Center
Blinn Coll., Brenham, TX.
Blinn College final course grade distributions are summarized for spring 1990 to 1994 in this four-part report. Section I presents tables of final grade distributions by campus and course in accounting; agriculture; anthropology; biology; business; chemistry; child development; communications; computer science; criminal justice; drama; emergency…
Finite Element analyses of soil bioengineered slopes
NASA Astrophysics Data System (ADS)
Tamagnini, Roberto; Switala, Barbara Maria; Sudan Acharya, Madhu; Wu, Wei; Graf, Frank; Auer, Michael; te Kamp, Lothar
2014-05-01
Soil bioengineering methods are not only effective from an economic point of view; they are also attractive as fully ecological solutions. The presented project aims to define a numerical model that includes the impact of vegetation on slope stability, considering both mechanical and hydrological effects. A constitutive model has been developed that accounts for the multi-phase nature of the soil, namely the partly saturated condition, and also includes the effects of a biological component. The constitutive equation is implemented in the Finite Element (FE) software Comes-Geo with an implicit integration scheme that accounts for the collapse of the soil structure due to wetting. The mathematical formulation of the constitutive equations is introduced by means of thermodynamics, and it simulates the growth of the biological system over time. The numerical code is then applied to the analysis of an idealized rainfall-induced landslide. The slope is analyzed for vegetated and non-vegetated conditions. The final results allow the impact of vegetation on slope stability to be assessed quantitatively. This makes it possible to draw conclusions and to decide whether it is worthwhile to use soil bioengineering methods for slope stabilization instead of traditional approaches. The application of the FE method shows some advantages with respect to the commonly used limit-equilibrium analyses, because it can account for the real coupled strain-diffusion nature of the problem; the mechanical strength of roots is in fact influenced by the stress evolution within the slope. Moreover, the FE method does not require a pre-defined failure surface, and it can be used to monitor the progressive failure of the soil-bioengineered system, as it calculates the displacements and strains of the model slope.
The preliminary study results show that the formulated equations can be useful for analysis and evaluation of different soil bioengineering methods of slope stabilization.
Imputation approaches for animal movement modeling
Scharf, Henry; Hooten, Mevin B.; Johnson, Devin S.
2017-01-01
The analysis of telemetry data is common in animal ecological studies. While the collection of telemetry data for individual animals has improved dramatically, the methods to properly account for inherent uncertainties (e.g., measurement error, dependence, barriers to movement) have lagged behind. Still, many new statistical approaches have been developed to infer unknown quantities affecting animal movement or predict movement based on telemetry data. Hierarchical statistical models are useful to account for some of the aforementioned uncertainties, as well as provide population-level inference, but they often come with an increased computational burden. For certain types of statistical models, it is straightforward to provide inference if the latent true animal trajectory is known, but challenging otherwise. In these cases, approaches related to multiple imputation have been employed to account for the uncertainty associated with our knowledge of the latent trajectory. Despite the increasing use of imputation approaches for modeling animal movement, the general sensitivity and accuracy of these methods have not been explored in detail. We provide an introduction to animal movement modeling and describe how imputation approaches may be helpful for certain types of models. We also assess the performance of imputation approaches in two simulation studies. Our simulation studies suggest that inference for model parameters directly related to the location of an individual may be more accurate than inference for parameters associated with higher-order processes such as velocity or acceleration. Finally, we apply these methods to analyze a telemetry data set involving northern fur seals (Callorhinus ursinus) in the Bering Sea. Supplementary materials accompanying this paper appear online.
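The imputation idea described above can be sketched in miniature: draw several plausible realizations of the latent trajectory, estimate the movement parameter on each as if the trajectory were known, then pool the estimates. The toy "parameter" (mean step length), the Gaussian measurement-error model, and all names are illustrative assumptions, not the paper's model.

```python
import random

def impute_trajectory(obs, noise_sd, rng):
    """One plausible true trajectory: observed fixes plus measurement error."""
    return [x + rng.gauss(0.0, noise_sd) for x in obs]

def mean_step(traj):
    """Toy movement parameter: mean absolute step length."""
    return sum(abs(b - a) for a, b in zip(traj, traj[1:])) / (len(traj) - 1)

def pooled_estimate(obs, noise_sd=0.1, n_imputations=50, seed=1):
    """Fit the parameter on each imputed trajectory, then pool: the mean is
    the point estimate, the between-imputation variance reflects the
    uncertainty due to not knowing the latent trajectory."""
    rng = random.Random(seed)
    estimates = [mean_step(impute_trajectory(obs, noise_sd, rng))
                 for _ in range(n_imputations)]
    point = sum(estimates) / len(estimates)
    between_var = sum((e - point) ** 2 for e in estimates) / (len(estimates) - 1)
    return point, between_var

fixes = [0.0, 1.0, 2.1, 2.9, 4.0]  # 1D observed locations
point, var = pooled_estimate(fixes)
```

In a full analysis the between-imputation variance would be combined with the within-imputation variance (Rubin's rules) to get honest standard errors.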
NASA Astrophysics Data System (ADS)
Sheikholeslami, M.; Ganji, D. D.
2017-12-01
In this paper, a semi-analytical approach is applied to investigate nanofluid Marangoni convection in the presence of a magnetic field. The Koo-Kleinstreuer-Li model is used to simulate the nanofluid properties. The homotopy analysis method is utilized to solve the final ordinary differential equations obtained from the similarity transformation. The roles of the Hartmann number and the nanofluid volume fraction are presented graphically. Results show that temperature rises with increasing nanofluid volume fraction. The impact of the nanofluid volume fraction on the normal velocity is greater than on the tangential velocity. The temperature gradient increases with the magnetic number.
Research on optimal investment path of transmission corridor under the global energy Internet
NASA Astrophysics Data System (ADS)
Huang, Yuehui; Li, Pai; Wang, Qi; Liu, Jichun; Gao, Han
2018-02-01
Against the background of the global energy Internet, this article studies the investment planning of a transmission corridor from Xinjiang to Germany, which passes through four countries: Kazakhstan, Russia, Belarus and Poland. Taking the specific situation of each country into account, including transmission-line length, unit construction cost, completion time, transmission price, state tariff, inflation rate and so on, the paper constructs a power transmission investment model. Finally, the dynamic programming method is used to solve an example, and the optimal strategies under different objective functions are obtained.
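The dynamic-programming recursion behind such a staged corridor plan can be sketched as follows: each country segment can be built with one of two technologies, segment costs differ per country, and switching technology at a border adds a converter-station cost, so the cheapest-per-segment choice is not necessarily optimal. The two-option state space and all numbers are illustrative assumptions, not the study's data.

```python
def optimal_corridor(segment_costs, switch_cost):
    """segment_costs: list of {tech: cost} per country, in corridor order.
    Returns (minimum total cost, chosen technology per segment)."""
    # best[tech] = (cost so far, choices so far) for plans ending in `tech`
    best = {t: (c, [t]) for t, c in segment_costs[0].items()}
    for seg in segment_costs[1:]:
        new_best = {}
        for tech, cost in seg.items():
            cands = []
            for prev, (pc, path) in best.items():
                extra = 0 if prev == tech else switch_cost  # border switch
                cands.append((pc + cost + extra, path + [tech]))
            new_best[tech] = min(cands, key=lambda x: x[0])
        best = new_best
    return min(best.values(), key=lambda x: x[0])

# three toy country segments, each buildable as "AC" or "DC"
segments = [{"AC": 4, "DC": 6}, {"AC": 7, "DC": 3}, {"AC": 5, "DC": 4}]
cost, plan = optimal_corridor(segments, switch_cost=2)
```

The state carried between stages (the technology at the border) is what makes this a dynamic program rather than a per-segment greedy choice.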
Intercode comparison of gyrokinetic global electromagnetic modes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Görler, T., E-mail: tobias.goerler@ipp.mpg.de; Tronko, N.; Hornsby, W. A.
Aiming to fill a lack of sophisticated test cases for global electromagnetic gyrokinetic codes, a new hierarchical benchmark is proposed. Starting from established test sets with adiabatic electrons, fully gyrokinetic electrons and electrostatic fluctuations are taken into account before finally studying the global electromagnetic micro-instabilities. Results from up to five codes are shown, involving representatives of different numerical approaches such as particle-in-cell, Eulerian and semi-Lagrangian methods. By means of spectrally resolved growth rates and frequencies and mode-structure comparisons, agreement can be confirmed on ion-gyroradius scales, thus providing confidence in the correct implementation of the underlying equations.
Risk Assessment During the Final Phase of an Uncontrolled Re-Entry
NASA Astrophysics Data System (ADS)
Gaudel, A.; Hourtolle, C.; Goester, J. F.; Fuentes, N.
2013-09-01
As the French national space agency, CNES is empowered to monitor compliance with the technical regulations of the French Space Operation Act (FSOA) and to take all necessary measures to ensure the safety of people, property, public health and the environment for all space operations involving French responsibility at the international level. CNES therefore developed ELECTRA, which calculates the risk to the ground population for three types of events: rocket launch, controlled re-entry and uncontrolled re-entry. For the first two cases, ELECTRA takes into account degraded cases due to a premature stop of propulsion. Major evolutions were implemented recently in ELECTRA to meet new user requirements, such as risk assessment during the final phase of an uncontrolled re-entry, which can be combined with the computed risk for each country affected by impacts. The purpose of this paper is to provide an overview of the ELECTRA method and its main functionalities, and then to highlight these recent improvements.
Generalised syntheses of ordered mesoporous oxides: the atrane route
NASA Astrophysics Data System (ADS)
Cabrera, Saúl; El Haskouri, Jamal; Guillem, Carmen; Latorre, Julio; Beltrán-Porter, Aurelio; Beltrán-Porter, Daniel; Marcos, M. Dolores; Amorós, Pedro
2000-06-01
A new, simple and versatile technique to obtain mesoporous oxides is presented. While it relies on surfactant-assisted formation of mesostructured intermediates, the original chemical contribution of this approach lies in the use of atrane complexes as precursors. Despite their inherent instability in aqueous solution, the atranes show a marked inertness towards hydrolysis. By bringing kinetic factors into play, it becomes possible to control the processes involved in the formation of the surfactant-inorganic composite micelles, which constitute the elemental building blocks of the mesostructures. Independent of the starting compositional complexity, both the mesostructured intermediates and the final mesoporous materials are chemically homogeneous. The final ordered mesoporous materials are thermally stable and show unimodal porosity, as well as homogeneous microstructure and texture. Examples of materials synthesised on account of the versatility of this new method, including siliceous, non-siliceous and mixed oxides, are presented and discussed.
Correlation Energies from the Two-Component Random Phase Approximation.
Kühn, Michael
2014-02-11
The correlation energy within the two-component random phase approximation accounting for spin-orbit effects is derived. The resulting plasmon equation is rewritten, analogously to the scalar relativistic case, in terms of the trace of two Hermitian matrices for (Kramers-restricted) closed-shell systems, and then represented as an integral over imaginary frequency using the resolution-of-the-identity approximation. The final expression is implemented in the TURBOMOLE program suite. The code is applied to the computation of equilibrium distances and vibrational frequencies of heavy diatomic molecules. The efficiency is demonstrated by calculation of the relative energies of the Oh-, D4h-, and C5v-symmetric isomers of Pb6. Results within the random phase approximation are obtained based on two-component Kohn-Sham reference-state calculations, using effective-core potentials. These values are finally compared to other two-component and scalar relativistic methods, as well as experimental data.
Winters, Eric R; Petosa, Rick L; Charlton, Thomas E
2003-06-01
To examine whether high school students' self-regulation actions and their perceptions of self-efficacy to overcome exercise barriers, social situation, and outcome expectations predict non-school-related moderate and vigorous physical exercise. High school students enrolled in introductory physical education courses completed questionnaires that targeted selected Social Cognitive Theory variables. They also self-reported their typical "leisure-time" exercise participation using a standardized questionnaire. Bivariate correlations and hierarchical regressions were computed for reports of moderate and vigorous exercise frequency. Each predictor variable was significantly associated with measures of moderate and vigorous exercise frequency. All predictor variables were significant in the final regression model used to explain vigorous exercise. After controlling for the effects of gender, the psychosocial variables explained 29% of the variance in vigorous exercise frequency. Three of the four predictor variables were significant in the final regression equation used to explain moderate exercise. The final regression equation accounted for 11% of the variance in moderate exercise frequency. Professionals who attempt to increase the prevalence of physical exercise through educational methods should focus on the psychosocial variables utilized in this study.
25 CFR 170.605 - When may BIA use force account methods in the IRR Program?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 1 2010-04-01 2010-04-01 false When may BIA use force account methods in the IRR Program... § 170.605 When may BIA use force account methods in the IRR Program? BIA may use force account methods... account project activities. ...
Not simply more of the same: distinguishing between patient heterogeneity and parameter uncertainty.
Vemer, Pepijn; Goossens, Lucas M A; Rutten-van Mölken, Maureen P M H
2014-11-01
In cost-effectiveness (CE) Markov models, heterogeneity in the patient population is not automatically taken into account. We aimed to compare methods of dealing with heterogeneity and their effects on estimates of CE, using a case study in chronic obstructive pulmonary disease (COPD). We first present a probabilistic sensitivity analysis (PSA) in which we sampled only from distributions representing parameter uncertainty; this ignores any heterogeneity. Next, we explored heterogeneity by presenting results for subgroups, using a method that samples parameter uncertainty simultaneously with heterogeneity in a single-loop PSA. Finally, we distinguished parameter uncertainty from heterogeneity in a double-loop PSA by performing a nested simulation within each PSA iteration. Point estimates and uncertainty differed substantially between methods. The incremental CE ratio (ICER) ranged from € 4900 to € 13,800. The single-loop PSA led to a substantially different shape of the CE plane and an overestimation of the uncertainty compared with the other three methods. The CE plane for the double-loop PSA showed substantially less uncertainty and a stronger negative correlation between the difference in costs and the difference in effects compared with the other methods, at the cost of higher calculation times. Ignoring heterogeneity, subgroup analysis, and the double-loop PSA can all be viable options, depending on the decision makers' information needs. The single-loop PSA should not be used in CE research: it disregards the fundamental differences between heterogeneity and sampling uncertainty and overestimates uncertainty as a result. © The Author(s) 2014.
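The double-loop structure can be sketched in miniature: the outer loop samples parameter uncertainty, the inner loop samples patient heterogeneity, and only the inner mean enters the CE distribution, so heterogeneity is averaged out rather than mistaken for decision uncertainty. The toy cost model, distributions, and sample sizes are illustrative assumptions, not the study's COPD model.

```python
import random

def double_loop_psa(n_outer=200, n_inner=100, seed=7):
    """Nested Monte Carlo: outer = parameter uncertainty,
    inner = patient heterogeneity (averaged out per outer draw)."""
    rng = random.Random(seed)
    outer_costs = []
    for _ in range(n_outer):
        unit_cost = rng.gauss(100.0, 10.0)  # one parameter-uncertainty draw
        # inner loop: heterogeneous patients under this parameter draw
        inner = [unit_cost * max(0.0, rng.gauss(1.0, 0.3))
                 for _ in range(n_inner)]
        outer_costs.append(sum(inner) / n_inner)  # average out heterogeneity
    mean = sum(outer_costs) / n_outer
    var = sum((c - mean) ** 2 for c in outer_costs) / (n_outer - 1)
    return mean, var

mean, var = double_loop_psa()
```

Collapsing the two loops into one (sampling a fresh patient and fresh parameters together, as in the single-loop PSA) would add the heterogeneity variance to the decision-uncertainty variance, which is exactly the overestimation the abstract warns against.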
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-05
... restructuring program. Following completion of the Large Account's restructuring, information must be provided... DEPARTMENT OF LABOR Employee Benefits Security Administration Proposed Extension of Information...; Final Rules and Class Prohibited Transaction Exemption 2006-16 Relating to Terminated Individual Account...
75 FR 44065 - Uniformed Services Accounts and Death Benefits
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-28
... Benefits AGENCY: Federal Retirement Thrift Investment Board. ACTION: Final rule. SUMMARY: The Federal Retirement Thrift Investment Board (Agency) is making several changes to its death benefits regulations. In... both accounts. The Agency is also amending its death benefit regulations to allow participants to...
26 CFR 1.818-2 - Accounting provisions.
Code of Federal Regulations, 2010 CFR
2010-04-01
... INCOME TAXES Miscellaneous Provisions § 1.818-2 Accounting provisions. (a) Method of accounting. (1... accounting. Thus, the over-all method of accounting for life insurance companies shall be the accrual method...
7 CFR 4284.12 - Reporting requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... period ends. (b) Semi-annual performance reports that compare accomplishments to the objectives stated in..., articles of incorporation and bylaws and an accounting of how working capital funds were spent. (c) Final project performance reports, inclusive of supporting documentation. The final performance report is due...
NASA Astrophysics Data System (ADS)
Ma, J.; Liu, Q.
2018-02-01
This paper presents an improved short circuit calculation method, based on a pre-computed surface, for determining the short circuit current of a distribution system with multiple doubly fed induction generators (DFIGs). The short circuit current injected into the power grid by a DFIG is determined by its low voltage ride through (LVRT) control and protection under grid fault. However, existing methods struggle to calculate the short circuit current of a DFIG in engineering practice because of its complexity. The proposed method builds a surface describing how the short circuit current changes with the calculating impedance and the open circuit voltage, and derives the short circuit currents while taking the rotor excitation and the crowbar activation time into account. Finally, pre-computed surfaces of the short circuit current at different times were established, and a procedure for DFIG short circuit calculation considering LVRT was designed. The correctness of the proposed method was verified by simulation.
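The pre-computed surface lookup can be illustrated with plain bilinear interpolation over a grid of calculating impedance and open-circuit voltage (the surface values below are a stand-in simply proportional to V/Z, not currents from an actual DFIG LVRT model):

```python
import numpy as np

# Hypothetical pre-computed surface: axes are the calculating impedance and
# the open-circuit voltage (both per-unit); values are the DFIG short-circuit
# current. The stand-in surface below is simply proportional to V/Z.
z_axis = np.linspace(0.1, 1.0, 10)
v_axis = np.linspace(0.2, 1.0, 9)
I_surface = v_axis[None, :] / z_axis[:, None]

def lookup_current(z, v):
    """Bilinear interpolation on the pre-computed surface, replacing the
    expensive transient calculation at evaluation time."""
    i = int(np.clip(np.searchsorted(z_axis, z) - 1, 0, len(z_axis) - 2))
    j = int(np.clip(np.searchsorted(v_axis, v) - 1, 0, len(v_axis) - 2))
    tz = (z - z_axis[i]) / (z_axis[i + 1] - z_axis[i])
    tv = (v - v_axis[j]) / (v_axis[j + 1] - v_axis[j])
    return ((1 - tz) * (1 - tv) * I_surface[i, j]
            + tz * (1 - tv) * I_surface[i + 1, j]
            + (1 - tz) * tv * I_surface[i, j + 1]
            + tz * tv * I_surface[i + 1, j + 1])
```

In the paper's scheme one such surface would be stored per evaluation time, with the crowbar state baked into the pre-computation.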
Kieslich, Katharina; Littlejohns, Peter
2015-07-10
Clinical commissioning groups (CCGs) in England are tasked with making difficult decisions on which healthcare services to provide against the background of limited budgets. The question is how to ensure that these decisions are fair and legitimate. Accounts of what constitutes fair and legitimate priority setting in healthcare include Daniels' and Sabin's accountability for reasonableness (A4R) and Clark's and Weale's framework for the identification of social values. This study combines these accounts and asks whether the decisions of those CCGs that adhere to elements of such accounts are perceived as fairer and more legitimate by key stakeholders. The study addresses the empirical gap arising from a lack of research on whether frameworks such as A4R deliver what they promise. It aims to understand the criteria that feature in CCG decision-making. Finally, it examines the usefulness of a decision-making audit tool (DMAT) in identifying the process and content criteria that CCGs apply when making decisions. The adherence of a sample of CCGs to criteria emerging from theories of fair priority setting will be examined using the DMAT developed by PL. The results will be triangulated with data from semistructured interviews with key stakeholders in the CCG sample to ascertain whether there is a correlation between those CCGs that performed well in the DMAT exercise and those whose decisions are perceived positively by interviewees. Descriptive statistical methods will be used to analyse the DMAT data. A combination of quantitative and qualitative content analysis methods will be used to analyse the interview transcripts. Full ethics approval was granted by the King's College London Biomedical Sciences, Dentistry, Medicine and Natural and Mathematical Sciences Research Ethics Subcommittee. The results of the study will be disseminated through publications in peer-reviewed journals. Published by the BMJ Publishing Group Limited.
2014-01-01
Background Premarital sexual behaviors are an important issue for women's health. The present study was designed to develop and examine the psychometric properties of a scale to identify young women who are at greater risk of premarital sexual behavior. Method This was an exploratory mixed method investigation conducted in two phases. In the first phase, qualitative methods (focus group discussion and individual interview) were applied to generate items and develop the questionnaire. In the second phase, the psychometric properties (validity and reliability) of the questionnaire were assessed. Results In the first phase an item pool containing 53 statements related to premarital sexual behavior was generated. In the second phase item reduction was applied and the final version of the questionnaire, containing 26 items, was developed. The psychometric properties of this final version were assessed, and the results showed that the instrument has good structure and reliability. The results from exploratory factor analysis indicated a 5-factor solution for the instrument that jointly accounted for 57.4% of the observed variance. The Cronbach's alpha coefficient for the instrument was 0.87. Conclusion This study provided a valid and reliable scale to identify premarital sexual behavior in young women. Assessment of premarital sexual behavior might help to improve women's sexual abstinence. PMID:24924696
Robust Design of Sheet Metal Forming Process Based on Kriging Metamodel
NASA Astrophysics Data System (ADS)
Xie, Yanmin
2011-08-01
Nowadays, sheet metal forming process design is not a trivial task, owing to the complex issues that must be taken into account (conflicting design goals, forming of complex shapes, and so on), and optimization methods have been widely applied to it. Proper design methods that reduce time and costs therefore have to be developed, mostly based on computer-aided procedures. At the same time, variations during manufacturing processes may significantly influence final product quality, rendering optimal solutions non-robust. In this paper, a small design of experiments is conducted to investigate how the stochastic behavior of noise factors affects drawing quality. The finite element software LS_DYNA is used to simulate the complex sheet metal stamping processes. A Kriging metamodel is adopted to map the relation between input process parameters and part quality. The robust design model for the sheet metal forming process integrates adaptive importance sampling with the Kriging model in order to minimize the impact of the variations and achieve reliable process parameters. In the adaptive sampling, an improved criterion provides the direction in which additional training samples can be added to improve the Kriging model. Nonlinear test functions and a square stamping example (NUMISHEET'93) are employed to verify the proposed method. Final results indicate the application feasibility of the proposed method for multi-response robust design.
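A minimal Kriging (Gaussian-process) metamodel of the kind the paper relies on can be sketched in a few lines, assuming a Gaussian correlation model with a fixed, hand-picked `theta`; the paper's actual metamodel, sampling scheme, and LS_DYNA responses are not reproduced here, and the training data below are made up:

```python
import numpy as np

def kriging_fit(X, y, theta=10.0, nugget=1e-10):
    """Minimal Kriging (Gaussian-process) metamodel with a Gaussian
    correlation model exp(-theta * ||x - x'||^2); returns a predictor.
    theta is fixed by hand here rather than estimated."""
    X = np.atleast_2d(np.asarray(X, float))
    y = np.asarray(y, float)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    R = np.exp(-theta * d2) + nugget * np.eye(len(y))
    w = np.linalg.solve(R, y - y.mean())

    def predict(Xq):
        Xq = np.atleast_2d(np.asarray(Xq, float))
        d2q = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return y.mean() + np.exp(-theta * d2q) @ w

    return predict

# Usage: metamodel of a (made-up) stamping response vs. two process parameters
X = np.array([[0.0, 0.0], [0.5, 0.2], [1.0, 1.0], [0.2, 0.8]])
y = np.array([1.0, 0.7, 0.1, 0.4])
f = kriging_fit(X, y)
```

The predictor interpolates the training responses exactly (up to the nugget), which is why Kriging is a natural surrogate for expensive deterministic simulations.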
Fuzzy Filtering Method for Color Videos Corrupted by Additive Noise
Ponomaryov, Volodymyr I.; Montenegro-Monroy, Hector; Nino-de-Rivera, Luis
2014-01-01
A novel method for the denoising of color videos corrupted by additive noise is presented in this paper. The proposed technique consists of three principal filtering steps: spatial, spatiotemporal, and spatial postprocessing. In contrast to other state-of-the-art algorithms, during the first spatial step, the eight gradient values in different directions for pixels located in the vicinity of a central pixel, as well as the correlation between the analogous pixels in the R, G, and B color bands, are taken into account. These gradient values give information about the level of contamination; the designed fuzzy rules are then used to preserve the image features (textures, edges, sharpness, chromatic properties, etc.). In the second step, two neighboring video frames are processed together. Possible local motions between neighboring frames are estimated using a block matching procedure in eight directions to perform interframe filtering. In the final step, the edges and smoothed regions in the current frame are distinguished for final postprocessing filtering. Numerous simulation results confirm that this novel 3D fuzzy method performs better than other state-of-the-art techniques in terms of objective criteria (PSNR, MAE, NCD, and SSIM) as well as subjective perception via the human visual system on different color videos. PMID:24688428
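The first, spatial step can be caricatured for a single grayscale pixel: gradients toward the eight neighbours gauge contamination, and a decreasing fuzzy membership turns them into averaging weights. The membership shape and the spread parameter `K` below are hypothetical, and the real method additionally exploits inter-channel correlation and explicit fuzzy rules:

```python
import numpy as np

# Offsets of the eight neighbours around a 3x3 window's central pixel
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def fuzzy_denoise_pixel(window, K=20.0):
    """window: 3x3 grayscale neighbourhood. Gradients toward the eight
    neighbours gauge local contamination; a Gaussian-shaped fuzzy membership
    ("gradient is SMALL") converts them into weights, so dissimilar neighbours
    (likely edges) contribute little. K is a hypothetical spread parameter."""
    c = window[1, 1]
    neighbors = np.array([window[1 + di, 1 + dj] for di, dj in OFFSETS])
    grads = np.abs(neighbors - c)                 # eight directional gradients
    w = np.exp(-(grads / K) ** 2)                 # fuzzy membership weights
    return (w @ neighbors + c) / (w.sum() + 1.0)  # weighted average incl. center
```

On a flat patch all weights are near 1 and the filter averages; across an edge the weights collapse and the edge survives.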
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yuxuan; Martin, William; Williams, Mark
In this paper, a correction-based resonance self-shielding method is developed that allows annular subdivision of the fuel rod. The method performs the conventional iteration of the embedded self-shielding method (ESSM) without subdivision of the fuel to capture the interpin shielding effect. The resultant self-shielded cross sections are modified by correction factors incorporating the intrapin effects of radial variation of the shielded cross section, radial temperature distribution, and resonance interference. A quasi-one-dimensional slowing-down equation is developed to calculate such correction factors. The method is implemented in the DeCART code and compared with the conventional ESSM and the subgroup method against benchmark MCNP results. The new method yields substantially improved results for both spatially dependent reaction rates and eigenvalues for typical pressurized water reactor pin cell cases with uniform and nonuniform fuel temperature profiles. Finally, the new method also proves effective in treating assembly heterogeneity and complex material compositions such as mixed oxide fuel, where resonance interference is much more intense.
32 CFR 216.5 - Responsibilities.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Finance and Accounting Service that a final determination will be made so those offices can make... Office of the Deputy Chief Financial Officer, DoD, and the Director, Defense Finance and Accounting... list of names of affected institutions that have changed their policies or practices such that they are...
Code of Federal Regulations, 2012 CFR
2012-04-01
... Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR FINANCIAL ACTIVITIES TRUST FUNDS FOR TRIBES AND INDIVIDUAL INDIANS IIM Accounts: Hearing Process for Restricting an IIM Account § 115.620 If you...
Code of Federal Regulations, 2013 CFR
2013-04-01
... Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR FINANCIAL ACTIVITIES TRUST FUNDS FOR TRIBES AND INDIVIDUAL INDIANS IIM Accounts: Hearing Process for Restricting an IIM Account § 115.620 If you...
Code of Federal Regulations, 2011 CFR
2011-04-01
... Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR FINANCIAL ACTIVITIES TRUST FUNDS FOR TRIBES AND INDIVIDUAL INDIANS IIM Accounts: Hearing Process for Restricting an IIM Account § 115.620 If you...
Code of Federal Regulations, 2010 CFR
2010-04-01
... Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR FINANCIAL ACTIVITIES TRUST FUNDS FOR TRIBES AND INDIVIDUAL INDIANS IIM Accounts: Hearing Process for Restricting an IIM Account § 115.620 If you...
Code of Federal Regulations, 2014 CFR
2014-04-01
... Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR FINANCIAL ACTIVITIES TRUST FUNDS FOR TRIBES AND INDIVIDUAL INDIANS IIM Accounts: Hearing Process for Restricting an IIM Account § 115.620 If you...
Dendritic tree extraction from noisy maximum intensity projection images in C. elegans.
Greenblum, Ayala; Sznitman, Raphael; Fua, Pascal; Arratia, Paulo E; Oren, Meital; Podbilewicz, Benjamin; Sznitman, Josué
2014-06-12
Maximum Intensity Projections (MIPs) of neuronal dendritic trees obtained from confocal microscopy are frequently used to study the relationship between tree morphology and mechanosensory function in the model organism C. elegans. Extracting dendritic trees from noisy images, however, remains a strenuous process that has traditionally relied on manual approaches. Here, we focus on automated and reliable 2D segmentation of dendritic trees following a statistical learning framework. Our dendritic tree extraction (DTE) method uses small amounts of labelled training data on MIPs to learn noise models of texture-based features from the responses of tree structures and image background. Our strategy lies in evaluating statistical models of noise that account for both the variability generated by the imaging process and the aggregation of information in the MIP images. These noise models are then used within a probabilistic, or Bayesian, framework to provide a coarse 2D dendritic tree segmentation. Finally, post-processing is applied to refine the segmentations and provide skeletonized trees using a morphological thinning process. Following a Leave-One-Out Cross Validation (LOOCV) method on an MIP database with available "ground truth" images, we demonstrate that our approach provides significant improvements in tree-structure segmentation over traditional intensity-based methods. Improvements for MIPs under various imaging conditions are both qualitative and quantitative, as measured from Receiver Operating Characteristic (ROC) curves and the yield and error rates in the final segmentations. In a final step, we demonstrate our DTE approach on previously unseen MIP samples, including the extraction of skeletonized structures, and compare our method to a state-of-the-art dendritic tree tracing software. Overall, our DTE method allows for robust dendritic tree segmentation in noisy MIPs, outperforming traditional intensity-based methods.
Such an approach provides a usable segmentation framework, ultimately delivering a speed-up for dendritic tree identification on the user end and a reliable first step towards further morphological characterization of tree arborization.
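The statistical-learning core of such a pipeline can be sketched as a per-pixel Bayesian decision between two learned Gaussian noise models; the two-dimensional features, training data, and models below are synthetic stand-ins for the paper's texture-based features:

```python
import numpy as np

def fit_gaussian(samples):
    """Fit a Gaussian noise model to labelled feature vectors."""
    s = np.asarray(samples, float)
    return s.mean(axis=0), np.cov(s.T) + 1e-6 * np.eye(s.shape[1])

def log_likelihood(x, mean, cov):
    """Log density of a multivariate Gaussian at feature vector x."""
    d = np.asarray(x, float) - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + d.size * np.log(2 * np.pi))

def classify_pixels(features, tree_model, bg_model, prior_tree=0.5):
    """Coarse Bayesian segmentation: label a pixel as tree structure iff its
    posterior under the tree noise model exceeds that of the background."""
    labels = []
    for x in np.atleast_2d(features):
        lt = log_likelihood(x, *tree_model) + np.log(prior_tree)
        lb = log_likelihood(x, *bg_model) + np.log(1.0 - prior_tree)
        labels.append(lt > lb)
    return np.array(labels)

# Train on hypothetical 2D texture features (e.g. ridge response, intensity)
rng = np.random.default_rng(0)
tree_model = fit_gaussian(rng.normal([5.0, 5.0], 0.5, size=(200, 2)))
bg_model = fit_gaussian(rng.normal([0.0, 0.0], 0.5, size=(200, 2)))
```

The coarse labels would then feed the refinement and morphological-thinning steps described above.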
General approach and scope [rotor blade design optimization]
NASA Technical Reports Server (NTRS)
Adelman, Howard M.; Mantay, Wayne R.
1989-01-01
This paper describes a joint activity involving NASA and Army researchers at the NASA Langley Research Center to develop optimization procedures aimed at improving the rotor blade design process by integrating appropriate disciplines and accounting for all of the important interactions among the disciplines. The disciplines involved include rotor aerodynamics, rotor dynamics, rotor structures, airframe dynamics, and acoustics. The work is focused on combining these five key disciplines in an optimization procedure capable of designing a rotor system to satisfy multidisciplinary design requirements. Fundamental to the plan is a three-phased approach. In phase 1, the disciplines of blade dynamics, blade aerodynamics, and blade structure will be closely coupled, while acoustics and airframe dynamics will be decoupled and accounted for as effective constraints on the design for the first three disciplines. In phase 2, acoustics is to be integrated with the first three disciplines. Finally, in phase 3, airframe dynamics will be fully integrated with the other four disciplines. This paper details the phase 1 approach, including the optimization formulation, design variables, constraints, and objective function, as well as the discipline interactions, analysis methods, and methods for validating the procedure.
Mineral inversion for element capture spectroscopy logging based on optimization theory
NASA Astrophysics Data System (ADS)
Zhao, Jianpeng; Chen, Hui; Yin, Lu; Li, Ning
2017-12-01
Understanding the mineralogical composition of a formation is a key step in the petrophysical evaluation of petroleum reservoirs. Geochemical logging tools can provide quantitative measurements of a wide range of elements. In this paper, element capture spectroscopy (ECS) was taken as an example and an optimization method was adopted to solve the mineral inversion problem for ECS. The method used the relationships that convert elements to minerals as response equations, took into account the statistical uncertainty of the element measurements, and established an optimization function for ECS. The objective function value and the reconstructed elemental logs were used to check the robustness and reliability of the inversion method. Finally, the inverted mineral results showed good agreement with x-ray diffraction laboratory data. The accurate conversion of elemental dry weights to mineral dry weights forms the foundation for subsequent applications based on ECS.
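If the response equations are linear in the mineral dry weights, the inversion step reduces to a constrained least-squares problem. A minimal sketch using non-negative least squares follows; the response matrix entries are illustrative, not calibrated ECS sensitivities, and the full method also weights each equation by its measurement uncertainty:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical response matrix: rows are elemental dry weights (Si, Ca, Fe),
# columns are minerals (quartz, calcite, pyrite); entries are illustrative
# only, not calibrated ECS sensitivities.
A = np.array([
    [0.467, 0.000, 0.000],   # Si contributed per unit of each mineral
    [0.000, 0.400, 0.000],   # Ca
    [0.000, 0.000, 0.465],   # Fe
])

def invert_minerals(element_weights):
    """Invert elemental dry weights to mineral dry weights by least squares
    with non-negativity enforced (a stand-in for the full optimization)."""
    m, residual = nnls(A, np.asarray(element_weights, float))
    return m / m.sum(), residual   # normalize to mineral weight fractions

weights, res = invert_minerals([0.30, 0.10, 0.05])
```

The residual norm plays the role of the objective function value used above to check the robustness of the inversion.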
Performance index and meta-optimization of a direct search optimization method
NASA Astrophysics Data System (ADS)
Krus, P.; Ölvander, J.
2013-10-01
Design optimization is becoming an increasingly important design tool, often using simulation as part of the evaluation of the objective function. A measure of the efficiency of an optimization algorithm is of great importance when comparing methods. The main contribution of this article is the introduction of a single performance criterion, the entropy rate index, based on Shannon's information theory, which takes both reliability and rate of convergence into account. It can also be used to characterize the difficulty of different optimization problems. Such a performance criterion can, in addition, be used to optimize the optimization algorithm itself. In this article the Complex-RF optimization method is described and its performance is evaluated and optimized using the established performance criterion. Finally, in order to predict the resources needed for optimization, an objective function temperament factor is defined that indicates the degree of difficulty of the objective function.
Removing flicker based on sparse color correspondences in old film restoration
NASA Astrophysics Data System (ADS)
Huang, Xi; Ding, Youdong; Yu, Bing; Xia, Tianran
2018-04-01
Archived film is an indispensable part of the record of human civilization, and digital restoration of damaged film is now a mainstream approach. In this paper, we propose a technique based on sparse color correspondences to remove fading flicker from old films. Our model, which combines multiple frames to establish a simple correction model, includes three key steps. First, we recover sparse color correspondences in the input frames to build a matrix with many missing entries. Second, we present a low-rank matrix factorization approach to estimate the unknown parameters of this model. Finally, we adopt a two-step strategy that divides the estimated parameters into reference frame parameters, used for color recovery correction, and other frame parameters, used for color consistency correction, to remove flicker. By combining multiple frames, our method takes the continuity of the input sequence into account, and the experimental results show that it removes fading flicker efficiently.
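The second step, low-rank factorization of a matrix with missing entries, can be sketched with plain alternating least squares. This is a generic stand-in for the paper's estimator; the rank, regularization, and synthetic data below are illustrative:

```python
import numpy as np

def als_lowrank(M, mask, rank=2, iters=100, lam=1e-3):
    """Alternating least squares for low-rank completion of a matrix with
    missing entries: fit only the observed entries (mask == True), solving
    a small ridge-regularized least-squares problem per row and column."""
    rng = np.random.default_rng(0)
    m, n = M.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    reg = lam * np.eye(rank)
    for _ in range(iters):
        for i in range(m):                      # update row factors
            obs = mask[i]
            U[i] = np.linalg.solve(V[obs].T @ V[obs] + reg, V[obs].T @ M[i, obs])
        for j in range(n):                      # update column factors
            obs = mask[:, j]
            V[j] = np.linalg.solve(U[obs].T @ U[obs] + reg, U[obs].T @ M[obs, j])
    return U, V

# Synthetic check: an exactly rank-2 matrix with ~30% of entries missing
rng = np.random.default_rng(1)
truth = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
mask = rng.random((20, 15)) < 0.7
U, V = als_lowrank(truth, mask, rank=2)
```

In the flicker model the rows would index frames and the columns correspondences, so the recovered factors fill in the missing color observations.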
Tumor growth model for atlas based registration of pathological brain MR images
NASA Astrophysics Data System (ADS)
Moualhi, Wafa; Ezzeddine, Zagrouba
2015-02-01
The motivation of this work is to register a tumor-bearing brain magnetic resonance (MR) image with a normal brain atlas. A normal brain atlas is deformed to account for the presence of a large space-occupying tumor. The method uses an a priori model of tumor growth, assuming that the tumor grows radially from a starting point. First, an affine transformation brings the patient image and the brain atlas into global correspondence. Second, the seeding of a synthetic tumor into the brain atlas provides a template for the lesion. Finally, the seeded atlas is deformed by combining a method derived from optical flow principles with a model of tumor growth (MTG). Results show that this approach provides automatic segmentation of brain structures in the presence of large deformations.
Compromise decision support problems for hierarchical design involving uncertainty
NASA Astrophysics Data System (ADS)
Vadde, S.; Allen, J. K.; Mistree, F.
1994-08-01
In this paper an extension to the traditional compromise Decision Support Problem (DSP) formulation is presented. Bayesian statistics is used in the formulation to model uncertainties associated with the information being used. In an earlier paper a compromise DSP that accounts for uncertainty using fuzzy set theory was introduced. The Bayesian Decision Support Problem is described in this paper. The method for hierarchical design is demonstrated by using this formulation to design a portal frame. The results are discussed and comparisons are made with those obtained using the fuzzy DSP. Finally, the efficacy of incorporating Bayesian statistics into the traditional compromise DSP formulation is discussed and some pending research issues are described. Our emphasis in this paper is on the method rather than the results per se.
Personality, Motivation, and Math Achievement Among Turkish Students.
Akben-Selcuk, Elif
2017-04-01
Using the Turkish portion of the Programme for International Student Assessment dataset (N = 4,848; 51% boys, 49% girls; age M = 15.81 years, SD = 0.28), this study investigated factors associated with mathematics achievement among Turkish students. Three different models were estimated using balanced repeated replication with Fay's method, taking into account the presence of five plausible values of the dependent variable. Results showed that male students and older students had better mathematics proficiency. Socio-economic status and school resources also played a significant role in explaining student achievement in mathematics. Finally, students who were more open to problem solving, who attributed their failure to external factors, and who were intrinsically motivated to learn mathematics achieved higher scores. Policy implications are provided.
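Analyses with plausible values are run once per value and then pooled, which is the mechanics behind "taking into account the presence of five plausible values". A minimal sketch of the standard combining rules (Rubin's rules) follows; the numbers are invented:

```python
import numpy as np

def pool_plausible_values(estimates, sampling_vars):
    """Pool one analysis per plausible value (Rubin's rules): the point
    estimate is the mean across PVs, and the total variance adds the
    between-PV variance to the average sampling variance."""
    q = np.asarray(estimates, float)
    u = np.asarray(sampling_vars, float)
    M = len(q)
    qbar = q.mean()                      # pooled point estimate
    ubar = u.mean()                      # within-PV (sampling) variance
    b = q.var(ddof=1)                    # between-PV variance
    total_var = ubar + (1.0 + 1.0 / M) * b
    return qbar, float(np.sqrt(total_var))

# e.g. one regression coefficient estimated separately on each of 5 PVs,
# with an (invented) sampling variance of 0.02**2 per analysis
est, se = pool_plausible_values([0.42, 0.45, 0.40, 0.44, 0.43], [0.02**2] * 5)
```

Ignoring the between-PV term would understate the standard error, which is why the plausible values must be analyzed separately rather than averaged first.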
Efficient methods for joint estimation of multiple fundamental frequencies in music signals
NASA Astrophysics Data System (ADS)
Pertusa, Antonio; Iñesta, José M.
2012-12-01
This study presents efficient techniques for multiple fundamental frequency estimation in music signals. The proposed methodology can infer harmonic patterns from a mixture, considering interactions with other sources, and evaluate them in a joint estimation scheme. For this purpose, a set of fundamental frequency candidates is first selected at each frame, and several hypothetical combinations of them are generated. The combinations are evaluated independently, and the most likely one is selected taking into account the intensity and spectral smoothness of its inferred patterns. The method is extended by considering adjacent frames in order to smooth the detection in time, and a pitch tracking stage is finally performed to increase temporal coherence. The proposed algorithms were evaluated in MIREX contests, yielding state-of-the-art results with a very low computational burden.
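The joint estimation scheme can be sketched as scoring candidate combinations by summed harmonic intensity minus a spectral-smoothness penalty. The salience function, search tolerance, and weights below are hypothetical simplifications of the paper's measure:

```python
import itertools
import numpy as np

def harmonic_pattern(spectrum, f0_bin, n_harm=8, tol=1):
    """Amplitudes at (approximate) harmonic positions of a candidate f0,
    searching +/- tol bins around each expected partial."""
    amps = []
    for h in range(1, n_harm + 1):
        k = h * f0_bin
        window = spectrum[max(0, k - tol):k + tol + 1]
        amps.append(window.max() if window.size else 0.0)
    return np.array(amps)

def score_combination(spectrum, f0s):
    """Hypothetical salience: summed harmonic intensity minus a penalty on
    spectral smoothness (variation between adjacent harmonic amplitudes)."""
    total = 0.0
    for f0 in f0s:
        p = harmonic_pattern(spectrum, f0)
        total += p.sum() - 0.5 * np.abs(np.diff(p)).sum()
    return total

def best_combination(spectrum, candidates, max_polyphony=2):
    """Enumerate combinations of candidates and keep the best-scoring one."""
    combos = []
    for r in range(1, max_polyphony + 1):
        combos.extend(itertools.combinations(candidates, r))
    return max(combos, key=lambda c: score_combination(spectrum, c))
```

Restricting the enumeration to a small candidate set per frame is what keeps the joint evaluation computationally cheap.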
An object tracking method based on guided filter for night fusion image
NASA Astrophysics Data System (ADS)
Qian, Xiaoyan; Wang, Yuedong; Han, Lei
2016-01-01
Online object tracking is a challenging problem, as it entails learning an effective model to account for appearance change caused by intrinsic and extrinsic factors. In this paper, we propose a novel online object tracker with a guided image filter for accurate and robust tracking in night fusion images. First, frame differencing is applied to produce a coarse target, which helps to generate observation models. Under the restriction of these models and the local source image, the guided filter generates a sufficient and accurate foreground target. Accurate boundaries of the target can then be extracted from the detection results. Finally, timely updating of the observation models helps to avoid tracking drift. Both qualitative and quantitative evaluations on challenging image sequences demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods.
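The guided filter itself (He et al.) has a compact closed form: the output is locally a linear transform of the guidance image. A single-channel sketch follows; the surrounding tracker (frame differencing, observation models, updating) is not reproduced, and the radius and eps values are illustrative defaults:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=4, eps=1e-3):
    """Single-channel guided filter (He et al.): the output is locally a
    linear transform of the guidance image I, q = mean(a)*I + mean(b),
    which smooths p while preserving the edges present in I."""
    size = 2 * radius + 1
    box = lambda x: uniform_filter(x, size=size, mode="reflect")
    mI, mp = box(I), box(p)
    cov_Ip = box(I * p) - mI * mp
    var_I = box(I * I) - mI * mI
    a = cov_Ip / (var_I + eps)    # local linear coefficient
    b = mp - a * mI               # local offset
    return box(a) * I + box(b)

q = guided_filter(np.full((16, 16), 0.5), np.full((16, 16), 0.5))
```

Using the fused night image as guidance I and the coarse detection mask as p is one plausible way such a tracker could sharpen the foreground target.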
4 CFR 28.41 - Explanation, scope and methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... GOVERNMENT ACCOUNTABILITY OFFICE PERSONNEL... Procedures Discovery § 28.41 Explanation, scope and methods. (a) Explanation. Discovery is...
2010-01-01
Background The binding of peptide fragments of extracellular peptides to class II MHC is a crucial event in the adaptive immune response. Each MHC allotype generally binds a distinct subset of peptides and the enormous number of possible peptide epitopes prevents their complete experimental characterization. Computational methods can utilize the limited experimental data to predict the binding affinities of peptides to class II MHC. Results We have developed the Regularized Thermodynamic Average, or RTA, method for predicting the affinities of peptides binding to class II MHC. RTA accounts for all possible peptide binding conformations using a thermodynamic average and includes a parameter constraint for regularization to improve accuracy on novel data. RTA was shown to achieve higher accuracy, as measured by AUC, than SMM-align on the same data for all 17 MHC allotypes examined. RTA also gave the highest accuracy on all but three allotypes when compared with results from 9 different prediction methods applied to the same data. In addition, the method correctly predicted the peptide binding register of 17 out of 18 peptide-MHC complexes. Finally, we found that suboptimal peptide binding registers, which are often ignored in other prediction methods, made significant contributions of at least 50% of the total binding energy for approximately 20% of the peptides. Conclusions The RTA method accurately predicts peptide binding affinities to class II MHC and accounts for multiple peptide binding registers while reducing overfitting through regularization. The method has potential applications in vaccine design and in understanding autoimmune disorders. A web server implementing the RTA prediction method is available at http://bordnerlab.org/RTA/. PMID:20089173
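The thermodynamic average over binding registers amounts to a log-sum-exp (softmin) of the per-register free energies. A sketch follows, with an assumed effective temperature kT; the RTA energy function itself is not reproduced:

```python
import math

def thermodynamic_average(register_energies, kT=0.6):
    """Effective binding free energy over all candidate binding registers:
    G = -kT * ln(sum_i exp(-G_i / kT)), evaluated via log-sum-exp for
    numerical stability. kT ~ 0.6 kcal/mol is an assumed effective temperature."""
    m = min(register_energies)
    s = sum(math.exp(-(g - m) / kT) for g in register_energies)
    return m - kT * math.log(s)

def register_weights(register_energies, kT=0.6):
    """Boltzmann weight of each register: the fraction of binding events
    expected in that frame, including suboptimal registers."""
    m = min(register_energies)
    w = [math.exp(-(g - m) / kT) for g in register_energies]
    z = sum(w)
    return [x / z for x in w]
```

This makes concrete why suboptimal registers matter: any register whose energy is within a few kT of the best contributes a non-negligible Boltzmann weight to the total binding.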
26 CFR 1.446-1 - General rule for methods of accounting.
Code of Federal Regulations, 2011 CFR
2011-04-01
... books. For requirement respecting the adoption or change of accounting method, see section 446(e) and... taxpayer to adopt or change to a method of accounting permitted by this chapter although the method is not..., which require the prior approval of the Commissioner in the case of changes in accounting method. (iii...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-02
... National School Lunch Program: School Food Service Account Revenue Amendments Related to the Healthy, Hunger-Free Kids Act of 2010; Approval of Information Collection Request AGENCY: Food and Nutrition Service, USDA. ACTION: Interim final rule; approval of information collection request. SUMMARY: The Food...
32 CFR 726.9 - Reports and supervision of trustees.
Code of Federal Regulations, 2010 CFR
2010-07-01
... OF AMOUNTS DUE MENTALLY INCOMPETENT MEMBERS OF THE NAVAL SERVICE § 726.9 Reports and supervision of... the final accounting report, the trustee and the surety will be discharged from liability. (b) Failure... notification of default, the account may be forwarded to the Department of Justice for recovery of funds...
Student Personality Type versus Grading Procedures in Intermediate Accounting Courses.
ERIC Educational Resources Information Center
Lawrence, Robyn; Taylor, Larry W.
2000-01-01
The personality preferences and temperaments of 82 intermediate accounting students were identified by the Myers Briggs Type Indicator and Keirsey Temperament Sorter. Relationships were found between personality variables and the number of class absences, class participation, and the performance in homework and problems on the final examination.…
Personal Accountability in Education: Measure Development and Validation
ERIC Educational Resources Information Center
Rosenblatt, Zehava
2017-01-01
Purpose: The purpose of this three-study research project is to establish and validate a two-dimensional scale to measure teachers' and school administrators' accountability disposition. Design/methodology/approach: The scale items were developed in focus groups, and the final measure was tested on various samples of Israeli teachers and…
75 FR 9120 - Electronic Fund Transfers
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-01
...), 17(b)(4)--General Rule and Scope of Opt-In; Notice and Opt-In Requirements Section 205.17(b)(1) of the Regulation E final rule sets forth the general rule prohibiting an account-holding financial... have imperfect account balance information, the Board stated that financial institutions are in a...
Alabama Vocational Management Information System. Final Report.
ERIC Educational Resources Information Center
Patterson, Douglas; And Others
A project was developed to design and implement a management information system (MIS) to provide decision makers with accurate, usable, and timely data and information concerning input, output, and impact of vocational education. The objectives were to (1) design an MIS embracing student accounting, fiscal accounting, manpower analysis, and…
76 FR 28888 - Amendment to Procedures for Holding Funds in Dormant Filing Fee Accounts
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-19
... SECURITIES AND EXCHANGE COMMISSION 17 CFR Part 202 [Release Nos. 33-9208; 34-64495; IC-29670] Amendment to Procedures for Holding Funds in Dormant Filing Fee Accounts AGENCY: Securities and Exchange Commission. ACTION: Final rule. SUMMARY: The Securities and Exchange Commission is amending its procedures...
76 FR 16856 - Proposed Collection; Comment Request for Regulation Project
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-25
... Reduction Act of 1995, Public Law 104-13 (44 U.S.C. 3506(c)(2)(A)). Currently, the IRS is soliciting comments concerning existing final regulations, REG-208156-91 (TD 8929), Accounting for Long-Term Contracts... the Internet at [email protected] . SUPPLEMENTARY INFORMATION: Title: Accounting for Long-Term...
Carbon emissions risk map from deforestation in the tropical Amazon
NASA Astrophysics Data System (ADS)
Ometto, J.; Soler, L. S.; Assis, T. D.; Oliveira, P. V.; Aguiar, A. P.
2011-12-01
This work aims to estimate the carbon emissions from tropical deforestation in the Brazilian Amazon associated with a risk assessment of future land use change. The emissions are estimated by incorporating temporal deforestation dynamics, accounting for the biophysical and socioeconomic heterogeneity of the region as well as the dynamics of secondary forest growth in abandoned areas. The land cover change model that supported the risk assessment of deforestation was run based on linear regressions. This method takes into account the spatial heterogeneity of deforestation, as the spatial variables adopted to fit the final regression model comprise environmental aspects, economic attractiveness, accessibility, and land tenure structure. After fitting suitable regression models for each land cover category, the potential of each cell to be deforested (at 25x25 km and 5x5 km resolution) in the near future was used to calculate the risk assessment of land cover change. The carbon emissions model combines high-resolution new forest clear-cut mapping with four alternative sources of spatial information on biomass distribution for different vegetation types. The risk assessment map of CO2 emissions was obtained by crossing the simulation results of the historical land cover changes with a map of aboveground biomass contained in the remaining forest. This final map represents the risk of CO2 emissions at 25x25 km and 5x5 km until 2020, under a scenario of carbon emission reduction targets.
NASA Astrophysics Data System (ADS)
Oudin, Ludovic; Michel, Claude; Andréassian, Vazken; Anctil, François; Loumagne, Cécile
2005-12-01
An implementation of the complementary relationship hypothesis (Bouchet's hypothesis) for estimating regional evapotranspiration within two rainfall-runoff models is proposed and evaluated in terms of streamflow simulation efficiency over a large sample of 308 catchments located in Australia, France and the USA. Complementary relationship models are attractive approaches for estimating actual evapotranspiration because they rely solely on climatic variables. They are all the more interesting in that they are supported by a conceptual description of the interactions between the evaporating surface and the atmospheric boundary layer, as highlighted by Bouchet (1963). However, these approaches appear to contradict the methods prevailing in rainfall-runoff models, which compute actual evapotranspiration using soil moisture accounting procedures. The approach adopted in this article is to introduce the estimate of actual evapotranspiration provided by complementary relationship models (complementary relationship areal evapotranspiration and advection-aridity) into two rainfall-runoff models. Results show that directly using the complementary relationship approach to estimate actual evapotranspiration does not give better results than the soil moisture accounting procedures. Finally, we discuss feedback mechanisms between potential evapotranspiration and soil water availability, and their possible impact on rainfall-runoff modelling.
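Bouchet's complementary relationship can be stated as ETp + ETa = 2·ETw, so an estimate of actual evapotranspiration follows directly from two climatic quantities. A minimal sketch (the clipping at zero is our own assumption for dry extremes, not part of the cited models):

```python
def complementary_aet(etp, etw):
    """Estimate actual evapotranspiration (ETa) from Bouchet's complementary
    relationship ETp + ETa = 2 * ETw, where ETp is potential ET and ETw is
    wet-environment ET (all in the same units, e.g. mm/day)."""
    return max(0.0, 2.0 * etw - etp)  # clipped at zero (our assumption)

# Saturated conditions (ETp == ETw) recover ETa == ETw:
print(complementary_aet(3.0, 3.0))  # → 3.0
```

Under increasing aridity (ETp rising while ETw is fixed), the estimate decreases, which is exactly the complementary behaviour the hypothesis describes.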
Characterising the online weapons trafficking on cryptomarkets.
Rhumorbarbe, Damien; Werner, Denis; Gilliéron, Quentin; Staehli, Ludovic; Broséus, Julian; Rossy, Quentin
2018-02-01
Weapons-related webpages from nine cryptomarkets were manually duplicated in February 2016. Information about the listings (i.e. sales proposals) and vendors' profiles was extracted to draw an overview of the actual online trafficking of weapons. Relationships between vendors were also inferred through the analysis of online digital traces and content similarities. Weapons trafficking is mainly concentrated on two major cryptomarkets. Moreover, it accounts for a very small proportion of the illicit trafficking on cryptomarkets compared to illicit drug trafficking. Among all weapon-related listings (n=386), firearms account for only approximately 25% of sales proposals, since the proportion of non-lethal and melee weapons is substantial (around 46%). Based on the recorded pseudonyms, a total of 96 vendor profiles were identified. Some pseudonyms were encountered on several cryptomarkets, suggesting that some vendors may manage accounts on different markets. This hypothesis was strengthened by comparing pseudonyms to online traces such as PGP keys, images and profile descriptions. This method allowed a more accurate estimate of the number of vendors offering weapons across cryptomarkets. Finally, according to the gathered data, the extent of weapons trafficking on cryptomarkets appears to be limited compared to other illicit goods. Copyright © 2017 Elsevier B.V. All rights reserved.
Robust Flutter Analysis for Aeroservoelastic Systems
NASA Astrophysics Data System (ADS)
Kotikalpudi, Aditya
The dynamics of a flexible air vehicle are typically described using an aeroservoelastic model, which accounts for interaction between aerodynamics, structural dynamics, rigid body dynamics and control laws. These subsystems can be individually modeled using a theoretical approach, and experimental data from various ground tests can be incorporated into them. For instance, a combination of linear finite element modeling and data from ground vibration tests may be used to obtain a validated structural model. Similarly, an aerodynamic model can be obtained using computational fluid dynamics or simple panel methods and partially updated using limited data from wind tunnel tests. In all cases, the models obtained for these subsystems have a degree of uncertainty owing to inherent assumptions in the theory and errors in experimental data. Suitable uncertain models that account for these uncertainties can be built to study the impact of these modeling errors on the ability to predict the dynamic instability known as flutter. This thesis addresses the methods used for modeling the rigid body dynamics, structural dynamics and unsteady aerodynamics of a blended wing design called the Body Freedom Flutter vehicle. It discusses the procedure used to incorporate data from a wide range of ground-based experiments in the form of model uncertainties within these subsystems. Finally, it provides the mathematical tools for carrying out flutter analysis and sensitivity analysis that account for these model uncertainties. These analyses are carried out for both open-loop and controller-in-the-loop (closed-loop) cases.
Marshall, Andrew J; Evanovich, Emma K; David, Sarah Jo; Mumma, Gregory H
2018-01-17
High comorbidity rates among emotional disorders have led researchers to examine transdiagnostic factors that may contribute to shared psychopathology. Bifactor models provide a unique method for examining transdiagnostic variables by modelling the common and unique factors within measures. Previous findings suggest that the bifactor model of the Depression Anxiety and Stress Scale (DASS) may provide a method for examining transdiagnostic factors within emotional disorders. This study aimed to replicate the bifactor model of the DASS, a multidimensional measure of psychological distress, within a US adult sample and provide initial estimates of the reliability of the general and domain-specific factors. Furthermore, this study hypothesized that Worry, a theorized transdiagnostic variable, would show stronger relations to general emotional distress than domain-specific subscales. Confirmatory factor analysis was used to evaluate the bifactor model structure of the DASS in 456 US adult participants (279 females and 177 males, mean age 35.9 years) recruited online. The DASS bifactor model fitted well (CFI = 0.98; RMSEA = 0.05). The General Emotional Distress factor accounted for most of the reliable variance in item scores. Domain-specific subscales accounted for modest portions of reliable variance in items after accounting for the general scale. Finally, structural equation modelling indicated that Worry was strongly predicted by the General Emotional Distress factor. The DASS bifactor model is generalizable to a US community sample and General Emotional Distress, but not domain-specific factors, strongly predict the transdiagnostic variable Worry.
15 CFR 764.5 - Voluntary self-disclosure.
Code of Federal Regulations, 2010 CFR
2010-01-01
... retained by the person making the disclosure until OEE requests them, or until a final decision on the disclosed information has been made. After a final decision, the documents should be maintained in... account and supporting documentation. If the person making the disclosure believes otherwise, a request...
26 CFR 1.9002-1 - Purpose, applicability, and definitions.
Code of Federal Regulations, 2010 CFR
2010-04-01
... compute, taxable income under an accrual method of accounting, and (2) treated dealer reserve income (or portions thereof) which should have been taken into account (under the accrual method of accounting) for... accounting or who was not required to compute taxable income under the accrual method of accounting. An...
Emerging accounting trends: accounting for leases.
Valletta, Robert; Huggins, Brian
2010-12-01
A new model for lease accounting can have a significant impact on hospitals and healthcare organizations. The new approach proposes a "right-of-use" model that involves complex estimates and significant administrative burden. Hospitals and health systems that draw heavily on lease arrangements should start preparing for the new approach now even though guidance and a final rule are not expected until mid-2011. This article highlights a number of considerations from the lessee point of view.
26 CFR 1.267(a)-1 - Deductions disallowed.
Code of Federal Regulations, 2010 CFR
2010-04-01
... accrual method of accounting. For example, if the accrued expenses or interest are paid after the... an accrual method of accounting. A uses a combination of accounting methods permitted under section... disbursements method of accounting with respect to such items of gross income for his taxable year in which or...
Hong, Zhiheng; Ni, Daxin; Cao, Yang; Meng, Ling; Tu, Wenxiao; Li, Leilei; Li, Qun; Jin, Lianmei
2015-06-01
To establish a comprehensive evaluation index system for the China Public Health Emergency Events Surveillance System (CPHEESS), a draft index system was built through literature review and consideration of the characteristics of CPHEESS. The Delphi method was adopted to determine the final index system. The index system was divided into primary, secondary and tertiary levels. There were 4 primary indicators: system structure, network platform, surveillance implementation reports, and data analysis and utilization. There were 16 secondary and 70 tertiary indicators, with system structure including 14 tertiary indicators (20.00%), network platform 21 (30.00%), surveillance implementation reports 24 (34.29%), and data analysis and utilization 11 (15.71%). The average importance score of the indicators was 4.29 (3.77-4.94), with an average coefficient of variation of 0.14 (0.12-0.16). The mean Cronbach's α was 0.84 (0.81-0.89). The adaptability of each related facilities indicator was specified. The primary indicators were set in accordance with the characteristics and goals of the surveillance system. Secondary indicators provided key elements in the management and control of the system, while the tertiary indicators were available and operative. The agreement rate among experts was high, with good validity and reliability. This index system could be used for CPHEESS in the future.
NASA Astrophysics Data System (ADS)
Nguyen, Thi-Thuy-My; Gandin, Charles-André; Combeau, Hervé; Založnik, Miha; Bellet, Michel
2018-02-01
The transport of solid crystals in the liquid pool during solidification of large ingots is known to have a significant effect on their final grain structure and macrosegregation. Numerical modeling of the associated physics is challenging since complex and strong interactions between heat and mass transfer at the microscopic and macroscopic scales must be taken into account. The paper presents a finite element multi-scale solidification model coupling nucleation, growth, and solute diffusion at the microscopic scale, represented by a single unique grain, while also including transport of the liquid and solid phases at the macroscopic scale of the ingots. The numerical resolution is based on a splitting method which sequentially treats the evolution and interaction of quantities in a transport stage and a growth stage. This splitting method reduces the non-linear complexity of the set of equations and is, for the first time, implemented using the finite element method. This is possible due to the introduction of an artificial diffusion in all conservation equations solved by the finite element method. Simulations with and without grain transport are compared to demonstrate the impact of solid phase transport on the solidification process as well as the formation of macrosegregation in a binary alloy (Sn-5 wt pct Pb). The model is also applied to the solidification of the binary alloy Fe-0.36 wt pct C in a domain representative of a 3.3-ton steel ingot.
42 CFR § 512.307 - Subsequent calculations.
Code of Federal Regulations, 2010 CFR
2017-10-01
... (CONTINUED) HEALTH CARE INFRASTRUCTURE AND MODEL PROGRAMS EPISODE PAYMENT MODEL Pricing and Payment § 512.307... the initial NPRA, using claims data and non-claims-based payment data available at that time, to account for final claims run-out, final changes in non-claims-based payment data, and any additional...
The Effects of Written Comments on Student Performance.
ERIC Educational Resources Information Center
Leauby, Bruce A.; Atkinson, Maryanne
1989-01-01
Three accounting teachers gave two tests and a comprehensive final to 417 undergraduates using one of three treatments: no comments written on the test paper, comments at the teacher's discretion, or standard comments. The type of comment did not affect subsequent test performance but did significantly affect performance on the final exam, especially for…
Sensing Attribute Weights: A Novel Basic Belief Assignment Method
Jiang, Wen; Zhuang, Miaoyan; Xie, Chunhe; Wu, Jun
2017-01-01
Dempster–Shafer evidence theory is widely used in many soft sensor data fusion systems on account of its good performance in handling the uncertainty information of soft sensors. However, how to determine the basic belief assignment (BBA) is still an open issue. The existing methods to determine BBA do not consider the reliability of each attribute; at the same time, they cannot effectively determine BBA in the open world. In this paper, a novel method to determine BBA based on attribute weights is proposed, not only for the closed world but also for the open world. First, the Gaussian model of each attribute is built using the training samples. Second, the similarity between the test sample and the attribute model is measured based on the Gaussian membership functions. Then, the attribute weights are generated using the overlap degree among the classes. Finally, BBA is determined according to the sensed attribute weights. Several examples with small datasets show the validity of the proposed method. PMID:28358325
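The Gaussian-membership step of such a BBA construction can be sketched as follows; the class models, attribute values, and the simple normalization are hypothetical illustrations, not the paper's exact algorithm:

```python
import math

def gaussian_membership(x, mean, std):
    # similarity of a test value to an attribute's Gaussian class model
    return math.exp(-((x - mean) ** 2) / (2.0 * std ** 2))

# Hypothetical training summary: per class, one (mean, std) per attribute
models = {
    "A": [(1.0, 0.2), (5.0, 0.5)],
    "B": [(2.0, 0.3), (4.0, 0.4)],
}

def bba_from_sample(sample, models):
    # one BBA per attribute: memberships normalized into belief masses
    bbas = []
    for i, value in enumerate(sample):
        masses = {c: gaussian_membership(value, *params[i])
                  for c, params in models.items()}
        total = sum(masses.values())
        bbas.append({c: m / total for c, m in masses.items()})
    return bbas

bbas = bba_from_sample([1.1, 4.2], models)
```

Each attribute yields its own mass function, so attribute-level reliability weights (as proposed in the paper) could then discount each BBA before combination.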
A simple orbit-attitude coupled modelling method for large solar power satellites
NASA Astrophysics Data System (ADS)
Li, Qingjun; Wang, Bo; Deng, Zichen; Ouyang, Huajiang; Wei, Yi
2018-04-01
A simple modelling method is proposed to study the orbit-attitude coupled dynamics of large solar power satellites based on the natural coordinate formulation. The generalized coordinates are composed of the Cartesian coordinates of two points and the Cartesian components of two unit vectors instead of Euler angles and angular velocities, which is the reason for its simplicity. Firstly, in order to extend the natural coordinate formulation to take the gravitational force and gravity gradient torque of a rigid body into account, a Taylor series expansion is adopted to approximate the gravitational potential energy. The equations of motion are constructed through constrained Hamilton's equations. Then, an energy- and constraint-conserving algorithm is presented to solve the differential-algebraic equations. Finally, the proposed method is applied to simulate the orbit-attitude coupled dynamics and control of a large solar power satellite considering gravity gradient torque and solar radiation pressure. This method is also applicable to the dynamic modelling of other rigid multibody aerospace systems.
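The second-order term of the Taylor-expanded gravitational potential yields the standard gravity-gradient torque T = (3μ/|r|⁵) r × (J r). A minimal pure-Python sketch (the inertia matrix and position vector are hypothetical values, not from the paper):

```python
MU_EARTH = 3.986004418e14  # m^3/s^2, Earth's gravitational parameter

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def matvec(J, v):
    return tuple(sum(J[i][k] * v[k] for k in range(3)) for i in range(3))

def gravity_gradient_torque(J, r_vec, mu=MU_EARTH):
    # second-order term of the Taylor-expanded potential gives
    # T = (3*mu/|r|^5) * r x (J r)
    r2 = sum(c * c for c in r_vec)
    return tuple(3.0 * mu / r2 ** 2.5 * c
                 for c in cross(r_vec, matvec(J, r_vec)))

# hypothetical principal inertia matrix (kg m^2) and position vector (m)
J = [[8.0e3, 0.0, 0.0], [0.0, 6.0e3, 0.0], [0.0, 0.0, 4.0e3]]
T = gravity_gradient_torque(J, (4.9e6, 4.9e6, 0.0))
```

As expected, the torque vanishes when the position vector lies along a principal axis of inertia and is nonzero otherwise.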
Correlation energy extrapolation by many-body expansion
Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus; ...
2017-01-09
Accounting for electron correlation is required for high accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines a MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. Finally, the method consistently achieves agreement with CI calculations to within a few millihartree and often achieves agreement to within ~1 millihartree or less, while requiring significantly less computational resources.
A new parametric method to smooth time-series data of metabolites in metabolic networks.
Miyawaki, Atsuko; Sriyudthsak, Kansuporn; Hirai, Masami Yokota; Shiraishi, Fumihide
2016-12-01
Mathematical modeling of large-scale metabolic networks usually requires smoothing of metabolite time-series data to account for measurement or biological errors. Accordingly, the accuracy of smoothing curves strongly affects the subsequent estimation of model parameters. Here, an efficient parametric method is proposed for smoothing metabolite time-series data, and its performance is evaluated. To simplify parameter estimation, the method uses S-system-type equations with simple power law-type efflux terms. Iterative calculation using this method was found to readily converge, because parameters are estimated stepwise. Importantly, smoothing curves are determined so that metabolite concentrations satisfy mass balances. Furthermore, the slopes of smoothing curves are useful in estimating parameters, because they are probably close to their true behaviors regardless of errors that may be present in the actual data. Finally, calculations for each differential equation were found to converge in much less than one second if initial parameters are set at appropriate (guessed) values. Copyright © 2016 Elsevier Inc. All rights reserved.
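The power-law efflux form the method relies on can be illustrated with a toy one-metabolite balance dX/dt = k_in − a·X^g; the parameter values, noise level, and coarse grid search below are hypothetical stand-ins for the paper's stepwise estimation:

```python
import random

random.seed(0)

# Hypothetical one-metabolite mass balance with a power-law efflux term:
#   dX/dt = k_in - a * X**g
k_in, a_true, g_true = 1.0, 0.5, 1.2
dt, steps = 0.1, 50

def simulate(a, g, x0=0.1):
    # forward-Euler integration of the mass balance
    xs, x = [x0], x0
    for _ in range(steps):
        x += dt * (k_in - a * x ** g)
        xs.append(x)
    return xs

# noisy "measurements" standing in for metabolite time-series data
data = [x + random.gauss(0.0, 0.02) for x in simulate(a_true, g_true)]

# stepwise estimation is sketched here as a coarse grid search that keeps
# the smoothing curve consistent with the mass balance at every point
best = min(
    ((a, g) for a in (0.3, 0.4, 0.5, 0.6) for g in (1.0, 1.1, 1.2, 1.3)),
    key=lambda p: sum((s - d) ** 2 for s, d in zip(simulate(*p), data)),
)
```

Because the candidate curves are themselves solutions of the balance equation, their slopes approximate the true fluxes even when the raw data are noisy, which is the property the abstract highlights.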
Yu, Yun; Degnan, James H.; Nakhleh, Luay
2012-01-01
Gene tree topologies have proven a powerful data source for various tasks, including species tree inference and species delimitation. Consequently, methods for computing probabilities of gene trees within species trees have been developed and widely used in probabilistic inference frameworks. All these methods assume an underlying multispecies coalescent model. However, when reticulate evolutionary events such as hybridization occur, these methods are inadequate, as they do not account for such events. Methods that account for both hybridization and deep coalescence in computing the probability of a gene tree topology currently exist for very limited cases. However, no such methods exist for general cases, owing primarily to the fact that it is currently unknown how to compute the probability of a gene tree topology within the branches of a phylogenetic network. Here we present a novel method for computing the probability of gene tree topologies on phylogenetic networks and demonstrate its application to the inference of hybridization in the presence of incomplete lineage sorting. We reanalyze a Saccharomyces species data set for which multiple analyses had converged on a species tree candidate. Using our method, though, we show that an evolutionary hypothesis involving hybridization in this group has better support than one of strict divergence. A similar reanalysis on a group of three Drosophila species shows that the data is consistent with hybridization. Further, using extensive simulation studies, we demonstrate the power of gene tree topologies at obtaining accurate estimates of branch lengths and hybridization probabilities of a given phylogenetic network. Finally, we discuss identifiability issues with detecting hybridization, particularly in cases that involve extinction or incomplete sampling of taxa. PMID:22536161
Miaux, Sylvie; Drouin, Louis; Morency, Patrick; Paquin, Sophie; Gauvin, Lise; Jacquemin, Christophe
2010-11-01
The purpose of this article is to describe a novel approach for understanding the subjective experience of being a pedestrian in urban settings. In so doing, we take into account the "experience of the body in movement" as described in different theories and according to different methods, and develop a tool to allow citizens and urban planners to exchange ideas about how to make cities more walkable. Finally, we present the adaptation of the approach for use in public health and provide a rationale for its more widespread use in place and health research. Copyright © 2010 Elsevier Ltd. All rights reserved.
Text Content Pushing Technology Research Based on Location and Topic
NASA Astrophysics Data System (ADS)
Wei, Dongqi; Wei, Jianxin; Wumuti, Naheman; Jiang, Baode
2016-11-01
In the field, geological workers usually want to obtain related geological background information about the working area quickly and accurately. This information exists in massive geological data, in which text data described in natural language accounts for a large proportion. This paper studied a method for extracting location information from mass text data, proposed an algorithm relating geographic locations to geological content based on Spark and MapReduce 2, classified the content using KNN, and built a content pushing system based on location and topic. It is running in the geological survey cloud, and we obtained good results in tests using real geological data.
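The KNN classification step can be sketched in a few lines; the toy 2-D "document embeddings" and topic labels below are hypothetical, and the real system would of course classify features extracted from geological text:

```python
from collections import Counter

def knn_classify(train, query, k=3):
    # train: list of (feature_vector, label); Euclidean distance, majority vote
    neighbours = sorted(
        train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], query))
    )[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

# Hypothetical toy data: documents embedded as 2-D points, labelled by topic
train = [((0.0, 0.0), "geology"), ((0.1, 0.2), "geology"),
         ((0.2, 0.1), "geology"), ((0.9, 1.0), "hydrology"),
         ((1.0, 0.8), "hydrology")]
print(knn_classify(train, (0.05, 0.10)))  # → geology
```

In a pushing system, the query vector would come from the user's current location and topic profile, and the returned label selects which content to push.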
Lodenstein, Elsbet; Dieleman, Marjolein; Gerretsen, Barend; Broerse, Jacqueline Ew
2013-11-07
Accountability takes center stage in the current post-Millennium Development Goals (MDG) debate. One of the effective strategies for building equitable health systems and providing quality health services is the strengthening of citizen-driven or social accountability processes. The monitoring of actions and decisions of policymakers and providers by citizens is regarded as a right in itself but also as an alternative to weak administrative accountability mechanisms, in particular in settings with poor governance. The effects of social accountability interventions are often based on assumptions and are difficult to evaluate because of their complex nature and context sensitivity. This study aims to review and assess the available evidence for the effect of social accountability interventions on policymakers' and providers' responsiveness in countries with medium to low levels of governance capacity and quality. For policymakers and practitioners engaged in health system strengthening, social accountability initiatives and rights-based approaches to health, the findings of this review may help when reflecting on the assumptions and theories of change behind their policies and interventions. Little is known about social accountability interventions, their outcomes and the circumstances under which they produce outcomes for particular groups or issues. In this study, social accountability interventions are conceptualized as complex social interventions for which a realist synthesis is considered the most appropriate method of systematic review. The synthesis is based on a preliminary program theory of social accountability that will be tested through an iterative process of primary study searches, data extraction, analysis and synthesis. Published and non-published (grey) quantitative and qualitative studies in English, French and Spanish will be included. Quality and validity will be enhanced by continuous peer review and team reflection among the reviewers. 
The authors believe the advantages of a realist synthesis for social accountability lie in the possibility of overcoming disciplinary or paradigmatic boundaries often found in public health and development. In addition, they argue that this approach fills the knowledge gap left by conventional synthesis or evaluation exercises of participatory programs. Finally, the authors describe the practical strategies adopted to address methodological challenges and validity.
26 CFR 1.381(c)(4)-1 - Method of accounting.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 26 Internal Revenue 4 2012-04-01 2012-04-01 false Method of accounting. 1.381(c)(4)-1 Section 1... TAX (CONTINUED) INCOME TAXES (Continued) Insolvency Reorganizations § 1.381(c)(4)-1 Method of accounting. (a) Introduction—(1) Purpose. This section provides guidance regarding the method of accounting...
26 CFR 1.1502-17 - Methods of accounting.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 12 2010-04-01 2010-04-01 false Methods of accounting. 1.1502-17 Section 1.1502... (CONTINUED) INCOME TAXES Computation of Separate Taxable Income § 1.1502-17 Methods of accounting. (a) General rule. The method of accounting to be used by each member of the group shall be determined in...
31 CFR Appendix A to Part 357 - Discussion of Final Rule
Code of Federal Regulations, 2010 CFR
2010-07-01
... with the instructions given it, the Bureau intends to use its best efforts to assist investors in... as may be necessary. (b)(1) Information on deposit account at financial institution. The proposed... defeated if the recipient is not also named on the receiving deposit account. It is up to the investor to...
49 CFR 92.9 - Exceptions to notice, hearing, written response, and final decision.
Code of Federal Regulations, 2011 CFR
2011-10-01
... collection by notifying his or her accounting or finance officer; or (2) Due to a normal ministerial adjustment in pay or allowances which could not be placed into effect immediately, future pay will be reduced... accounting or finance officer. (c) Limitation on exceptions. The exceptions described in paragraph (a) of...
25 CFR 115.616 - What information will be included in BIA's final decision?
Code of Federal Regulations, 2012 CFR
2012-04-01
...? 115.616 Section 115.616 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR FINANCIAL ACTIVITIES TRUST FUNDS FOR TRIBES AND INDIVIDUAL INDIANS IIM Accounts: Hearing Process for Restricting an IIM... applicable; (d) Any provision to allow for distributions to the account holder because of an undue financial...
25 CFR 115.616 - What information will be included in BIA's final decision?
Code of Federal Regulations, 2013 CFR
2013-04-01
...? 115.616 Section 115.616 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR FINANCIAL ACTIVITIES TRUST FUNDS FOR TRIBES AND INDIVIDUAL INDIANS IIM Accounts: Hearing Process for Restricting an IIM... applicable; (d) Any provision to allow for distributions to the account holder because of an undue financial...
25 CFR 115.616 - What information will be included in BIA's final decision?
Code of Federal Regulations, 2010 CFR
2010-04-01
...? 115.616 Section 115.616 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR FINANCIAL ACTIVITIES TRUST FUNDS FOR TRIBES AND INDIVIDUAL INDIANS IIM Accounts: Hearing Process for Restricting an IIM... applicable; (d) Any provision to allow for distributions to the account holder because of an undue financial...
25 CFR 115.616 - What information will be included in BIA's final decision?
Code of Federal Regulations, 2014 CFR
2014-04-01
...? 115.616 Section 115.616 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR FINANCIAL ACTIVITIES TRUST FUNDS FOR TRIBES AND INDIVIDUAL INDIANS IIM Accounts: Hearing Process for Restricting an IIM... applicable; (d) Any provision to allow for distributions to the account holder because of an undue financial...
25 CFR 115.616 - What information will be included in BIA's final decision?
Code of Federal Regulations, 2011 CFR
2011-04-01
...? 115.616 Section 115.616 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR FINANCIAL ACTIVITIES TRUST FUNDS FOR TRIBES AND INDIVIDUAL INDIANS IIM Accounts: Hearing Process for Restricting an IIM... applicable; (d) Any provision to allow for distributions to the account holder because of an undue financial...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-03
...' association concerning the definition of core deposits, which was not an element of the OTS's October 5, 2010.... generally accepted accounting principles (GAAP). The OTS received comments from a bankers' association on... breakdowns by loan category, until the FASB finalizes proposed clarifications to its standards for accounting...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-03
... the status quo. The action is expected to maximize the profitability for the spiny dogfish fishery... possible commercial quotas by not making a deduction from the ACL accounting for management uncertainty...) in 2015; however, not accounting for management uncertainty would have increased the risk of...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-09
... DEPARTMENT OF COMMERCE Bureau of Industry and Security 15 CFR Part 748 [Docket No. 100826397-1059-02] RIN 0694-AE98 Simplified Network Application Processing System, On-line Registration and Account Maintenance AGENCY: Bureau of Industry and Security, Commerce. ACTION: Final rule. SUMMARY: The Bureau of...
75 FR 33671 - Proposed Collection; Comment Request for Regulation Project
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-14
... Reduction Act of 1995, Public Law 104-13 (44 U.S.C. 3506(c)(2)(A)). Currently, the IRS is soliciting comments concerning an existing final regulation, REG-106917-99 (TD 8996), Changes in Accounting Periods... INFORMATION: Title: Changes in Accounting Periods. OMB Number: 1545-1748. Regulation Project Number: REG...
Hierarchical Ensemble Methods for Protein Function Prediction
2014-01-01
Protein function prediction is a complex multiclass, multilabel classification problem, characterized by multiple issues such as the incompleteness of the available annotations, the integration of multiple sources of high-dimensional biomolecular data, the imbalance of several functional classes, and the difficulty of univocally determining negative examples. Moreover, the hierarchical relationships between functional classes that characterize both the Gene Ontology and FunCat taxonomies motivate the development of hierarchy-aware prediction methods, which have shown significantly better performance than hierarchy-unaware “flat” prediction methods. In this paper, we provide a comprehensive review of hierarchical methods for protein function prediction based on ensembles of learning machines. According to this general approach, a separate learning machine is trained to learn a specific functional term, and the resulting predictions are then assembled into a “consensus” ensemble decision, taking into account the hierarchical relationships between classes. The main hierarchical ensemble methods proposed in the literature are discussed in the context of existing computational methods for protein function prediction, highlighting their characteristics, advantages, and limitations. Open problems of this exciting research area of computational biology are finally considered, outlining novel perspectives for future research. PMID:25937954
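One simple hierarchy-aware step used by such ensembles is a top-down consistency correction: a child term's score is clipped so it never exceeds its ancestors' (the "true path rule"). A sketch with a hypothetical three-term hierarchy (this is one common variant, not the only method reviewed):

```python
def consistent_scores(scores, parent):
    """Clip each node's score by its ancestors' so predictions respect the
    hierarchy (child score <= parent score). `parent` maps each node to its
    parent; root nodes are absent from the map."""
    memo = {}
    def adj(node):
        if node not in memo:
            p = parent.get(node)
            memo[node] = scores[node] if p is None else min(scores[node], adj(p))
        return memo[node]
    return {node: adj(node) for node in scores}

# Flat per-term predictions that violate the hierarchy: "binding"
# outscores its parent, so it gets clipped to the parent's score.
flat = {"root": 0.9, "binding": 0.95, "dna_binding": 0.3}
tree = {"binding": "root", "dna_binding": "binding"}
print(consistent_scores(flat, tree))
```

Bottom-up variants instead propagate evidence from confident leaves toward the root; both directions are discussed in the hierarchical-ensemble literature.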
NASA Astrophysics Data System (ADS)
Ito, Shin-Ichi; Mitsukura, Yasue; Nakamura Miyamura, Hiroko; Saito, Takafumi; Fukumi, Minoru
EEG signals are characterized by unique, individual characteristics, yet little research has taken these into account when analyzing EEG. The EEG often has frequency components that describe most of its significant characteristics, and these components differ in importance; we consider this difference in importance to reflect the individual characteristics. In this paper, we propose a new method for extracting an EEG characteristic vector using a latency structure model with individual characteristics (LSMIC). The LSMIC is a latency structure model, based on the normal distribution, in which personal error represents the individual characteristics. Real-coded genetic algorithms (RGA) are used to estimate the personal error, which is an unknown parameter. Moreover, we propose an objective estimation method that plots the EEG characteristic vector in a visualization space. Finally, the performance of the proposed method is evaluated using a realistic simulation and applied to real EEG data. The experimental results show the effectiveness of the proposed method.
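As an illustration of the parameter-estimation step, a minimal real-coded GA of the kind the abstract mentions can be sketched as follows; the operators (truncation selection, blend crossover, Gaussian mutation) and the toy objective are illustrative assumptions, not the paper's LSMIC formulation:

```python
import numpy as np

def rga_estimate(objective, bounds, pop=40, gens=60, seed=0):
    """Minimal real-coded GA estimating an unknown scalar parameter
    by minimizing `objective` over the interval `bounds`."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, pop)
    for _ in range(gens):
        fit = np.array([objective(v) for v in x])
        parents = x[np.argsort(fit)][: pop // 2]     # truncation selection
        a = rng.choice(parents, pop)
        b = rng.choice(parents, pop)
        w = rng.uniform(-0.25, 1.25, pop)            # BLX-style blend crossover
        x = np.clip(a + w * (b - a), lo, hi)
        x += rng.normal(0.0, 0.02 * (hi - lo), pop)  # Gaussian mutation
        x = np.clip(x, lo, hi)
        x[0] = parents[0]                            # elitism: keep the best
    fit = np.array([objective(v) for v in x])
    return x[np.argmin(fit)]

# toy use: recover the mean of noisy observations by GA search
rng = np.random.default_rng(1)
data = rng.normal(3.0, 0.5, 200)
best = rga_estimate(lambda m: np.sum((data - m) ** 2), (0.0, 10.0))
```

In the paper's setting the objective would instead score how well the latency structure model with a candidate personal error reproduces the observed EEG.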
Manore, Carrie A; Hickmann, Kyle S; Hyman, James M; Foppa, Ivo M; Davis, Justin K; Wesson, Dawn M; Mores, Christopher N
2015-01-01
Mosquito-borne diseases cause a significant public health burden and are widely re-emerging or emerging. Understanding, predicting, and mitigating the spread of mosquito-borne disease in diverse populations and geographies are ongoing modelling challenges. We propose a hybrid network-patch model for the spread of mosquito-borne pathogens that accounts for individual movement through mosquito habitats, extending the capabilities of existing agent-based models (ABMs) to include vector-borne diseases. The ABM is coupled with differential equations representing 'clouds' of mosquitoes in patches, accounting for mosquito ecology. We adapted an ABM for humans using this method and investigated the importance of heterogeneity in pathogen spread, motivating the utility of models of individual behaviour. We observed that the final epidemic size is greater in patch models with a frequently visited high-risk patch than in a homogeneous model. Our hybrid model quantifies the importance of heterogeneity in the spread of mosquito-borne pathogens, guiding mitigation strategies.
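The 'cloud of mosquitoes' coupling can be illustrated with a minimal single-patch ODE sketch (SIR hosts, SI mosquitoes, Euler stepping); all parameter names and values here are illustrative assumptions, not those of the paper:

```python
def simulate_patch(beta_hm=0.3, beta_mh=0.3, gamma=0.1, mu=0.1,
                   days=200, dt=0.1, N_h=1000.0, N_m=5000.0):
    """Single patch: SIR hosts coupled to an SI mosquito 'cloud'.
    beta_hm/beta_mh are human->mosquito / mosquito->human contact rates,
    gamma the host recovery rate, mu the mosquito turnover rate."""
    S_h, I_h, R_h = N_h - 1.0, 1.0, 0.0
    S_m, I_m = N_m, 0.0
    for _ in range(int(days / dt)):
        lam_h = beta_mh * I_m / N_m          # force of infection on hosts
        lam_m = beta_hm * I_h / N_h          # force of infection on vectors
        dS_h = -lam_h * S_h
        dI_h = lam_h * S_h - gamma * I_h
        dR_h = gamma * I_h
        dS_m = mu * N_m - lam_m * S_m - mu * S_m   # births balance deaths
        dI_m = lam_m * S_m - mu * I_m
        S_h += dt * dS_h; I_h += dt * dI_h; R_h += dt * dR_h
        S_m += dt * dS_m; I_m += dt * dI_m
    return R_h  # final epidemic size among hosts

final_size = simulate_patch()
```

In the hybrid model, each agent's movement decides which patch's mosquito cloud it is exposed to at a given time.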
A Novel DEM Approach to Simulate Block Propagation on Forested Slopes
NASA Astrophysics Data System (ADS)
Toe, David; Bourrier, Franck; Dorren, Luuk; Berger, Frédéric
2018-03-01
In order to model rockfall on forested slopes, we developed a trajectory rockfall model based on the discrete element method (DEM). This model is able to take into account the complex mechanical processes at work during an impact (large deformations, complex contact conditions) and can explicitly simulate block/soil and block/tree contacts as well as contacts between neighbouring trees. In this paper, we describe the DEM model developed and use it to assess the protective effect of different types of forest; we also compare it with a more classical rockfall simulation model. The results highlight that forests can significantly reduce rockfall hazard and that the spatial structure of coppice forests has to be taken into account in rockfall simulations in order to avoid overestimating the protective role of these forest structures against rockfall hazard. In addition, the protective role of the forests is mainly influenced by the basal area. Finally, the advantages and limitations of the DEM model are compared with classical rockfall modelling approaches.
26 CFR 1.448-1 - Limitation on the use of the cash receipts and disbursements method of accounting.
Code of Federal Regulations, 2014 CFR
2014-04-01
... disbursements method of accounting. 1.448-1 Section 1.448-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Methods of Accounting § 1.448-1 Limitation on the use of the cash receipts and disbursements method of accounting. (a)-(f...
26 CFR 1.448-1 - Limitation on the use of the cash receipts and disbursements method of accounting.
Code of Federal Regulations, 2011 CFR
2011-04-01
... disbursements method of accounting. 1.448-1 Section 1.448-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Methods of Accounting § 1.448-1 Limitation on the use of the cash receipts and disbursements method of accounting. (a)-(f...
26 CFR 1.448-1 - Limitation on the use of the cash receipts and disbursements method of accounting.
Code of Federal Regulations, 2012 CFR
2012-04-01
... disbursements method of accounting. 1.448-1 Section 1.448-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Methods of Accounting § 1.448-1 Limitation on the use of the cash receipts and disbursements method of accounting. (a)-(f...
26 CFR 1.448-1 - Limitation on the use of the cash receipts and disbursements method of accounting.
Code of Federal Regulations, 2013 CFR
2013-04-01
... disbursements method of accounting. 1.448-1 Section 1.448-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Methods of Accounting § 1.448-1 Limitation on the use of the cash receipts and disbursements method of accounting. (a)-(f...
26 CFR 1.381(c)(4)-1 - Method of accounting.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 4 2010-04-01 2010-04-01 false Method of accounting. 1.381(c)(4)-1 Section 1... TAX (CONTINUED) INCOME TAXES Insolvency Reorganizations § 1.381(c)(4)-1 Method of accounting. (a... section 381(a) applies, an acquiring corporation shall use the same method of accounting used by the...
26 CFR 1.985-4 - Method of accounting.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 10 2010-04-01 2010-04-01 false Method of accounting. 1.985-4 Section 1.985-4...) INCOME TAXES Export Trade Corporations § 1.985-4 Method of accounting. (a) Adoption of election. The adoption of, or the election to use, a functional currency shall be treated as a method of accounting. The...
Hysteretic behavior using the explicit material point method
NASA Astrophysics Data System (ADS)
Sofianos, Christos D.; Koumousis, Vlasis K.
2018-05-01
The material point method (MPM) is an advancement of the particle-in-cell method, in which Lagrangian bodies are discretized by a number of material points that hold all the properties and the state of the material. All internal variables (stress, strain, velocity, etc.) that specify the current state and are required to advance the solution are stored in the material points. A background grid is employed to solve the governing equations by interpolating the material point data to the grid. The derived momentum conservation equations are solved at the grid nodes, information is transferred back to the material points, and the background grid is reset, ready to handle the next iteration. In this work, the standard explicit MPM is extended to account for smooth elastoplastic material behavior with mixed isotropic and kinematic hardening and with stiffness and strength degradation. The strains are decomposed into an elastic and an inelastic part according to the strain decomposition rule. To account for the different phases during elastic loading or unloading and to smooth the transition from the elastic to the inelastic regime, two Heaviside-type functions are introduced. These act as switches and incorporate the yield function and the hardening laws to control the whole cyclic behavior. A single expression is thus established for the plastic multiplier over the whole range of stresses. This obviates the need for a piecewise approach and a demanding bookkeeping mechanism, especially when multilinear models that account for stiffness and strength degradation are concerned. The final form of the constitutive stress rate-strain rate relation incorporates the tangent modulus of elasticity, which now includes the Heaviside functions and gathers all the governing behavior, considerably facilitating the simulation of nonlinear response in the MPM framework.
Numerical results are presented that validate the proposed formulation in the context of the MPM, in comparison with finite element and experimental results.
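The switch idea can be illustrated in one dimension: a minimal sketch (not the authors' exact formulation) in which two Heaviside-type switches, "yield surface reached" and "loading, not unloading", fold the elastic and plastic branches of a kinematic-hardening model into a single incremental update:

```python
import numpy as np

def stress_history(strains, E=200e3, H=20e3, sigma_y=250.0):
    """1D elastoplastic update with linear kinematic hardening (MPa).
    h = heav(f) * heav(n * deps) is the combined switch: it is 1 only
    when the stress point is on the yield surface AND the step loads
    further in the flow direction, giving one tangent expression."""
    heav = lambda v: 1.0 if v > 0.0 else 0.0
    sigma, alpha, eps_prev = 0.0, 0.0, 0.0
    out = []
    for eps in strains:
        deps = eps - eps_prev
        xi = sigma - alpha                          # relative stress
        n = np.sign(xi) if xi != 0.0 else np.sign(deps)
        f = abs(xi) - sigma_y                       # yield function
        h = heav(f + 1e-9) * heav(n * deps)         # combined switch
        deps_p = h * (E / (E + H)) * deps           # plastic strain increment
        sigma += E * (deps - deps_p)                # elastic stress update
        alpha += H * deps_p                         # back-stress (kinematic)
        out.append(sigma)
        eps_prev = eps
    return np.array(out)

# monotonic tension past yield, then elastic unloading
eps_path = np.concatenate([np.linspace(0.0, 0.01, 2001),
                           np.linspace(0.01, 0.008, 401)])
sig = stress_history(eps_path)
```

With h = 0 the tangent is E; with h = 1 it is the elastoplastic modulus EH/(E+H), so a single formula covers all branches of the cycle.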
A Hybrid Soft-computing Method for Image Analysis of Digital Plantar Scanners.
Razjouyan, Javad; Khayat, Omid; Siahi, Mehdi; Mansouri, Ali Alizadeh
2013-01-01
Digital foot scanners have been developed in recent years to provide anthropometrists with a digital image of the insole together with pressure distribution and anthropometric information. In this paper, a hybrid algorithm combining the gray level spatial correlation (GLSC) histogram and Shanbag entropy is presented for the analysis of scanned foot images. An evolutionary algorithm is also employed to find the optimum parameters of the GLSC and the transform function of the membership values. The resulting thresholded binary images then undergo anthropometric measurements, taking into account the scale factor from pixel size to metric scale. The proposed method is finally applied to plantar images obtained by scanning the feet of randomly selected subjects with a foot scanner system, the experimental setup described in the paper. Running computation time and the effects of the GLSC parameters are investigated in the simulation results.
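As a rough illustration of entropy-based thresholding, the following sketch uses Kapur's Shannon-entropy criterion on a plain histogram as a simplified stand-in for the paper's GLSC/Shanbag formulation (no evolutionary optimization of parameters):

```python
import numpy as np

def entropy_threshold(image, levels=256):
    """Pick the gray-level threshold maximizing the sum of Shannon
    entropies of the two classes (Kapur's criterion)."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    best_t, best_score = 0, -np.inf
    for t in range(1, levels - 1):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        p0, p1 = p[:t] / w0, p[t:] / w1
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))  # class-0 entropy
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))  # class-1 entropy
        if h0 + h1 > best_score:
            best_score, best_t = h0 + h1, t
    return best_t

# synthetic bimodal "image": dark insole region vs bright background
rng = np.random.default_rng(0)
img = np.clip(np.concatenate([rng.normal(60, 5, 5000),
                              rng.normal(180, 5, 5000)]), 0, 255)
t = entropy_threshold(img)
```

The GLSC variant additionally weights each pixel by the gray-level similarity of its neighbourhood before building the histogram.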
Spiegelhalter, D J; Freedman, L S
1986-01-01
The 'textbook' approach to determining sample size in a clinical trial has some fundamental weaknesses which we discuss. We describe a new predictive method which takes account of prior clinical opinion about the treatment difference. The method adopts the point of clinical equivalence (determined by interviewing the clinical participants) as the null hypothesis. Decision rules at the end of the study are based on whether the interval estimate of the treatment difference (classical or Bayesian) includes the null hypothesis. The prior distribution is used to predict the probabilities of making the decisions to use one or other treatment or to reserve final judgement. It is recommended that sample size be chosen to control the predicted probability of the last of these decisions. An example is given from a multi-centre trial of superficial bladder cancer.
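The predictive calculation can be sketched by simulation: draw the treatment difference from the prior, generate trial data, and classify the resulting interval estimate into one of the three decisions (all distributional choices below are illustrative assumptions, not the authors' exact formulation):

```python
import numpy as np

def predicted_decision_probs(n_per_arm, delta_null=0.0,
                             prior_mean=0.5, prior_sd=0.3,
                             sigma=1.0, n_sim=100_000, seed=1):
    """Prior-predictive probabilities of the three end-of-trial
    decisions: favour the new treatment, favour the control, or
    reserve judgement (95% CI straddles the null value, here taken
    as the point of clinical equivalence)."""
    rng = np.random.default_rng(seed)
    delta = rng.normal(prior_mean, prior_sd, n_sim)  # prior draws
    se = sigma * np.sqrt(2.0 / n_per_arm)            # SE of the difference
    obs = rng.normal(delta, se)                      # predictive estimate
    lo, hi = obs - 1.96 * se, obs + 1.96 * se
    p_new = np.mean(lo > delta_null)
    p_ctl = np.mean(hi < delta_null)
    return p_new, p_ctl, 1.0 - p_new - p_ctl
```

Sample size would then be chosen to keep the third probability (reserving final judgement) acceptably small.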
Santarossa, Sara; Coyne, Paige; Lisinski, Carly; Woodruff, Sarah J
2016-11-01
The #fitspo 'tag' is a recent trend on Instagram, which is used on posts to motivate others towards a healthy lifestyle through exercise/eating habits. This study used a mixed-methods approach consisting of text and network analysis via the Netlytic program ( N = 10,000 #fitspo posts), and content analysis of #fitspo images ( N = 122) was used to examine author and image characteristics. Results suggest that #fitspo posts may motivate through appearance-mediated themes, as the largest content categories (based on the associated text) were 'feeling good' and 'appearance'. Furthermore, #fitspo posts may create peer influence/support as personal (opposed to non-personal) accounts were associated with higher popularity of images (i.e. number of likes/followers). Finally, most images contained posed individuals with some degree of objectification.
Inferring Time-Varying Network Topologies from Gene Expression Data
2007-01-01
Most current methods for gene regulatory network identification lead to the inference of steady-state networks, that is, networks prevalent over all times, a hypothesis which has been challenged. There has been a need to infer and represent networks in a dynamic, that is, time-varying fashion, in order to account for different cellular states affecting the interactions amongst genes. In this work, we present an approach, regime-SSM, to understand gene regulatory networks within such a dynamic setting. The approach uses a clustering method based on these underlying dynamics, followed by system identification using a state-space model for each learnt cluster—to infer a network adjacency matrix. We finally indicate our results on the mouse embryonic kidney dataset as well as the T-cell activation-based expression dataset and demonstrate conformity with reported experimental evidence. PMID:18309363
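The second stage (fitting a linear state-space model per regime and reading off an adjacency matrix) can be sketched as follows; for simplicity the regime boundary is assumed known here, whereas regime-SSM would discover it by clustering the dynamics:

```python
import numpy as np

def fit_network(X):
    """Least-squares fit of x_{t+1} = A x_t; A acts as a weighted
    adjacency matrix of gene-gene influences for that regime."""
    X0, X1 = X[:-1], X[1:]
    M, *_ = np.linalg.lstsq(X0, X1, rcond=None)  # X0 @ M ~ X1, M = A.T
    return M.T

rng = np.random.default_rng(0)
A1 = np.array([[0.9, 0.3], [0.0, 0.8]])   # regime-1 dynamics (ground truth)
A2 = np.array([[0.8, 0.0], [0.4, 0.9]])   # regime-2 dynamics (ground truth)

def simulate(A, steps, x0):
    xs = [x0]
    for _ in range(steps):
        xs.append(A @ xs[-1] + 0.01 * rng.normal(size=2))
    return np.array(xs)

X_a = simulate(A1, 200, np.ones(2))
X_b = simulate(A2, 200, X_a[-1])          # boundary assumed known here
A1_hat = fit_network(X_a)
A2_hat = fit_network(X_b)
```

Thresholding the magnitudes of the recovered matrices would yield the time-varying network topology.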
Inferring time-varying network topologies from gene expression data.
Rao, Arvind; Hero, Alfred O; States, David J; Engel, James Douglas
2007-01-01
NASA Technical Reports Server (NTRS)
Stock, Thomas A.
1995-01-01
Probabilistic composite micromechanics methods are developed that simulate expected uncertainties in unidirectional fiber composite properties. These methods are in the form of computational procedures using Monte Carlo simulation. The variables in which uncertainties are accounted for include constituent and void volume ratios, constituent elastic properties and strengths, and fiber misalignment. A graphite/epoxy unidirectional composite (ply) is studied to demonstrate fiber composite material property variations induced by random changes expected at the material micro level. Regression results are presented to show the relative correlation between predictor and response variables in the study. These computational procedures make possible a formal description of anticipated random processes at the intraply level, and the related effects of these on composite properties.
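The flavor of such a Monte Carlo procedure can be sketched with the longitudinal rule of mixtures as the micromechanics relation; the input distributions below are illustrative, not the report's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
Vf = np.clip(rng.normal(0.60, 0.02, n), 0.0, 1.0)  # fiber volume ratio
Ef = rng.normal(230e9, 10e9, n)                    # fiber modulus, Pa
Em = rng.normal(3.5e9, 0.2e9, n)                   # matrix modulus, Pa

# rule of mixtures for the longitudinal ply modulus
E1 = Vf * Ef + (1.0 - Vf) * Em

cv = E1.std() / E1.mean()            # induced scatter in the ply property
r_vf = np.corrcoef(Vf, E1)[0, 1]     # predictor/response correlation
```

The correlation coefficient plays the role of the report's regression results, showing how much of the response scatter each constituent variable drives.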
Computer-assisted uncertainty assessment of k0-NAA measurement results
NASA Astrophysics Data System (ADS)
Bučar, T.; Smodiš, B.
2008-10-01
In quantifying the measurement uncertainty of results obtained by the k0-based neutron activation analysis (k0-NAA), a number of parameters should be considered and appropriately combined in deriving the final budget. To facilitate this process, a program ERON (ERror propagatiON) was developed, which computes uncertainty propagation factors from the relevant formulae and calculates the combined uncertainty. The program calculates the uncertainty of the final result—the mass fraction of an element in the measured sample—taking into account the relevant neutron flux parameters such as α and f, including their uncertainties. Nuclear parameters and their uncertainties are taken from the IUPAC database (V.P. Kolotov and F. De Corte, Compilation of k0 and related data for NAA). Furthermore, the program allows for uncertainty calculations of the measured parameters needed in k0-NAA: α (determined with either the Cd-ratio or the Cd-covered multi-monitor method), f (using the Cd-ratio or the bare method), Q0 (using the Cd-ratio or internal comparator method) and k0 (using the Cd-ratio, internal comparator or the Cd subtraction method). The results of calculations can be printed or exported to text or MS Excel format for further analysis. Special care was taken to make the calculation engine portable by allowing its incorporation into other applications (e.g., DLL and WWW server). The theoretical basis and the program are described in detail, and typical results obtained under real measurement conditions are presented.
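A generic sketch of how such a combined uncertainty is assembled (GUM-style first-order propagation with numerical sensitivities, uncorrelated inputs assumed; not ERON's actual formulae):

```python
import math

def combined_uncertainty(f, x, u, h=1e-6):
    """First-order propagation u_c^2 = sum_i (df/dx_i)^2 * u_i^2,
    with sensitivity coefficients from central finite differences."""
    uc2 = 0.0
    for i, (xi, ui) in enumerate(zip(x, u)):
        step = h * max(abs(xi), 1.0)
        xp, xm = list(x), list(x)
        xp[i] = xi + step
        xm[i] = xi - step
        dfdx = (f(xp) - f(xm)) / (2.0 * step)  # numerical sensitivity
        uc2 += (dfdx * ui) ** 2
    return math.sqrt(uc2)

# e.g. a result of the form y = a*b/c, as for a mass fraction built
# from count rates and k0 factors (numbers are made up)
uc = combined_uncertainty(lambda v: v[0] * v[1] / v[2],
                          [2.0, 3.0, 4.0], [0.02, 0.03, 0.04])
```

For such a product/quotient model this reproduces the familiar rule that relative uncertainties add in quadrature.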
Self-efficacy is independently associated with brain volume in older women.
Davis, Jennifer C; Nagamatsu, Lindsay S; Hsu, Chun Liang; Beattie, B Lynn; Liu-Ambrose, Teresa
2012-07-01
Ageing is highly associated with neurodegeneration and atrophy of the brain. Evidence suggests that personality variables are risk factors for reduced brain volume. We examine whether falls-related self-efficacy is independently associated with brain volume. This is a cross-sectional analysis of whether falls-related self-efficacy is independently associated with brain volumes (total, grey and white matter). Three multivariate regression models were constructed. Covariates included in the models were age, global cognition, systolic blood pressure, functional comorbidity index and current physical activity level. MRI scans were acquired from 79 community-dwelling senior women aged 65-75 years old. Falls-related self-efficacy was assessed by the Activities-specific Balance Confidence (ABC) scale. After accounting for covariates, falls-related self-efficacy was independently associated with both total brain volume and total grey matter volume. The final model for total brain volume accounted for 17% of the variance, with the ABC score accounting for 8%. For total grey matter volume, the final model accounted for 24% of the variance, with the ABC score accounting for 10%. We provide novel evidence that falls-related self-efficacy, a modifiable risk factor for healthy ageing, is positively associated with total brain volume and total grey matter volume. ClinicalTrials.gov Identifier: NCT00426881.
THE DEVELOPMENT AND PRESENTATION OF FOUR COLLEGE COURSES BY COMPUTER TELEPROCESSING. FINAL REPORT.
ERIC Educational Resources Information Center
MITZEL, HAROLD E.
This is a final report on the development and presentation of four college courses by computer teleprocessing from April 1964 to June 1967. It outlines the progress made towards the preparation, development, and evaluation of materials for computer presentation of courses in audiology, management accounting, engineering economics, and modern…
1986-04-01
In this final rule we are adopting an apportionment methodology for determining reasonable cost reimbursement for hospital malpractice insurance costs. The new apportionment policy for hospitals will divide total malpractice insurance premium cost into two components. The "administrative component," which accounts for 8.5 percent of total premium cost, will be included in the General and Administrative cost center and will be apportioned on the basis of the individual hospital's Medicare utilization rate. The "risk component," which comprises 91.5 percent of total cost, will be apportioned on the basis of a formula that takes into account the individual hospital's utilization as well as the national Medicare patient utilization rate and the national Medicare malpractice loss ratio (as adjusted to account for associated claims handling costs). Effectively, the "scaling factor formula" will relate the national utilization rate to the adjusted national loss ratio. As a hospital's own utilization rate exceeds or falls below the national utilization rate, the risk component will be reimbursed on the basis of a "scaling factor" that is more or less than the national Medicare malpractice loss ratio. Different apportionment policies are being adopted for Medicare skilled nursing facilities and for providers of services under the Medicaid and Maternal and Child Health programs. This final rule replaces our current apportionment policy for reimbursement of malpractice insurance costs and is applicable, subject to the rules of reopening and administrative finality, to cost reporting periods beginning on or after July 1, 1979.
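The two-component split can be illustrated with hedged arithmetic; the proportional form of the scaling factor below is a simplifying assumption, not the rule's exact formula, and the national figures are made up:

```python
def malpractice_reimbursement(premium, hosp_util,
                              natl_util=0.33, natl_loss_ratio=0.40):
    """Illustrative arithmetic only. The premium is split 8.5%/91.5%;
    the administrative part follows the hospital's own Medicare
    utilization rate, while the risk part is paid at a scaling factor
    that rises above (or falls below) the national loss ratio as the
    hospital's utilization exceeds (or falls below) the national rate."""
    admin = 0.085 * premium * hosp_util                 # administrative component
    scaling = natl_loss_ratio * (hosp_util / natl_util)  # assumed proportional form
    risk = 0.915 * premium * scaling                     # risk component
    return admin + risk
```

At a utilization rate equal to the national rate, the risk component is reimbursed at exactly the national loss ratio, as the rule describes.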
Rana, Gianfranco; Katerji, Nader; Mastrorilli, Marcello
2012-10-01
The present study describes an operational method, based on the Katerji et al. (Eur J Agron 33:218-230, 2010) model, for determining the daily evapotranspiration (ET) of soybean inside open top chambers (OTCs). It includes two functions, calculated day by day, that make it possible to take into account separately the effects of air ozone concentration and plant water stress. The latter function was calibrated as a function of the daily values of the actual water reserve in the soil. The input variables of the method are (a) the diurnal values of global radiation and temperature, usually measured routinely in a standard weather station; (b) the daily values of the accumulated AOT40 index (accumulated ozone over a threshold of 40 ppb during daylight hours, when global radiation exceeds 50 Wm(-2)) determined inside the OTC; and (c) the actual water reserve in the soil at the beginning of the trial. The ensemble of these input variables can be automated; thus, the proposed method could be applied routinely. The ability of the method to take into account contrasting conditions of ozone air concentration and water stress was evaluated over three successive years, for 513 days, in ten crop growth cycles, excluding the days used to calibrate the method. Tests were carried out in several chambers each year and take into account the intra- and inter-year variability of ET measured inside the OTCs. On the daily scale, the slopes of the linear regression between the ET measured by the soil water balance and that calculated by the proposed method, under different water conditions, are 0.98 and 1.05 for the filtered and unfiltered (or enriched) OTCs, with root mean square errors (RMSE) equal to 0.77 and 1.07 mm, respectively. On the seasonal scale, the mean difference between measured and calculated ET is equal to +5% and +11% for the filtered and unfiltered OTCs, respectively.
The ability of the proposed method to estimate the daily and seasonal ET inside the OTCs is therefore satisfactory under inter- and intra-annual tests. Finally, applications of the proposed method to species other than soybean are also discussed.
26 CFR 1.448-1 - Limitation on the use of the cash receipts and disbursements method of accounting.
Code of Federal Regulations, 2010 CFR
2010-04-01
... disbursements method of accounting. 1.448-1 Section 1.448-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Methods of Accounting § 1.448-1 Limitation on the use of the cash receipts and disbursements method of accounting. (a)-(f) [Reserved] (g...
Social network utilization (Facebook) & e-Professionalism among medical students
Jawaid, Masood; Khan, Muhammad Hassaan; Bhutto, Shahzadi Nisar
2015-01-01
Objective: To find out the frequency and contents of online social networking (Facebook) among medical students of Dow University of Health Sciences. Methods: The sample of the study comprised final-year students of two medical colleges of Dow University of Health Sciences, Karachi. A systematic search for the Facebook profiles of the students was carried out with a new Facebook account. In the initial phase of the search, it was determined whether each student had a Facebook account, and the status of the account as ‘‘private’’, ‘‘intermediate’’ or ‘‘public’’ was also sought. In the second phase of the study, objective information including gender, education, personal views, likes, tagged pictures, etc. was recorded for the publicly available accounts. An in-depth qualitative content analysis of the public profiles of ten medical students, selected randomly using a random number generator, was conducted. Results: Social networking on Facebook is common among medical students, with 66.9% of a total of 535 students having an account. About one-fifth of profiles (18.9%) were publicly open, 36.6% were private and 56.9% were identified as having an intermediate privacy setting, with customized settings for the profile information. In-depth analysis of some public profiles showed that potentially unprofessional material, mostly related to violence and politics, was posted by medical students. Conclusion: The usage of the social network (Facebook) is very common among students of the university. Some unprofessional posts were also found on students’ profiles, mostly related to violence and politics. PMID:25878645
Exploring accountability of clinical ethics consultants: practice and training implications.
Weise, Kathryn L; Daly, Barbara J
2014-01-01
Clinical ethics consultants represent a multidisciplinary group of scholars and practitioners with varied training backgrounds, who are integrated into a medical environment to assist in the provision of ethically supportable care. Little has been written about the degree to which such consultants are accountable for the patient care outcome of the advice given. We propose a model for examining degrees of internally motivated accountability that range from restricted to unbounded accountability, and support balanced accountability as a goal for practice. Finally, we explore implications of this model for training of clinical ethics consultants from diverse academic backgrounds, including those disciplines that do not have a formal code of ethics relating to clinical practice.
Rahmani, Azam; Merghati-Khoei, Effat; Moghadam-Banaem, Lida; Hajizadeh, Ebrahim; Hamdieh, Mostafa; Montazeri, Ali
2014-06-13
Premarital sexual behaviors are an important issue for women's health. The present study was designed to develop and examine the psychometric properties of a scale to identify young women who are at greater risk of premarital sexual behavior. This was an exploratory mixed-methods investigation conducted in two phases. In the first phase, qualitative methods (focus group discussions and individual interviews) were applied to generate items and develop the questionnaire. In the second phase, the psychometric properties (validity and reliability) of the questionnaire were assessed. In the first phase, an item pool containing 53 statements related to premarital sexual behavior was generated. In the second phase, item reduction was applied and the final version of the questionnaire, containing 26 items, was developed. The psychometric properties of this final version were assessed, and the results showed that the instrument has good structure and reliability. The results from exploratory factor analysis indicated a 5-factor solution for the instrument that jointly accounted for 57.4% of the observed variance. The Cronbach's alpha coefficient for the instrument was found to be 0.87. This study provided a valid and reliable scale to identify premarital sexual behavior in young women. Assessment of premarital sexual behavior might help to improve women's sexual abstinence.
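The reported reliability statistic can be computed directly; a minimal sketch of Cronbach's alpha on a synthetic item matrix (the data here are illustrative, not the study's):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x k_items) matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# 8 items driven by one latent trait -> high internal consistency
rng = np.random.default_rng(0)
items = rng.normal(size=(500, 1)) + 0.5 * rng.normal(size=(500, 8))
alpha = cronbach_alpha(items)
```

Items that share a common underlying construct yield an alpha near 1, while unrelated items yield an alpha near 0.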
NASA Astrophysics Data System (ADS)
Vandromme, Rosalie; Thiéry, Yannick; Sedan, Olivier; Bernardie, Séverine
2016-04-01
Landslide hazard assessment is the estimation of a target area where landslides of a particular type, volume, runout and intensity may occur within a given period. The first step in analyzing landslide hazard consists in assessing the spatial and temporal failure probability (when the information is available, i.e. susceptibility assessment). Two types of approach are generally recommended to achieve this goal: (i) qualitative approaches (i.e. inventory-based methods and knowledge-driven methods) and (ii) quantitative approaches (i.e. data-driven methods or deterministic physically based methods). Among quantitative approaches, deterministic physically based methods (PBM) are generally used at local and/or site-specific scales (1:5,000-1:25,000 and >1:5,000, respectively). The main advantage of these methods is the calculation of the probability of failure (safety factor) under specific environmental conditions, and some models can integrate land-use and climatic change. Conversely, their major drawbacks are the large amounts of reliable and detailed data required (especially material types, their thicknesses and the heterogeneity of geotechnical parameters over a large area) and the fact that only shallow landslides are taken into account. This is why they are often used at site-specific scales (>1:5,000). Thus, to take into account (i) material heterogeneity, (ii) spatial variation of physical parameters and (iii) different landslide types, the French Geological Survey (BRGM) has developed a physically based model (PBM) implemented in a GIS environment. This PBM couples a global hydrological model (GARDENIA®), including a transient unsaturated/saturated hydrological component, with a physically based model computing the stability of slopes (ALICE®, Assessment of Landslides Induced by Climatic Events) based on the Morgenstern-Price method for any slip surface. The variability of mechanical parameters is handled by a Monte Carlo approach.
The probability of obtaining a safety factor below 1 represents the probability of occurrence of a landslide for a given triggering event, and the dispersion of the distribution gives the uncertainty of the result. Finally, a map is created, displaying a probability of occurrence for each computing cell of the studied area. In order to take land-use change into account, a complementary module integrating the effects of vegetation on soil properties has recently been developed. In recent years, the model has been applied at different scales in different geomorphological environments: (i) at regional scale (1:50,000-1:25,000) in the French West Indies and French Polynesian islands; (ii) at local scale (i.e. 1:10,000) for two complex mountainous areas; and (iii) at site-specific scale (1:2,000) for one landslide. For each study the 3D geotechnical model was adapted. These studies have made it possible: (i) to discuss the different factors included in the model, especially the initial 3D geotechnical models; (ii) to specify the location of probable failures under different hydrological scenarios; and (iii) to test the effects of climatic change and land use on slopes for two cases. In this way, future changes in temperature, precipitation and vegetation cover can be analyzed, making it possible to address the impacts of global change on landslides. Finally, the results show that it is possible to obtain reliable information about future slope failures at different scales of work for different scenarios with an integrated approach. The final information about landslide susceptibility (i.e. probability of failure) can be integrated in landslide hazard assessment and could be an essential information source for future land planning.
As performed in the ANR project SAMCO (Society Adaptation for coping with Mountain risks in a global change COntext), this analysis constitutes a first step in the risk assessment chain for different climate and economic development scenarios, to evaluate the resilience of mountainous areas.
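The Monte Carlo step (probability that the safety factor falls below 1) can be sketched with the simpler infinite-slope safety factor standing in for ALICE's Morgenstern-Price computation; all parameter distributions below are illustrative assumptions:

```python
import numpy as np

def failure_probability(n=100_000, seed=0):
    """Monte Carlo estimate of P(FS < 1) for a water-sensitive slope,
    using the infinite-slope factor of safety as a stand-in for the
    Morgenstern-Price computation (all values illustrative)."""
    rng = np.random.default_rng(seed)
    c = rng.normal(10_000.0, 2_000.0, n)        # cohesion, Pa
    phi = np.radians(rng.normal(30.0, 3.0, n))  # friction angle
    m = rng.uniform(0.0, 1.0, n)                # saturation ratio (triggering)
    gamma, gamma_w = 19_000.0, 9_810.0          # unit weights, N/m^3
    z, beta = 3.0, np.radians(30.0)             # failure depth (m), slope angle
    tau = gamma * z * np.sin(beta) * np.cos(beta)              # driving stress
    sigma_eff = (gamma - m * gamma_w) * z * np.cos(beta) ** 2  # effective normal
    fs = (c + sigma_eff * np.tan(phi)) / tau
    return float(np.mean(fs < 1.0))

p_fail = failure_probability()
```

Mapped cell by cell, this probability of occurrence is the susceptibility information the abstract describes.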
A Human-Centered C2 Assessment of Model and Simulation Enhanced Planning Tools
2014-06-01
al., 1985) and accounted for 55% of the variance in students’ final course points (Goldsmith and Johnson, 1990). An Afghanistan expert knowledge...elements of power. Q9. The plan contains more than one branch to account for multiple theories and ambiguity in some data. Q3. COMPOEX allowed the...The plan contains more than one branch to account for multiple theories and ambiguity in some data. Q15. The MRMs are at an appropriate level to
3D multiscale crack propagation using the XFEM applied to a gas turbine blade
NASA Astrophysics Data System (ADS)
Holl, Matthias; Rogge, Timo; Loehnert, Stefan; Wriggers, Peter; Rolfes, Raimund
2014-01-01
This work presents a new multiscale technique to investigate advancing cracks in three-dimensional space. This fully adaptive multiscale technique is designed to take cracks of different length scales into account efficiently, by enabling fine-scale domains locally in regions of interest, i.e. where stress concentrations and high stress gradients occur. Due to crack propagation, these regions change during the simulation process. Cracks are modeled using the extended finite element method, such that an accurate and powerful numerical tool is achieved. Restricting ourselves to linear elastic fracture mechanics, the J-integral yields an accurate solution for the stress intensity factors, and the criterion of maximum hoop stress a precise direction of growth. If necessary, the crack surface computed on the finest scale is finally transferred to the corresponding coarser scale. In a final step, the model is applied to a quadrature point of a gas turbine blade, to compute crack growth on the microscale of a real structure.
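The maximum hoop stress criterion mentioned above has a closed form (Erdogan & Sih); a small sketch, with the sign convention that positive mode-II loading deflects the crack toward negative angles:

```python
import math

def kink_angle(KI, KII):
    """Crack deflection angle (radians) from the maximum hoop
    (circumferential) stress criterion:
    theta = 2 * atan((KI - sqrt(KI^2 + 8*KII^2)) / (4*KII)),
    with theta = 0 for pure mode I (KII = 0)."""
    if KII == 0.0:
        return 0.0
    return 2.0 * math.atan((KI - math.sqrt(KI ** 2 + 8.0 * KII ** 2))
                           / (4.0 * KII))
```

Pure mode I grows straight ahead, while pure mode II kinks by about 70.5 degrees, the classical result for this criterion.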
Creating the electric energy mix of a non-connected Aegean island
NASA Astrophysics Data System (ADS)
Stamou, Paraskevi; Karali, Sophia; Chalakatevaki, Maria; Daniil, Vasiliki; Tzouka, Katerina; Dimitriadis, Panayiotis; Iliopoulou, Theano; Papanicolaou, Panos; Koutsoyiannis, Demetris; Mamasis, Nikos
2017-04-01
As the electric energy in the non-connected islands is mainly produced by oil-fueled power plants, the unit cost is extremely high. Here the various energy sources are examined in order to create the appropriate electric energy mix for a non-connected Aegean island. All energy sources (renewable and fossil fuels) are examined, and each one is evaluated using technical, environmental and economic criteria. The most appropriate energy sources are then simulated, considering the corresponding energy works. Special emphasis is given to the use of biomass and the possibility of replacing (even partially) the existing oil-fueled power plant. Finally, a synthesis of various energy sources is presented that satisfies the electric energy demand, taking into account the base and peak electric loads of the island. Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
DOT National Transportation Integrated Search
2016-10-01
Rural roads account for 90.3% of the 140,476 total centerline miles of roadways in Kansas. In recent years, rural fatal crashes have accounted for about 66% of all fatal crashes. The Highway Safety Manual (HSM) provides models and methodologies for a...
ERIC Educational Resources Information Center
Young (Arthur) and Co., Washington, DC.
Several years ago, Montgomery County Public Schools (MCPS) began a Management Operations Review and Evaluation (MORE) of the entire school system, excluding school-based instruction. This MORE study is an evaluation of MCPS's current accounting system and certain related financial services functions within the Department of Financial Services. In…
Discrete stochastic simulation methods for chemically reacting systems.
Cao, Yang; Samuels, David C
2009-01-01
Discrete stochastic chemical kinetics describe the time evolution of a chemically reacting system by taking into account the fact that, in reality, chemical species are present with integer populations and exhibit some degree of randomness in their dynamical behavior. In recent years, with the development of new techniques to study biochemistry dynamics in a single cell, there are increasing studies using this approach to chemical kinetics in cellular systems, where the small copy number of some reactant species in the cell may lead to deviations from the predictions of the deterministic differential equations of classical chemical kinetics. This chapter reviews the fundamental theory related to stochastic chemical kinetics and several simulation methods based on that theory. We focus on nonstiff biochemical systems and the two most important discrete stochastic simulation methods: Gillespie's stochastic simulation algorithm (SSA) and the tau-leaping method. Different implementation strategies of these two methods are discussed. Then we recommend a relatively simple and efficient strategy that combines the strengths of the two methods: the hybrid SSA/tau-leaping method. The implementation details of the hybrid strategy are given here and a related software package is introduced. Finally, the hybrid method is applied to simple biochemical systems as a demonstration of its application.
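The core of Gillespie's direct-method SSA described above can be sketched for a hypothetical one-reaction toy system (a single decay reaction A → ∅; this is an illustrative sketch, not the software package the chapter introduces):

```python
import random

def ssa_decay(n0, c, t_end, seed=1):
    """Gillespie direct-method SSA for the single reaction A -> 0
    with stochastic rate constant c (hypothetical toy system)."""
    random.seed(seed)
    t, n = 0.0, n0
    times, counts = [t], [n]
    while t < t_end and n > 0:
        a0 = c * n                   # total propensity
        t += random.expovariate(a0)  # exponentially distributed waiting time
        n -= 1                       # fire the only reaction channel
        times.append(t)
        counts.append(n)
    return times, counts

times, counts = ssa_decay(n0=100, c=0.5, t_end=10.0)
```

With more reaction channels, the direct method additionally draws a second uniform random number to pick which channel fires, weighted by the individual propensities; tau-leaping replaces the one-event-at-a-time loop with Poisson-distributed batches of events per time leap.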
A Method for Oscillation Errors Restriction of SINS Based on Forecasted Time Series.
Zhao, Lin; Li, Jiushun; Cheng, Jianhua; Jia, Chun; Wang, Qiufan
2015-07-17
Continuity, real-time, and accuracy are the key technical indexes of evaluating comprehensive performance of a strapdown inertial navigation system (SINS). However, Schuler, Foucault, and Earth periodic oscillation errors significantly cut down the real-time accuracy of SINS. A method for oscillation error restriction of SINS based on forecasted time series is proposed by analyzing the characteristics of periodic oscillation errors. The innovative method gains multiple sets of navigation solutions with different phase delays in virtue of the forecasted time series acquired through the measurement data of the inertial measurement unit (IMU). With the help of curve-fitting based on least square method, the forecasted time series is obtained while distinguishing and removing small angular motion interference in the process of initial alignment. Finally, the periodic oscillation errors are restricted on account of the principle of eliminating the periodic oscillation signal with a half-wave delay by mean value. Simulation and test results show that the method has good performance in restricting the Schuler, Foucault, and Earth oscillation errors of SINS.
A Method for Oscillation Errors Restriction of SINS Based on Forecasted Time Series
Zhao, Lin; Li, Jiushun; Cheng, Jianhua; Jia, Chun; Wang, Qiufan
2015-01-01
Continuity, real-time, and accuracy are the key technical indexes of evaluating comprehensive performance of a strapdown inertial navigation system (SINS). However, Schuler, Foucault, and Earth periodic oscillation errors significantly cut down the real-time accuracy of SINS. A method for oscillation error restriction of SINS based on forecasted time series is proposed by analyzing the characteristics of periodic oscillation errors. The innovative method gains multiple sets of navigation solutions with different phase delays in virtue of the forecasted time series acquired through the measurement data of the inertial measurement unit (IMU). With the help of curve-fitting based on least square method, the forecasted time series is obtained while distinguishing and removing small angular motion interference in the process of initial alignment. Finally, the periodic oscillation errors are restricted on account of the principle of eliminating the periodic oscillation signal with a half-wave delay by mean value. Simulation and test results show that the method has good performance in restricting the Schuler, Foucault, and Earth oscillation errors of SINS. PMID:26193283
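The half-wave-delay principle invoked above, eliminating a periodic signal by averaging it with a copy delayed by half its period, can be demonstrated numerically (a simplified sketch of the cancellation idea only, not the authors' full forecasting algorithm; the oscillation period here is a made-up number):

```python
import math

def half_wave_average(signal, half_period_samples):
    """Cancel a periodic component of known period by averaging each
    sample with the sample half a period earlier:
    0.5*(x[i] + x[i - T/2]) removes any sinusoid of period T."""
    return [0.5 * (signal[i] + signal[i - half_period_samples])
            for i in range(half_period_samples, len(signal))]

# True value 1.0 corrupted by a Schuler-like oscillation, period 200 samples.
period = 200
x = [1.0 + 0.3 * math.sin(2 * math.pi * i / period) for i in range(1000)]
y = half_wave_average(x, period // 2)

residual = max(abs(v - 1.0) for v in y)  # oscillation is cancelled
```

Since sin(θ) + sin(θ − π) = 0, the averaged output recovers the constant true value; the forecasted time series in the paper serves to supply the phase-delayed copies without waiting half a Schuler period in real time.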
a Landmark Extraction Method Associated with Geometric Features and Location Distribution
NASA Astrophysics Data System (ADS)
Zhang, W.; Li, J.; Wang, Y.; Xiao, Y.; Liu, P.; Zhang, S.
2018-04-01
Landmarks play an important role in spatial cognition and spatial knowledge organization. Significance-measuring models are the main method of landmark extraction, but it is difficult for them to take account of the spatial distribution pattern of landmarks because the significance of a landmark is built in one-dimensional space. In this paper, starting from the geometric features of ground objects, an extraction method based on target height, target gap, and field of view is proposed. Based on the influence region of the Voronoi diagram, a description of the target gap is established as a geometric representation of the distribution of adjacent targets. Then, a segmentation process of the visual domain of Voronoi K-order adjacency is given to set up the target view under multiple views; finally, the landmarks are identified through three weighted geometric features. Comparative experiments show that this method agrees to a certain degree with the results of a traditional significance-measuring model, which verifies its effectiveness and reliability, and that it reduces the complexity of the landmark extraction process without losing the reference value of the landmarks.
Reed, H; Stanton, A; Wheat, J; Kelley, J; Davis, L; Rao, W; Smith, A; Owen, D; Francese, S
2016-01-01
In the search for better or new methods/techniques to visualise fingermarks or to analyse them by exploiting their chemical content, the inter-variability of fingermarks may hinder the assessment of a method's effectiveness. Variability is due to changes in the chemical composition of fingermarks between different donors and within the same donor, as well as to differential contact time, pressure and angle. When validating a method or comparing it with existing ones, it is not always possible to account for this type of variability. One way to compensate for these issues is to employ, in the early stages of method development, a device generating reproducible fingermarks. Here the authors present their take on such a device, as well as quantitatively describing its performance and benefits against the manual production of marks. Finally, a short application is illustrated for the use of this device, at the method development stage, in an emerging area of fingerprinting research concerning the retrieval of chemical intelligence from fingermarks. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Trends in Accounting Education: Decreasing Accounting Anxiety and Promoting New Methods
ERIC Educational Resources Information Center
Buckhaults, Jessica; Fisher, Diane
2011-01-01
In this paper, authors (a) identified accounting anxiety for the educator and the student as a possible explanation for the decline in accounting education and (b) investigated new methods for teaching accounting at the secondary and postsecondary levels that will increase interest in accounting education as well as decrease educator and student…
Electronic properties of doped and defective NiO: A quantum Monte Carlo study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shin, Hyeondeok; Luo, Ye; Ganesh, Panchapakesan
NiO is a canonical Mott (or charge-transfer) insulator and as such is notoriously difficult to describe using density functional theory (DFT) based electronic structure methods. Doped Mott insulators such as NiO are of interest for various applications but rigorous theoretical descriptions are lacking. Here, we use quantum Monte Carlo methods, which very accurately include electron-electron interactions, to examine energetics, charge- and spin-structures of NiO with various point defects, such as vacancies or substitutional doping with potassium. The formation energy of a potassium dopant is significantly lower than for a Ni vacancy, making potassium an attractive monovalent dopant for NiO. We compare our results with DFT results that include an on-site Hubbard U (DFT+U) to account for correlations and find relatively large discrepancies for defect formation energies as well as for charge and spin redistributions in the presence of point defects. Finally, it is unlikely that single-parameter fixes of DFT can give an accurate account of anything but a single parameter, e.g., the band gap; responses that depend in subtle and complex ways on ground-state properties such as charge and spin densities are likely to contain quantitative and qualitative errors.
Correcting for population structure and kinship using the linear mixed model: theory and extensions.
Hoffman, Gabriel E
2013-01-01
Population structure and kinship are widespread confounding factors in genome-wide association studies (GWAS). It has been standard practice to include principal components of the genotypes in a regression model in order to account for population structure. More recently, the linear mixed model (LMM) has emerged as a powerful method for simultaneously accounting for population structure and kinship. The statistical theory underlying the differences in empirical performance between modeling principal components as fixed versus random effects has not been thoroughly examined. We undertake an analysis to formalize the relationship between these widely used methods and elucidate the statistical properties of each. Moreover, we introduce a new statistic, effective degrees of freedom, that serves as a metric of model complexity and a novel low rank linear mixed model (LRLMM) to learn the dimensionality of the correction for population structure and kinship, and we assess its performance through simulations. A comparison of the results of LRLMM and a standard LMM analysis applied to GWAS data from the Multi-Ethnic Study of Atherosclerosis (MESA) illustrates how our theoretical results translate into empirical properties of the mixed model. Finally, the analysis demonstrates the ability of the LRLMM to substantially boost the strength of an association for HDL cholesterol in Europeans.
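The effective-degrees-of-freedom statistic mentioned above can be illustrated with the standard ridge-type formula edf = tr(H), H = X(XᵀX + λI)⁻¹Xᵀ: modeling covariates (e.g. principal components) as shrunken random effects yields fewer effective degrees of freedom than modeling them as fixed effects. This is a generic textbook sketch with made-up data, not the paper's LRLMM estimator:

```python
import numpy as np

def effective_df(X, lam):
    """Effective degrees of freedom of a ridge-type fit:
    edf = trace(H), H = X (X'X + lam*I)^-1 X'.
    lam = 0 recovers ordinary least squares (edf = rank(X));
    lam > 0 shrinks the fit, reducing model complexity."""
    p = X.shape[1]
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    return np.trace(H)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))      # 50 samples, 5 principal components

edf_fixed = effective_df(X, 0.0)      # ~5: PCs treated as fixed effects
edf_random = effective_df(X, 10.0)    # < 5: PCs treated as random effects
```

The gap between the two values is one way to quantify how much the random-effects treatment regularizes the correction for structure.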
Sharp, J R
1994-12-01
Drucker writes that the emerging theory of manufacturing includes four principles and practices: statistical quality control, manufacturing accounting, modular organization, and systems approach. SQC is a rigorous, scientific method of identifying variation in the quality and productivity of a given production process, with an emphasis on improvement. The new manufacturing economics intends to integrate the production strategy with the business strategy in order to account for the biggest portions of costs that the old methods did not assess: time and automation. Production operations that are both standardized and flexible will allow the organization to keep up with changes in design, technology, and the market. The return on innovation in this environment is predicated on a modular arrangement of flexible steps in the process. Finally, the systems approach sees the entire process as being integrated in converting goods or services into economic satisfaction. There is now a major restructuring of the U.S. health care industry, and the incorporation of these four theories into health care reform would appear to be essential. This two-part article will address two problems: Will Drucker's theories relate to health care (Part I)? Will the "new manufacturing" in health care (practice guidelines) demonstrate cost, quality, and access changes that reform demands (Part II)?
Electronic properties of doped and defective NiO: A quantum Monte Carlo study
Shin, Hyeondeok; Luo, Ye; Ganesh, Panchapakesan; ...
2017-12-28
NiO is a canonical Mott (or charge-transfer) insulator and as such is notoriously difficult to describe using density functional theory (DFT) based electronic structure methods. Doped Mott insulators such as NiO are of interest for various applications but rigorous theoretical descriptions are lacking. Here, we use quantum Monte Carlo methods, which very accurately include electron-electron interactions, to examine energetics, charge- and spin-structures of NiO with various point defects, such as vacancies or substitutional doping with potassium. The formation energy of a potassium dopant is significantly lower than for a Ni vacancy, making potassium an attractive monovalent dopant for NiO. We compare our results with DFT results that include an on-site Hubbard U (DFT+U) to account for correlations and find relatively large discrepancies for defect formation energies as well as for charge and spin redistributions in the presence of point defects. Finally, it is unlikely that single-parameter fixes of DFT can give an accurate account of anything but a single parameter, e.g., the band gap; responses that depend in subtle and complex ways on ground-state properties such as charge and spin densities are likely to contain quantitative and qualitative errors.
Modeling the Afferent Dynamics of the Baroreflex Control System
Mahdi, Adam; Sturdy, Jacob; Ottesen, Johnny T.; Olufsen, Mette S.
2013-01-01
In this study we develop a modeling framework for predicting baroreceptor firing rate as a function of blood pressure. We test models within this framework both quantitatively and qualitatively using data from rats. The models describe three components: arterial wall deformation, stimulation of mechanoreceptors located in the BR nerve endings, and modulation of the action potential frequency. The three subsystems are modeled individually following well-established biological principles. The first submodel, predicting arterial wall deformation, takes blood pressure as an input and outputs circumferential strain. The mechanoreceptor stimulation model takes circumferential strain as an input and predicts receptor deformation as an output. Finally, the neural model takes receptor deformation as an input and predicts the BR firing rate as an output. Our results show that the nonlinear dependence of firing rate on pressure can be accounted for by the nonlinear elastic properties of the arterial wall. This was observed when testing the models using multiple experiments with a single set of parameters. We find that to model the response to a square pressure stimulus, giving rise to post-excitatory depression, it is necessary to include an integrate-and-fire model, which allows the firing rate to cease when the stimulus falls below a given threshold. We show that our modeling framework, in combination with sensitivity analysis and parameter estimation, can be used to test and compare models. Finally, we demonstrate that our preferred model can exhibit all known dynamics and that it is advantageous to combine qualitative and quantitative analysis methods. PMID:24348231
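The integrate-and-fire behavior the authors require, firing that ceases when the stimulus falls below threshold, can be sketched with a minimal leaky integrate-and-fire unit. All parameters and the square stimulus here are hypothetical illustrations, not the paper's fitted baroreceptor model:

```python
def integrate_and_fire(stimulus, dt=0.001, tau=0.02, threshold=1.0):
    """Minimal leaky integrate-and-fire unit: state v integrates the
    stimulus with leak time constant tau, emits a spike when v crosses
    the threshold, then resets. A weak stimulus never reaches threshold,
    so firing ceases -- the property needed to reproduce post-excitatory
    depression after a square pressure step."""
    v, spikes = 0.0, []
    for i, s in enumerate(stimulus):
        v += dt * (-v / tau + s)   # leaky integration step
        if v >= threshold:
            spikes.append(i * dt)  # record spike time in seconds
            v = 0.0                # reset after the spike
    return spikes

# Square stimulus: strong drive for 0.5 s, then a sub-threshold level.
stim = [100.0] * 500 + [10.0] * 500
spike_times = integrate_and_fire(stim)
```

During the strong phase the unit fires regularly; during the weak phase the leak dominates and no spikes occur, so all recorded spike times fall in the first 0.5 s.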
25 CFR 39.411 - How will the auditor report its findings?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 1 2010-04-01 2010-04-01 false How will the auditor report its findings? 39.411 Section... EQUALIZATION PROGRAM Accountability § 39.411 How will the auditor report its findings? (a) The auditor selected... to the findings, where submitted, in the final audit report. (b) The auditor must submit a final...
Final Part-Word Repetitions in School-Age Children: Two Case Studies
ERIC Educational Resources Information Center
McAllister, Jan; Kingston, Mary
2005-01-01
In contrast to the many published accounts of the disfluent repetition of sounds at the beginnings of words, cases where it is predominantly the final parts of words that are repeated have been reported relatively rarely. With few exceptions, those studies that have been published have described either pre-school children or neurologically…
25 CFR 115.615 - How long after the hearing will BIA make its final decision?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 1 2013-04-01 2013-04-01 false How long after the hearing will BIA make its final decision? 115.615 Section 115.615 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR FINANCIAL ACTIVITIES TRUST FUNDS FOR TRIBES AND INDIVIDUAL INDIANS IIM Accounts: Hearing Process for Restricting an IIM...
25 CFR 115.615 - How long after the hearing will BIA make its final decision?
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 1 2011-04-01 2011-04-01 false How long after the hearing will BIA make its final decision? 115.615 Section 115.615 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR FINANCIAL ACTIVITIES TRUST FUNDS FOR TRIBES AND INDIVIDUAL INDIANS IIM Accounts: Hearing Process for Restricting an IIM...
25 CFR 115.615 - How long after the hearing will BIA make its final decision?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 1 2010-04-01 2010-04-01 false How long after the hearing will BIA make its final decision? 115.615 Section 115.615 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR FINANCIAL ACTIVITIES TRUST FUNDS FOR TRIBES AND INDIVIDUAL INDIANS IIM Accounts: Hearing Process for Restricting an IIM...
25 CFR 115.615 - How long after the hearing will BIA make its final decision?
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 1 2012-04-01 2011-04-01 true How long after the hearing will BIA make its final decision? 115.615 Section 115.615 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR FINANCIAL ACTIVITIES TRUST FUNDS FOR TRIBES AND INDIVIDUAL INDIANS IIM Accounts: Hearing Process for Restricting an IIM...
25 CFR 115.615 - How long after the hearing will BIA make its final decision?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 1 2014-04-01 2014-04-01 false How long after the hearing will BIA make its final decision? 115.615 Section 115.615 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR FINANCIAL ACTIVITIES TRUST FUNDS FOR TRIBES AND INDIVIDUAL INDIANS IIM Accounts: Hearing Process for Restricting an IIM...
25 CFR 39.411 - How will the auditor report its findings?
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 1 2011-04-01 2011-04-01 false How will the auditor report its findings? 39.411 Section... EQUALIZATION PROGRAM Accountability § 39.411 How will the auditor report its findings? (a) The auditor selected... to the findings, where submitted, in the final audit report. (b) The auditor must submit a final...
FY17 Status Report on the Initial EPP Finite Element Analysis of Grade 91 Steel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messner, M. C.; Sham, T. -L.
This report describes a modification to the elastic-perfectly plastic (EPP) strain limits design method to account for cyclic softening in Gr. 91 steel. The report demonstrates that the unmodified EPP strain limits method described in the current ASME code case is not conservative for materials with substantial cyclic softening behavior like Gr. 91 steel. However, the EPP strain limits method can be modified to be conservative for softening materials by using softened isochronous stress-strain curves in place of the standard curves developed from unsoftened creep experiments. The report provides softened curves derived from inelastic material simulations and factors describing the transformation of unsoftened curves to a softened state. Furthermore, the report outlines a method for deriving these factors directly from creep/fatigue tests. If the material softening saturates, the proposed EPP strain limits method can be further simplified, providing a methodology based on temperature-dependent softening factors that could be implemented in an ASME code case allowing the use of the EPP strain limits method with Gr. 91. Finally, the report demonstrates the conservatism of the modified method when applied to inelastic simulation results and two-bar experiments.
ERIC Educational Resources Information Center
D'Amico, Ronald; Martinez, Alexandria; Salzman, Jeffrey; Wagner, Robin
In March 2000, thirteen grants were awarded as part of the Individual Training Account/Eligible Training Provider (ITA/ETP) Demonstration. In summer and fall of 2000, the grant recipients' activities were subjected to an interim evaluation. Site visits were made to each grantee to determine what ITA policies and practices were being formulated,…
One-Dimensional Modelling of Internal Ballistics
NASA Astrophysics Data System (ADS)
Monreal-González, G.; Otón-Martínez, R. A.; Velasco, F. J. S.; García-Cascáles, J. R.; Ramírez-Fernández, F. J.
2017-10-01
A one-dimensional model is introduced in this paper for problems of internal ballistics involving solid propellant combustion. First, the work presents the physical approach and equations adopted. Closure relationships accounting for the physical phenomena taking place during combustion (interfacial friction, interfacial heat transfer, combustion) are discussed in depth. Secondly, the numerical method proposed is presented. Finally, numerical results provided by this code (UXGun) are compared with results of experimental tests and with the outcome from a well-known zero-dimensional code. The model provides successful results in firing tests of artillery guns, predicting with good accuracy the maximum pressure in the chamber and the muzzle velocity, which highlights its capabilities as a prediction/design tool for internal ballistics.
NASA Astrophysics Data System (ADS)
Nick, Arash Safavi; Vynnycky, Michael; Fredriksson, Hasse
2016-06-01
A mathematical model is derived to predict the trajectories of pores and inclusions that are nucleated in the interdendritic region during the continuous casting of steel. Using basic fluid mechanics and heat transfer, scaling analysis, and asymptotic methods, the model accounts for the possible lateral drift of the pores as a result of the dependence of the surface tension on temperature and sulfur concentration. Moreover, the soluto-thermocapillary drift of such pores prior to final solidification, coupled to the fact that any inclusions present can only have a vertical trajectory, can help interpret recent experimental observations of pore-inclusion clusters in solidified steel castings.
Radon Mitigation Approach in a Laboratory Measurement Room
Blanco-Rodríguez, Patricia; Fernández-Serantes, Luis Alfonso; Otero-Pazos, Alberto; Calvo-Rolle, José Luis; de Cos Juez, Francisco Javier
2017-01-01
Radon gas is the second leading cause of lung cancer, causing thousands of deaths annually. It can be a problem for people or animals in houses, workplaces, schools or any building. Therefore, its mitigation has become essential to avoid health problems and to prevent radon from interfering in radioactive measurements. This study describes the implementation of radon mitigation systems at a radioactivity laboratory in order to reduce interferences in the different works carried out. A large set of radon concentration samples is obtained from measurements at the laboratory. While several mitigation methods were considered, the finally applied solution is explained in detail; it achieved very good results, reducing the radon concentration by 76%. PMID:28492468
Radon Mitigation Approach in a Laboratory Measurement Room.
Blanco-Rodríguez, Patricia; Fernández-Serantes, Luis Alfonso; Otero-Pazos, Alberto; Calvo-Rolle, José Luis; de Cos Juez, Francisco Javier
2017-05-11
Radon gas is the second leading cause of lung cancer, causing thousands of deaths annually. It can be a problem for people or animals in houses, workplaces, schools or any building. Therefore, its mitigation has become essential to avoid health problems and to prevent radon from interfering in radioactive measurements. This study describes the implementation of radon mitigation systems at a radioactivity laboratory in order to reduce interferences in the different works carried out. A large set of radon concentration samples is obtained from measurements at the laboratory. While several mitigation methods were considered, the finally applied solution is explained in detail; it achieved very good results, reducing the radon concentration by 76%.
Accounting for selection bias in association studies with complex survey data.
Wirth, Kathleen E; Tchetgen Tchetgen, Eric J
2014-05-01
Obtaining representative information from hidden and hard-to-reach populations is fundamental to describe the epidemiology of many sexually transmitted diseases, including HIV. Unfortunately, simple random sampling is impractical in these settings, as no registry of names exists from which to sample the population at random. However, complex sampling designs can be used, as members of these populations tend to congregate at known locations, which can be enumerated and sampled at random. For example, female sex workers may be found at brothels and street corners, whereas injection drug users often come together at shooting galleries. Despite the logistical appeal, complex sampling schemes lead to unequal probabilities of selection, and failure to account for this differential selection can result in biased estimates of population averages and relative risks. However, standard techniques to account for selection can lead to substantial losses in efficiency. Consequently, researchers implement a variety of strategies in an effort to balance validity and efficiency. Some researchers fully or partially account for the survey design, whereas others do nothing and treat the sample as a realization of the population of interest. We use directed acyclic graphs to show how certain survey sampling designs, combined with subject-matter considerations unique to individual exposure-outcome associations, can induce selection bias. Finally, we present a novel yet simple maximum likelihood approach for analyzing complex survey data; this approach optimizes statistical efficiency at no cost to validity. We use simulated data to illustrate this method and compare it with other analytic techniques.
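The basic design-weighting idea the abstract contrasts against, weighting each observation by the inverse of its selection probability, can be sketched with a Hájek-style estimator on made-up numbers (a textbook illustration of unequal-probability sampling, not the paper's maximum likelihood approach):

```python
def weighted_mean(values, selection_probs):
    """Hajek-style estimate of a population mean from a sample drawn
    with unequal selection probabilities: weight each observation by
    1/p (its inverse probability of selection) and normalize."""
    weights = [1.0 / p for p in selection_probs]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Hypothetical venue-based sample: three individuals from oversampled
# venues (p = 0.5) and two from undersampled venues (p = 0.1).
values = [1, 1, 1, 0, 0]            # e.g. binary outcome status
probs = [0.5, 0.5, 0.5, 0.1, 0.1]

naive = sum(values) / len(values)        # 0.6: ignores the design
adjusted = weighted_mean(values, probs)  # down-weights oversampled venues
```

The unweighted mean overstates the prevalence because positive cases came disproportionately from the heavily sampled venues; the weighted estimate corrects the selection, at the usual cost of increased variance that the paper's likelihood approach is designed to avoid.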
NASA Astrophysics Data System (ADS)
Nespoli, Massimo; Belardinelli, Maria E.; Anderlini, Letizia; Bonafede, Maurizio; Pezzo, Giuseppe; Todesco, Micol; Rinaldi, Antonio P.
2017-12-01
The 2012 Emilia Romagna (Italy) seismic sequence has been extensively studied given the occurrence of two mainshocks, close to each other both temporally and spatially. The recent literature reports several fault models, obtained with different inversion methods and different datasets. Several authors investigated the possibility that the second event was triggered by the first mainshock, with inconclusive results. In this work, we consider all the available InSAR and GPS datasets and two planar fault geometries, which are based on both seismological and geological constraints. We account for a layered, elastic half-space hosting the dislocation and compare the slip distribution resulting from the inversion and the related changes in Coulomb Failure Function (CFF) obtained with both a homogeneous and a layered half-space. Finally, we focus on the interaction between the two main events, discriminating the contributions of coseismic and early postseismic slip of the first mainshock to the generation of the second event, and discuss the spatio-temporal distribution of the seismic sequence. When accounting for both InSAR and GPS geodetic data we are able to reproduce a detailed coseismic slip distribution for the two mainshocks that is in accordance with the overall aftershock seismicity distribution. Furthermore, we see that an elastic medium with depth-dependent rigidity better accounts for the lack of shallow seismicity, amplifying, with respect to the homogeneous case, the mechanical interaction of the two mainshocks.
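The Coulomb Failure Function change used to assess triggering between the two mainshocks follows the standard definition ΔCFF = Δτ + μ′Δσₙ. The sketch below uses that textbook formula with hypothetical stress changes; the friction value is a commonly assumed default, not a number from this study:

```python
def delta_cff(d_shear, d_normal, mu_eff=0.4):
    """Change in Coulomb Failure Function on a receiver fault:
    dCFF = d_tau + mu' * d_sigma_n, with d_tau the shear stress change
    in the slip direction, d_sigma_n the normal stress change (positive
    for unclamping), and mu' an effective friction coefficient.
    A positive dCFF brings the receiver fault closer to failure."""
    return d_shear + mu_eff * d_normal

# Hypothetical stress changes (MPa) imparted by a first event
# on the plane of a second: both terms promote failure here.
change = delta_cff(d_shear=0.05, d_normal=0.02)  # 0.058 MPa
```

In a layered half-space the imparted Δτ and Δσₙ differ from the homogeneous case, which is why the paper finds the medium choice amplifies or dampens the computed interaction.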
Quantification of Microbial Phenotypes
Martínez, Verónica S.; Krömer, Jens O.
2016-01-01
Metabolite profiling technologies have improved to generate close to quantitative metabolomics data, which can be employed to quantitatively describe the metabolic phenotype of an organism. Here, we review the current technologies available for quantitative metabolomics, present their advantages and drawbacks, and the current challenges to generate fully quantitative metabolomics data. Metabolomics data can be integrated into metabolic networks using thermodynamic principles to constrain the directionality of reactions. Here we explain how to estimate Gibbs energy under physiological conditions, including examples of the estimations, and the different methods for thermodynamics-based network analysis. The fundamentals of the methods and how to perform the analyses are described. Finally, an example applying quantitative metabolomics to a yeast model by 13C fluxomics and thermodynamics-based network analysis is presented. The example shows that (1) these two methods are complementary to each other; and (2) there is a need to take into account Gibbs energy errors. Better estimations of metabolic phenotypes will be obtained when further constraints are included in the analysis. PMID:27941694
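The Gibbs energy estimation used to constrain reaction directionality follows the standard relation ΔG = ΔG° + RT ln Q, where Q is the mass-action ratio assembled from measured metabolite concentrations. The numbers below are hypothetical, chosen only to show how concentrations can flip feasibility:

```python
import math

R = 8.314e-3  # gas constant, kJ mol^-1 K^-1

def gibbs_energy(dg_standard, mass_action_ratio, temp=310.15):
    """Gibbs energy of reaction under given conditions:
    dG = dG0 + R*T*ln(Q). A negative dG means flux can only run
    forward, which is the thermodynamic constraint imposed on the
    metabolic network. Temperature defaults to 37 C in kelvin."""
    return dg_standard + R * temp * math.log(mass_action_ratio)

# Hypothetical reaction, dG0 = +5 kJ/mol: infeasible at Q = 1, but
# feasible when products are held 100-fold below substrates (Q = 0.01).
dg_at_unity = gibbs_energy(5.0, 1.0)   # +5.0 kJ/mol: forward flux blocked
dg_in_vivo = gibbs_energy(5.0, 0.01)   # negative: forward flux allowed
```

Propagating the measurement and estimation errors on ΔG° and Q through this relation is exactly the Gibbs-energy-error issue the review highlights.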
NASA Astrophysics Data System (ADS)
Tamboli, Prakash Kumar; Duttagupta, Siddhartha P.; Roy, Kallol
2015-08-01
The paper deals with dynamic compensation of delayed Self Powered Flux Detectors (SPFDs) using a discrete-time H∞ filtering method for improving the response of SPFDs with significant delayed components, such as Platinum and Vanadium SPFDs. We also present a comparative study between the Linear Matrix Inequality (LMI) based H∞ filtering and Algebraic Riccati Equation (ARE) based Kalman filtering methods with respect to their delay compensation capabilities. Finally, an improved recursive H∞ filter based on the adaptive fading memory technique is proposed, which provides improved performance over existing methods. The existing delay compensation algorithms do not account for the rate of change in the signal when determining the filter gain and therefore add significant noise during the delay compensation process. The proposed adaptive fading memory H∞ filter minimizes the overall noise very effectively while keeping the response time at a minimum. The recursive algorithm is easy to implement in real time as compared to the LMI (or ARE) based solutions.
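The fading-memory idea invoked above, discounting old data so the filter gain stays responsive to signal changes, can be sketched with a scalar Kalman-style recursion whose predicted variance is inflated by a factor λ ≥ 1. This is a generic illustration with made-up noise parameters, not the authors' adaptive H∞ algorithm:

```python
def fading_memory_filter(measurements, q=1e-4, r=0.04, lam=1.05):
    """Scalar fading-memory filter: a standard one-state Kalman
    recursion whose predicted variance is inflated by lam >= 1,
    keeping the gain from collapsing to zero so the filter tracks
    changes in the signal faster than a plain Kalman filter."""
    x, p = measurements[0], 1.0
    estimates = [x]
    for z in measurements[1:]:
        p = lam * p + q        # fading-memory prediction step
        k = p / (p + r)        # gain stays bounded away from zero
        x = x + k * (z - x)    # measurement update
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Noiseless step input: the estimate should track the new level.
zs = [0.0] * 5 + [1.0] * 45
est = fading_memory_filter(zs)
```

With λ = 1 this reduces to the ordinary Kalman recursion; larger λ shortens the effective memory, trading steady-state noise rejection for faster response, which is the balance the paper tunes adaptively.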
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yao; Wan, Liang; Chen, Kai
An automated method has been developed to characterize the type and spatial distribution of twinning in crystal orientation maps from synchrotron X-ray Laue microdiffraction results. The method relies on a look-up table approach. Taking into account the twin axis and twin plane for plausible rotation and reflection twins, respectively, and the point group symmetry operations for a specific crystal, a look-up table listing crystal-specific rotation angle–axis pairs, which reveal the orientation relationship between the twin and the parent lattice, is generated. By comparing these theoretical twin–parent orientation relationships in the look-up table with the measured misorientations, twin boundaries are mapped automatically from Laue microdiffraction raster scans with thousands of data points. Finally, taking advantage of the high orientation resolution of the Laue microdiffraction method, this automated approach is also applicable to differentiating twinning elements among multiple twinning modes in any crystal system.
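The comparison step can be sketched for the cubic case: the misorientation between a measured orientation pair is reduced over the 24 proper point-group operations and matched against a table entry such as the Σ3 twin (60° about <111>). A minimal sketch, with illustrative table contents and tolerance:

```python
import numpy as np
from itertools import permutations, product

def cubic_symmetry_ops():
    """The 24 proper rotation matrices of the cubic point group,
    built from signed permutation matrices with determinant +1."""
    ops = []
    for perm in permutations(range(3)):
        for signs in product([1, -1], repeat=3):
            M = np.zeros((3, 3))
            for r, c in enumerate(perm):
                M[r, c] = signs[r]
            if np.isclose(np.linalg.det(M), 1.0):
                ops.append(M)
    return ops

def misorientation_angle(g1, g2, ops):
    """Minimum rotation angle (degrees) between orientation matrices
    g1 and g2 after reduction over the crystal symmetry operations."""
    best = 180.0
    for O in ops:
        dg = g1 @ O @ g2.T
        c = np.clip((np.trace(dg) - 1.0) / 2.0, -1.0, 1.0)
        best = min(best, np.degrees(np.arccos(c)))
    return best
```

A measured misorientation falling within a small tolerance of a table entry (e.g. |θ − 60°| < 1° for Σ3) would then be flagged as a twin boundary.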
NASA Astrophysics Data System (ADS)
Tabrizi, Babak H.; Ghaderi, Seyed Farid
2016-09-01
Simultaneous planning of project scheduling and material procurement can reduce project execution costs, and the issue is addressed here by a mixed-integer programming model. The proposed model facilitates procurement decisions by accounting for a number of suppliers, each offering a distinctive discount formula, from which to purchase the required materials. It aims to develop schedules with the best net present value with respect to the benefits and costs of project execution. A genetic algorithm is applied to the problem, together with a modified version equipped with a variable neighbourhood search. The underlying factors of the solution methods are calibrated by the Taguchi method to obtain robust solutions. The performance of the aforementioned methods is compared for different problem sizes, in which the local search proved efficient. Finally, a sensitivity analysis is carried out to check the effect of inflation on the objective function value.
A line transect model for aerial surveys
Quang, Pham Xuan; Lanctot, Richard B.
1991-01-01
We employ a line transect method to estimate the density of the common and Pacific loon in the Yukon Flats National Wildlife Refuge from aerial survey data. Line transect methods have the advantage of automatically taking into account "visibility bias" due to differences in the detectability of animals at different distances from the transect line. However, line transect methods applied to aerial surveys must overcome two difficulties: a blind strip beneath the aircraft within which animals cannot be seen, and inaccurate recording of sighting distances due to high travel speeds, so that in practice only a few reliable distance-class counts are available. We propose a unimodal detection function that provides an estimate of the effective area lost due to the blind strip, under the assumption that a line of perfect detection exists parallel to the transect line. The unimodal detection function can also be applied when a blind strip is absent, and in certain instances when the maximum probability of detection is less than 100%. A simple bootstrap procedure to estimate standard error is illustrated. Finally, we present results from a small set of Monte Carlo experiments.
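The density estimate built on a fitted detection function can be sketched numerically: the effective strip half-width μ is the integral of the detection function over perpendicular distance, and density follows as n/(2Lμ). A minimal sketch with a hypothetical unimodal curve whose peak sits away from the flight line (all numbers illustrative, not the paper's fitted model):

```python
import numpy as np

def effective_strip_width(g, x_max, n_grid=10000):
    """Trapezoid-rule integral of detection function g(x) over
    perpendicular distance [0, x_max]: the effective half-strip width."""
    x = np.linspace(0.0, x_max, n_grid)
    y = g(x)
    dx = x[1] - x[0]
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * dx))

def density_estimate(n_sightings, transect_length, mu):
    # D = n / (2 * L * mu): both sides of the line are surveyed
    return n_sightings / (2.0 * transect_length * mu)

# hypothetical unimodal detection: perfect detection near 60 m, falling
# off closer in (blind strip under the aircraft) and farther out
g = lambda x: np.exp(-((x - 60.0) ** 2) / (2 * 30.0 ** 2))
mu = effective_strip_width(g, x_max=300.0)                     # metres
D = density_estimate(n_sightings=45, transect_length=100_000.0, mu=mu)
```

Standard errors would then come from bootstrapping the transect-level counts, as the abstract describes.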
Systematizing the production of environmental plans: an Australian example
NASA Astrophysics Data System (ADS)
Davis, J. Richard
1985-09-01
Environmental planning legislation in New South Wales now requires local government authorities to draw up statutory plans that take into account, among other concerns, both the biophysical and the social environmental issues within their jurisdictions. The SIRO-PLAN method of plan production provides a systematic mechanism for fulfilling this requirement. This article describes the application of the method by planning researchers over 18 months to the production of a Local Environmental Plan for a rural local government in New South Wales. The policy formulation, the purposive data collection, and the deliberate adjustment of plans in order to recognize interest group requirements were all found to be valuable features of the method, while the translation of the ultimately chosen land-use plan into the explicit regulatory controls available to the local government authority was found to require further refinement. The capacity of SIRO-PLAN to quantify the resolution of competing environmental concerns in the final plan, although of value to planning researchers, proved too arcane for traditionally trained planners.
The method of selecting an integrated development territory for the high-rise unique constructions
NASA Astrophysics Data System (ADS)
Sheina, Svetlana; Shevtsova, Elina; Sukhinin, Alexander; Priss, Elena
2018-03-01
On the basis of data provided by the Department of Architecture and Urban Planning of the city of Rostov-on-Don, the problem of choosing a territory for integrated development, giving priority to the construction of high-rise and unique buildings, is solved. The objective of the study was to develop a methodology for selecting such an area and to apply the proposed method to the evaluation of four candidate territories for integrated development. Along with standard indicators of integrated evaluation, the developed method considers additional indicators that assess a territory from the standpoint of high-rise unique construction. The final result of the study is a ranking of the functional priority of the areas that takes into account the construction of residential as well as public and business objects of unique high-rise construction. The use of the developed methodology will allow investors and customers to assess the investment attractiveness of a future unique construction project on the proposed site.
NASA Astrophysics Data System (ADS)
Schießl, Stefan P.; Rother, Marcel; Lüttgens, Jan; Zaumseil, Jana
2017-11-01
The field-effect mobility is an important figure of merit for semiconductors such as random networks of single-walled carbon nanotubes (SWNTs). However, owing to their network properties and quantum capacitance, the standard models for field-effect transistors cannot be applied without modifications. Several different methods are used to determine the mobility, often with very different results. We fabricated and characterized field-effect transistors with different polymer-sorted, semiconducting SWNT network densities ranging from low (≈6 μm⁻¹) to densely packed quasi-monolayers (≈26 μm⁻¹) with a maximum on-conductance of 0.24 μS μm⁻¹ and compared four different techniques to evaluate the field-effect mobility. We demonstrate the limits and requirements for each method with regard to device layout and carrier accumulation. We find that techniques that take into account the measured capacitance of the active device give the most reliable mobility values. Finally, we compare our experimental results to a random-resistor-network model.
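When the measured device capacitance is used, linear-regime mobility extraction reduces to the standard transconductance relation μ = g·L/(W·C·V_DS). A minimal sketch; the device numbers in the usage lines are hypothetical, not the paper's transistors:

```python
def fet_mobility(gm, L, W, C_meas, V_ds):
    """Linear-regime field-effect mobility from transconductance gm,
    channel length L, width W, measured areal capacitance C_meas and
    drain-source voltage V_ds (all SI units):
    mu = gm * L / (W * C_meas * V_ds)."""
    return gm * L / (W * C_meas * V_ds)

# hypothetical device: gm = 2 uS, L = 20 um, W = 1 mm,
# C_meas = 10 nF/cm^2 = 1e-4 F/m^2, V_ds = 0.1 V
mu = fet_mobility(2e-6, 20e-6, 1e-3, 1e-4, 0.1)  # in m^2/(V s)
```

Substituting the measured capacitance for a plate-capacitor estimate is what the abstract identifies as the key to reliable values, since the quantum capacitance of a sparse network lowers C well below the geometric value.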
78 FR 25818 - Truth in Lending (Regulation Z)
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-03
...The Bureau of Consumer Financial Protection (Bureau) issues this final rule to amend Regulation Z, which implements the Truth in Lending Act (TILA), and the official interpretations to the regulation. Regulation Z generally prohibits a card issuer from opening a credit card account for a consumer, or increasing the credit limit applicable to a credit card account, unless the card issuer considers the consumer's ability to make the required payments under the terms of such account. Regulation Z currently requires that issuers consider the consumer's independent ability to pay, regardless of the consumer's age; in contrast, TILA expressly requires consideration of an independent ability to pay only for applicants who are under the age of 21. The final rule amends Regulation Z to remove the requirement that issuers consider the consumer's independent ability to pay for applicants who are 21 or older, and permits issuers to consider income and assets to which such consumers have a reasonable expectation of access.
NASA Astrophysics Data System (ADS)
Jolivet, R.; Simons, M.
2016-12-01
InSAR time series analysis allows reconstruction of ground deformation with meter-scale spatial resolution and high temporal sampling. For instance, the ESA Sentinel-1 Constellation is capable of providing 6-day temporal sampling, thereby opening a new window on the spatio-temporal behavior of tectonic processes. However, due to computational limitations, most time series methods rely on a pixel-by-pixel approach. This limitation is a concern because (1) accounting for orbital errors requires referencing all interferograms to a common set of pixels before reconstruction of the time series and (2) spatially correlated atmospheric noise due to tropospheric turbulence is ignored. Decomposing interferograms into statistically independent wavelets will mitigate issues of correlated noise, but prior estimation of orbital uncertainties will still be required. Here, we explore a method that considers all pixels simultaneously when solving for the spatio-temporal evolution of interferometric phase. Our method is based on a massively parallel implementation of a conjugate direction solver. We consider an interferogram as the sum of the phase difference between two SAR acquisitions and the corresponding orbital errors. In addition, we fit the temporal evolution with a physically parameterized function while accounting for spatially correlated noise in the data covariance. We assume noise is isotropic for any given InSAR pair with a covariance described by an exponential function that decays with increasing separation distance between pixels. We regularize our solution in space using a similar exponential function as model covariance. Given the problem size, we avoid matrix multiplications of the full covariances by computing convolutions in the Fourier domain. We first solve the unregularized least squares problem using the LSQR algorithm to approach the final solution, then run our conjugate direction solver to account for data and model covariances.
We present synthetic tests showing the efficiency of our method. We then reconstruct a 20-year continuous time series covering Northern Chile. Without input from any additional GNSS data, we recover the secular deformation rate, seasonal oscillations and the deformation fields from the 2005 Mw 7.8 Tarapaca and 2007 Mw 7.7 Tocopilla earthquakes.
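The Fourier-domain covariance trick described above can be sketched directly: because the exponential covariance c(r) = σ²·exp(−r/λ) depends only on pixel separation, applying C·v is a 2-D convolution and costs O(N log N) with FFTs instead of O(N²) with a dense matrix. A minimal numpy sketch; grid size, pixel spacing and correlation length are hypothetical:

```python
import numpy as np

def apply_exp_covariance(field, dx, corr_len, sigma2=1.0):
    """Apply C @ v for an isotropic exponential covariance
    c(r) = sigma2 * exp(-r / corr_len) via FFT convolution.
    Zero-padding to twice the grid size emulates linear convolution."""
    ny, nx = field.shape
    # signed integer pixel offsets on the padded grid (fftfreq * N)
    Y, X = np.meshgrid(np.fft.fftfreq(2 * ny, d=1.0 / (2 * ny)),
                       np.fft.fftfreq(2 * nx, d=1.0 / (2 * nx)),
                       indexing="ij")
    r = np.hypot(Y, X) * dx
    kernel = sigma2 * np.exp(-r / corr_len)
    pad = np.zeros((2 * ny, 2 * nx))
    pad[:ny, :nx] = field
    out = np.fft.ifft2(np.fft.fft2(pad) * np.fft.fft2(kernel)).real
    return out[:ny, :nx]
```

Inside a conjugate-direction iteration, every covariance product would be replaced by a call like this, which is what makes the all-pixels-simultaneously formulation tractable.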
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-01
... DEPARTMENT OF THE TREASURY Internal Revenue Service 26 CFR Part 1 [TD 9534] RIN 1545-BD81 Methods... regulations relating to the methods of accounting, including the inventory methods, to be used by corporations... liquidations. These regulations clarify and simplify the rules regarding the accounting methods to be used...
Lopiano, Kenneth K; Young, Linda J; Gotway, Carol A
2014-09-01
Spatially referenced datasets arising from multiple sources are routinely combined to assess relationships among various outcomes and covariates. The geographical units associated with the data, such as the geographical coordinates or areal-level administrative units, are often spatially misaligned, that is, observed at different locations or aggregated over different geographical units. As a result, the covariate is often predicted at the locations where the response is observed. The method used to align disparate datasets must be accounted for when subsequently modeling the aligned data. Here we consider the case where kriging is used to align datasets in point-to-point and point-to-areal misalignment problems when the response variable is non-normally distributed. If the relationship is modeled using generalized linear models, the additional uncertainty induced from using the kriging mean as a covariate introduces a Berkson error structure. In this article, we develop a pseudo-penalized quasi-likelihood algorithm to account for the additional uncertainty when estimating regression parameters and associated measures of uncertainty. The method is applied to a point-to-point example assessing the relationship between low-birth weights and PM2.5 levels after the onset of the largest wildfire in Florida history, the Bugaboo scrub fire. A point-to-areal misalignment problem is presented where the relationship between asthma events in Florida's counties and PM2.5 levels after the onset of the fire is assessed. Finally, the method is evaluated using a simulation study. Our results indicate that the method performs well in terms of coverage for 95% confidence intervals, whereas naive methods that ignore the additional uncertainty tend to underestimate the variability associated with parameter estimates. The underestimation is most profound in Poisson regression models. © 2014, The International Biometric Society.
199 Development of a National Guideline on Skin Testing and Immunotherapy
Linnemann, Désirée Larenas; Ortega Martell, José Antonio; del Rio, Blanca; Rodriguez-Perez, Noel; Arias-Cruz, Alfredo; Estrada, Alan
2012-01-01
Background Several international guidelines exist on allergen immunotherapy (AIT) (e.g., American, European, British, Spanish, Italian), but the local conditions in each country limit their applicability. We present the steps we followed to develop a National Guideline on AIT, taking into account local legislation, available extracts, costs and patient preference. Methods Firstly, a nation-wide survey on the practice of skin testing and AIT was undertaken among all members of the Mexican Allergist Societies. Secondly, based on the replies obtained from the survey, clinical questions were formulated on critical points and on issues susceptible to improvement, as diagnosed by the survey. Thirdly, all 6 Regional Allergist Societies were visited to obtain the opinion of their members on the clinical questions concerning how immunotherapy could best be practiced under local Mexican conditions. This led to the consensus experience. Fourthly, 6 experts sought replies to the clinical questions by reviewing the literature and assigning quality of evidence to the articles on the specific issues treated by each clinical question. Results To develop the final document, the GRADE approach was used. For each clinical question, both the knowledge from the local consensus experience and the evidence-based replies were taken into account, as well as cost, patient preference and safety, to make a set of recommendations and suggestions on the most crucial aspects of skin testing and AIT. Forming centers of allergists in Mexico corrected the final draft. The final document came out as the January issue of Revista Mexicana Alergia and was presented by the authors in a National Course on Immunotherapy (May 2011), with, apart from the lectures, a more workshop-like part to allow for practical exercises and discussion. The updated questions on allergen immunotherapy for the final board exam are based on the Guideline. Allergy residents developed a slide show.
In 2012, the Regional Allergist Societies will be visited again. Conclusions We present a democratic way in which a National Guideline can be developed, supported by evidence-based medicine and local experience, in a country where little is legislated in this respect and quality improvement has to be stimulated by the professional community. We show how implementation can be enhanced.
NASA Technical Reports Server (NTRS)
Medelius, Petro; Jolley, Scott; Fitzpatrick, Lilliana; Vinje, Rubiela; Williams, Martha; Clayton, LaNetra; Roberson, Luke; Smith, Trent; Santiago-Maldonado, Edgardo
2007-01-01
Wiring is a major operational component of aerospace hardware that accounts for substantial weight and volume. Over time, wire insulation can age and fail, often leading to catastrophic events such as system failure or fire. The next generation of wiring must be reliable and sustainable over long periods of time. These features will be achieved by the development of a wire insulation capable of autonomous self-healing that mitigates failure before it reaches a catastrophic level. In order to develop a self-healing insulation material, three steps must occur. First, methods of bonding similar materials that can be initiated autonomously must be developed; this process will lead to a manual repair system for polyimide wire insulation. Second, ways to initiate these bonding methods using materials similar to the primary insulation must be developed. Finally, steps one and two must be integrated to produce a repair that leaves no residues that degrade the insulating properties of the final repaired insulation. The self-healing technology, teamed with the ability to identify and locate damage, will greatly improve the reliability and safety of the electrical wiring of critical systems. This paper will address these topics, discuss the results of preliminary testing, and outline the remaining development issues related to self-healing wire insulation.
Regional Scale Simulations of Nitrate Leaching through Agricultural Soils of California
NASA Astrophysics Data System (ADS)
Diamantopoulos, E.; Walkinshaw, M.; O'Geen, A. T.; Harter, T.
2016-12-01
Nitrate is recognized as one of California's most widespread groundwater contaminants. As opposed to point sources, which are relatively easy to identify as sources of contamination, non-point sources of nitrate are diffuse and linked to the widespread use of fertilizers on agricultural soils. California's agricultural regions have an incredible diversity of soils that encompass a huge range of properties. This complicates studies dealing with nitrate risk assessment, since important biological and physicochemical processes occur in the first meters of the vadose zone. The objective of this study is to evaluate all agricultural soils in California according to their potential for nitrate leaching, based on numerical simulations using the Richards equation. We conducted simulations for 6000 unique soil profiles (over 22000 soil horizons), taking into account the effects of climate, crop type, irrigation and fertilization management scenarios. The final goal of this study is to evaluate simple management methods in terms of reduced nitrate leaching. We estimated drainage rates of water below the root zone and nitrate concentrations in the drainage water at the regional scale. We present maps for all agricultural soils in California which can be used for risk assessment studies. Finally, our results indicate that adoption of simple irrigation and fertilization methods may significantly reduce nitrate leaching in vulnerable regions.
Inverse method predicting spinning modes radiated by a ducted fan from free-field measurements.
Lewy, Serge
2005-02-01
In this study, the inverse problem of deducing the modal structure of the acoustic field generated by a ducted turbofan is addressed using conventional far-field directivity measurements. The final objective is to make input data available for predicting noise radiation in other configurations that have not been tested. The present paper is devoted to the analytical part of the study. The proposed method is based on the equations governing ducted sound propagation and free-field radiation. It leads to fast computations, checked against Rolls-Royce tests made in the framework of previous European projects. Results appear to be reliable even though the system of equations to be solved is generally underdetermined (more propagating modes than acoustic measurements). A limited number of modes are thus selected according to any a priori knowledge of the sources. A first guess of the source amplitudes is obtained by adjusting the calculated maximum of radiation of each mode to the measured sound pressure level at the same angle. A least-squares fitting gives the final solution. A simple correction can be made to take account of the mean flow velocity inside the nacelle, which shifts the directivity patterns. It consists of modifying the actual frequency to keep the cut-off ratios unchanged.
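The final amplitude-fitting step described above is a linear least-squares problem once candidate modes are selected. A minimal sketch; the directivity patterns here are random placeholders, not computed from duct-propagation equations, and `numpy.linalg.lstsq` returns the minimum-norm solution when the system is underdetermined:

```python
import numpy as np

def fit_mode_amplitudes(patterns, measured, rcond=None):
    """Least-squares fit of modal amplitudes.
    patterns: (n_angles x n_modes) matrix, each column the free-field
    directivity of one candidate mode at the measurement angles.
    measured: far-field pressures at those angles.
    For underdetermined systems lstsq gives the minimum-norm solution."""
    amps, *_ = np.linalg.lstsq(patterns, measured, rcond=rcond)
    return amps
```

With more candidate modes than measurement angles, a priori mode selection (as the abstract describes) is what keeps the fitted amplitudes physically meaningful rather than merely minimum-norm.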
Locality constrained joint dynamic sparse representation for local matching based face recognition.
Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun
2014-01-01
Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images degrade the performance of SRC and of most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC.
Simulation of the Beating Heart Based on Physically Modeling a Deformable Balloon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rohmer, Damien; Sitek, Arkadiusz; Gullberg, Grant T.
2006-07-18
The motion of the beating heart is complex and creates artifacts in SPECT and x-ray CT images. Phantoms such as the Jaszczak Dynamic Cardiac Phantom are used to simulate cardiac motion for evaluation of acquisition and data processing protocols used for cardiac imaging. Two concentric elastic membranes filled with water are connected to tubing and a pump apparatus for creating fluid flow in and out of the inner volume to simulate motion of the heart. In the present report, the movement of two concentric balloons is solved numerically in order to create a computer simulation of the motion of the moving membranes in the Jaszczak Dynamic Cardiac Phantom. A system of differential equations, based on the physical properties, determines the motion. Two methods are tested for solving the system of differential equations. The results of both methods are similar, providing a final shape that does not converge to a trivial circular profile. Finally, a tomographic imaging simulation is performed by acquiring static projections of the moving shape and reconstructing the result to observe motion artifacts. Two cases are taken into account: in one case each projection angle is sampled for a short time interval, and in the other case for a longer time interval. The longer sampling acquisition shows a clear improvement in decreasing the tomographic streaking artifacts.
[Statistical analysis of German radiologic periodicals: developmental trends in the last 10 years].
Golder, W
1999-09-01
To identify which statistical tests are applied in German radiological publications, to what extent their use has changed during the last decade, and which factors might be responsible for this development. The major articles published in "ROFO" and "DER RADIOLOGE" during 1988, 1993 and 1998 were reviewed for statistical content. The contributions were classified by principal focus and radiological subspecialty. The methods used were assigned to descriptive, basic and advanced statistics. Sample size, significance level and power were established. The use of experts' assistance was monitored. Finally, we calculated the so-called cumulative accessibility of the publications. 525 contributions were found to be eligible. In 1988, 87% used descriptive statistics only, 12.5% basic, and 0.5% advanced statistics. The corresponding figures for 1993 and 1998 are 62 and 49%, 32 and 41%, and 6 and 10%, respectively. Statistical techniques were most likely to be used in research on musculoskeletal imaging and in articles dedicated to MRI. Six basic categories of statistical methods account for the complete statistical analysis appearing in 90% of the articles. ROC analysis is the single most common advanced technique. Authors increasingly make use of statistical experts' opinions and programs. During the last decade, the use of statistical methods in German radiological journals has fundamentally improved, both quantitatively and qualitatively. Presently, advanced techniques account for 20% of the pertinent statistical tests. This development seems to be promoted by the increasing availability of statistical analysis software.
Qiu, Zuo-Cheng; Dong, Xiao-Li; Dai, Yi; Xiao, Gao-Keng; Wang, Xin-Luan; Wong, Ka-Chun; Wong, Man-Sau; Yao, Xin-Sheng
2016-01-01
Rhizoma Drynariae (RD), as one of the most common clinically used folk medicines, has been reported to exert potent anti-osteoporotic activity. The bioactive ingredients and mechanisms that account for its bone protective effects are under active investigation. Here we adopt a novel in silico target fishing method to reveal the target profile of RD. Cathepsin K (Ctsk) is one of the cysteine proteases that is over-expressed in osteoclasts and accounts for the increase in bone resorption in metabolic bone disorders such as postmenopausal osteoporosis. It has been the focus of target based drug discovery in recent years. We have identified two components in RD, Kushennol F and Sophoraflavanone G, that can potentially interact with Ctsk. Biological studies were performed to verify the effects of these compounds on Ctsk and its related bone resorption process, which include the use of in vitro fluorescence-based Ctsk enzyme assay, bone resorption pit formation assay, as well as Receptor Activator of Nuclear factor κB (NF-κB) ligand (RANKL)-induced osteoclastogenesis using murine RAW264.7 cells. Finally, the binding mode and stability of these two compounds that interact with Ctsk were determined by molecular docking and dynamics methods. The results showed that the in silico target fishing method could successfully identify two components from RD that show inhibitory effects on the bone resorption process related to protease Ctsk. PMID:27999266
A mathematical programming approach for sequential clustering of dynamic networks
NASA Astrophysics Data System (ADS)
Silva, Jonathan C.; Bennett, Laura; Papageorgiou, Lazaros G.; Tsoka, Sophia
2016-02-01
A common analysis performed on dynamic networks is community structure detection, a challenging problem that aims to track the temporal evolution of network modules. An emerging area in this field is evolutionary clustering, where the community structure of a network snapshot is identified by taking into account both its current state as well as previous time points. Based on this concept, we have developed a mixed integer non-linear programming (MINLP) model, SeqMod, that sequentially clusters each snapshot of a dynamic network. The modularity metric is used to determine the quality of community structure of the current snapshot and the historical cost is accounted for by optimising the number of node pairs co-clustered at the previous time point that remain so in the current snapshot partition. Our method is tested on social networks of interactions among high school students, college students and members of the Brazilian Congress. We show that, for an adequate parameter setting, our algorithm detects the classes to which these students belong more accurately than partitioning each time step individually or partitioning the aggregated snapshots. Our method also detects drastic discontinuities in interaction patterns across network snapshots. Finally, we present comparative results with similar community detection methods for time-dependent networks from the literature. Overall, we illustrate the applicability of mathematical programming as a flexible, adaptable and systematic approach for these community detection problems. Contribution to the Topical Issue "Temporal Network Theory and Applications", edited by Petter Holme.
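The two quantities the MINLP model trades off can each be evaluated directly for a candidate partition. A minimal sketch of the modularity metric and the pair-preservation history cost (an evaluation sketch, not the SeqMod optimiser itself; `labels` are cluster assignments):

```python
import numpy as np

def modularity(adj, labels):
    """Newman modularity Q of a partition of an undirected network:
    Q = (1/2m) * sum_ij (A_ij - k_i k_j / 2m) * delta(c_i, c_j)."""
    m2 = adj.sum()                      # equals 2m for a symmetric adjacency
    k = adj.sum(axis=1)                 # node degrees
    same = np.equal.outer(labels, labels)
    return float(((adj - np.outer(k, k) / m2) * same).sum() / m2)

def history_cost(prev_labels, curr_labels):
    """Number of node pairs co-clustered at time t-1 that remain
    co-clustered at time t (the quantity SeqMod seeks to preserve)."""
    prev = np.equal.outer(prev_labels, prev_labels)
    curr = np.equal.outer(curr_labels, curr_labels)
    iu = np.triu_indices(len(prev_labels), k=1)   # unordered pairs only
    return int(np.sum(prev[iu] & curr[iu]))
```

In the sequential setting, each snapshot's partition would be chosen to maximise a weighted combination of these two scores.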
Extensions to the integral line-beam method for gamma-ray skyshine analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shultis, J.K.; Faw, R.E.
1995-08-01
A computationally simple method for estimating gamma-ray skyshine dose rates has been developed on the basis of the line-beam response function. Both Monte Carlo and point-kernel calculations that account for both annihilation and bremsstrahlung were used in the generation of line-beam response functions (LBRF) for gamma-ray energies between 10 and 100 MeV. The LBRF is approximated by a three-parameter formula. By combining results with those obtained in an earlier study for gamma energies below 10 MeV, LBRF values are readily and accurately evaluated for source energies between 0.02 and 100 MeV, for source-to-detector distances between 1 and 3000 m, and for beam angles as great as 180 degrees. Tables of the parameters for the approximate LBRF are presented. The new response functions are then applied to three simple skyshine geometries: an open silo geometry, an infinite wall, and a rectangular four-wall building. Results are compared to those of previous calculations and to benchmark measurements. A new approach is introduced to account for overhead shielding of the skyshine source and compared to the simplistic exponential-attenuation method used in earlier studies. The effect of the air-ground interface, usually neglected in gamma skyshine studies, is also examined and an empirical correction factor is introduced. Finally, a revised code based on the improved LBRF approximations and the treatment of the overhead shielding is presented, and results are shown for several benchmark problems.
A novel method for pediatric heart sound segmentation without using the ECG.
Sepehri, Amir A; Gharehbaghi, Arash; Dutoit, Thierry; Kocharian, Armen; Kiani, A
2010-07-01
In this paper, we propose a novel method for pediatric heart sounds segmentation by paying special attention to the physiological effects of respiration on pediatric heart sounds. The segmentation is accomplished in three steps. First, the envelope of a heart sounds signal is obtained with emphasis on the first heart sound (S(1)) and the second heart sound (S(2)) by using short time spectral energy and autoregressive (AR) parameters of the signal. Then, the basic heart sounds are extracted taking into account the repetitive and spectral characteristics of S(1) and S(2) sounds by using a Multi-Layer Perceptron (MLP) neural network classifier. In the final step, by considering the diastolic and systolic intervals variations due to the effect of a child's respiration, a complete and precise heart sounds end-pointing and segmentation is achieved. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
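The first stage, an envelope that emphasises S1 and S2, can be sketched with short-time energy alone (the paper additionally uses AR parameters and an MLP classifier for the later stages; window, hop and threshold values here are hypothetical):

```python
import numpy as np

def short_time_energy(signal, fs, win_ms=20.0, hop_ms=10.0):
    """Short-time energy envelope of a heart sound signal: the sum of
    squared samples in a sliding window, one value per hop."""
    win = int(fs * win_ms / 1000.0)
    hop = int(fs * hop_ms / 1000.0)
    env = []
    for start in range(0, len(signal) - win + 1, hop):
        frame = signal[start:start + win]
        env.append(float(np.sum(frame ** 2)))
    return np.array(env)

def pick_sound_frames(env, thresh_ratio=0.2):
    # frames whose energy exceeds a fraction of the peak are candidate
    # S1/S2 regions; the gaps between them are systolic/diastolic intervals
    return env > thresh_ratio * env.max()
```

Classifying which high-energy regions are S1 versus S2, and adapting the interval logic to respiration-induced variation, is where the paper's MLP and end-pointing stages would take over.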
A joint precoding scheme for indoor downlink multi-user MIMO VLC systems
NASA Astrophysics Data System (ADS)
Zhao, Qiong; Fan, Yangyu; Kang, Bochao
2017-11-01
In this study, we aim to improve the system performance and reduce the implementation complexity of precoding schemes for visible light communication (VLC) systems. By incorporating the power-method algorithm and the block diagonalization (BD) algorithm, we propose a joint precoding scheme for indoor downlink multi-user multi-input-multi-output (MU-MIMO) VLC systems. In this scheme, we first apply the BD algorithm to eliminate the co-channel interference (CCI) among users. Second, the power-method algorithm is used to search for the precoding weights of each user based on the optimality criterion of signal-to-interference-plus-noise ratio (SINR) maximization. Finally, the optical power restrictions of VLC systems are taken into account to constrain the precoding weight matrix. Comprehensive computer simulations in two scenarios indicate that the proposed scheme always has better bit error rate (BER) performance and lower computational complexity than the traditional scheme.
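The CCI-elimination step rests on standard block diagonalization: each user's precoder is confined to the null space of the other users' stacked channel matrices. The sketch below shows only that BD step (the power-method SINR search and the optical power constraints are omitted); the dimensions and the Gaussian channel model are invented for illustration.

```python
import numpy as np

def bd_precoders(H_list):
    """Block diagonalization: user k's precoder columns span the null
    space of the other users' stacked channels, so H_j @ W_k = 0 for
    every j != k (zero co-channel interference)."""
    K = len(H_list)
    precoders = []
    for k in range(K):
        H_others = np.vstack([H_list[j] for j in range(K) if j != k])
        _, s, Vh = np.linalg.svd(H_others)     # full SVD: Vh is Nt x Nt
        rank = int(np.sum(s > 1e-10))
        precoders.append(Vh[rank:].conj().T)   # null-space basis of H_others
    return precoders

rng = np.random.default_rng(0)
Nt, Nr, K = 8, 2, 3                # 8 LEDs, 2 photodiodes per user (invented)
H = [rng.standard_normal((Nr, Nt)) for _ in range(K)]
W = bd_precoders(H)
# residual inter-user interference after precoding
leak = max(np.abs(H[j] @ W[k]).max() for j in range(K) for k in range(K) if j != k)
```

With 3 users of 2 receive elements each, every user's null space here has dimension 8 − 4 = 4, so each precoder is an 8 × 4 matrix and the interference leakage is at machine-precision level.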
Fuzzy observer-based control for maximum power-point tracking of a photovoltaic system
NASA Astrophysics Data System (ADS)
Allouche, M.; Dahech, K.; Chaabane, M.; Mehdi, D.
2018-04-01
This paper presents a novel fuzzy control design method for maximum power-point tracking (MPPT) via a Takagi-Sugeno (TS) fuzzy model-based approach. A knowledge-based dynamic model of the PV system is first developed, leading to a TS representation by a simple convex polytopic transformation. Then, based on this exact fuzzy representation, an H∞ observer-based fuzzy controller is proposed to achieve MPPT even under varying climatic conditions. A specified TS reference model is designed to generate the optimum trajectory which must be tracked to ensure maximum power operation. The controller and observer gains are obtained in a one-step procedure by solving a set of linear matrix inequalities (LMIs). The proposed method has been compared with some classical MPPT techniques, taking into account convergence speed and tracking accuracy. Finally, various simulation and experimental tests have been carried out to illustrate the effectiveness of the proposed TS fuzzy MPPT strategy.
Minimum Fisher regularization of image reconstruction for infrared imaging bolometer on HL-2A
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, J. M.; Liu, Y.; Li, W.
2013-09-15
An infrared imaging bolometer diagnostic has been developed recently for the HL-2A tokamak to measure the temporal and spatial distribution of plasma radiation. Three-dimensional tomography, reduced to a two-dimensional problem by the assumption of toroidal symmetry of the plasma radiation, has been performed. A three-dimensional geometry matrix is calculated with the one-dimensional pencil-beam approximation. The solid angles viewed by the detector elements are taken into account in defining the chord brightness, and the local plasma emission is obtained by inverting the measured brightness with the minimum Fisher regularization method. A typical HL-2A plasma radiation model was chosen to optimize the regularization parameter on the criterion of generalized cross-validation. Finally, this method was applied to HL-2A experiments, demonstrating the plasma radiated power density distribution in limiter and divertor discharges.
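A minimum-Fisher-style inversion can be sketched as a Tikhonov solve whose smoothing operator is reweighted by the reciprocal of the previous emissivity estimate and iterated a few times. This is a generic 1-D sketch, not the HL-2A code: the difference operator, the weight floor, the iteration count, and the trivial test geometry matrix are all simplifying assumptions.

```python
import numpy as np

def min_fisher_invert(T, f, alpha, n_outer=5):
    """Iteratively reweighted Tikhonov inversion in the spirit of minimum
    Fisher regularisation: the smoothing operator is weighted by 1/g from
    the previous estimate, enforcing smoothness most where emission is low."""
    n = T.shape[1]
    g = np.ones(n)                          # flat initial emissivity
    D = (np.eye(n) - np.eye(n, k=1))[:-1]   # simple 1-D difference operator
    for _ in range(n_outer):
        w = 1.0 / np.maximum(g, 1e-6)       # Fisher-style weight ~ 1/g
        H = D.T @ (w[:-1, None] * D)        # weighted smoothing matrix
        g = np.linalg.solve(T.T @ T + alpha * H, T.T @ f)
        g = np.maximum(g, 0.0)              # keep emissivity non-negative
    return g

# trivial test geometry: identity "chord" matrix and a smooth true profile
T = np.eye(20)
g_true = 1.0 + np.sin(np.linspace(0.0, np.pi, 20)) ** 2
g = min_fisher_invert(T, T @ g_true, alpha=1e-4)
```

In a real tomography problem T would be the (ill-conditioned) geometry matrix built from the detector sight lines, and the regularization parameter alpha would be chosen by generalized cross-validation as the abstract describes.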
The assessment of personality disorders: implications for cognitive and behavior therapy.
Van Velzen, C J; Emmelkamp, P M
1996-08-01
This article reviews the comorbidity of personality disorders (PDs) and Axis I disorders and discusses implications for assessment and treatment. Pros and cons of various assessment methods are discussed. The co-occurrence of PDs with Axis I disorders is considerable; roughly half of patients with anxiety disorders, depressive disorders or eating disorders received a PD diagnosis. Comorbidity models are discussed and implications for assessment and treatment are provided. Regarding the impact of PDs on cognitive-behavioral treatment outcome for Axis I disorders, conflicting results are found due to differences in assessment methods, treatment strategies, and patient samples. It is argued that additional Axis I pathology should be taken into account when studying the impact of PDs on treatment outcome for the target Axis I disorders. Finally, it is argued that the interpersonal behavior of the PD patient and the therapeutic relationship deserve more attention in the assessment and treatment of patients with PDs.
Understanding dislocation mechanics at the mesoscale using phase field dislocation dynamics
Hunter, A.
2016-01-01
In this paper, we discuss the formulation, recent developments and findings obtained from a mesoscale mechanics technique called phase field dislocation dynamics (PFDD). We begin by presenting recent advancements made in modelling face-centred cubic materials, such as integration with atomic-scale simulations to account for partial dislocations. We discuss calculations that help in understanding grain size effects on transitions from full to partial dislocation-mediated slip behaviour and deformation twinning. Finally, we present recent extensions of the PFDD framework to alternative crystal structures, such as body-centred cubic metals, and two-phase materials, including free surfaces, voids and bi-metallic crystals. With several examples we demonstrate that the PFDD model is a powerful and versatile method that can bridge the length and time scales between atomistic and continuum-scale methods, providing a much needed understanding of deformation mechanisms in the mesoscale regime. PMID:27002063
Path suppression of strongly collapsing bubbles at finite and low Reynolds numbers.
Rechiman, Ludmila M; Dellavale, Damián; Bonetto, Fabián J
2013-06-01
We study, numerically and experimentally, three different methods to suppress the trajectories of strongly collapsing and sonoluminescent bubbles in a highly viscous sulfuric acid solution. A new numerical scheme based on the window method is proposed to account for the history force acting on a spherical bubble with variable radius. We could quantify the history force, which is not negligible in comparison with the primary Bjerknes force in this type of problem, and results are in agreement with the classical primary Bjerknes force trapping threshold analysis. Moreover, the present numerical implementation reproduces the spatial behavior associated with the positional and path instability of sonoluminescent argon bubbles in strongly gassed and highly degassed sulfuric acid solutions. Finally, the model allows us to demonstrate that spatially stationary bubbles driven by biharmonic excitation could be obtained with a different mode from the one used in previous reported experiments.
Feasibility study tool for semi-rigid joints design of high-rise buildings steel structures
NASA Astrophysics Data System (ADS)
Bagautdinov, Ruslan; Monastireva, Daria; Bodak, Irina; Potapova, Irina
2018-03-01
There are many ways to assess the final cost of high-rise building structures and to determine which of their variants is the most effective from different points of view. Research conducted by Jaakko Haapio at Tampere University of Technology aims to develop a method that allows the manufacturing and installation costs of steel structures to be determined already at the tender phase, while taking their details into account. This paper analyzes the Feature-Based Costing Method for skeletal steel structures proposed by Jaakko Haapio and derives the most appropriate ways to improve the tool and to adapt it to Russian conditions for high-rise building design. The presented tool can be useful not only for designers but also for steel structure manufacturing organizations, helping them utilize BIM technologies in process organization and factory control.
Dynamically induced cascading failures in power grids.
Schäfer, Benjamin; Witthaut, Dirk; Timme, Marc; Latora, Vito
2018-05-17
Reliable functioning of infrastructure networks is essential for our modern society. Cascading failures are the cause of most large-scale network outages. Although cascading failures often exhibit dynamical transients, the modeling of cascades has so far mainly focused on the analysis of sequences of steady states. In this article, we focus on electrical transmission networks and introduce a framework that takes into account both the event-based nature of cascades and the essentials of the network dynamics. We find that transients of the order of seconds in the flows of a power grid play a crucial role in the emergence of collective behaviors. We finally propose a forecasting method to identify critical lines and components in advance or during operation. Overall, our work highlights the relevance of dynamically induced failures on the synchronization dynamics of national power grids of different European countries and provides methods to predict and model cascading failures.
A mortar formulation including viscoelastic layers for vibration analysis
NASA Astrophysics Data System (ADS)
Paolini, Alexander; Kollmannsberger, Stefan; Rank, Ernst; Horger, Thomas; Wohlmuth, Barbara
2018-05-01
In order to reduce the transfer of sound and vibrations in structures such as timber buildings, thin elastomer layers can be embedded between their components. The influence of these elastomers on the response of the structures in the low frequency range can be determined accurately by using conforming hexahedral finite elements. Three-dimensional mesh generation, however, is yet a non-trivial task and mesh refinements which may be necessary at the junctions can cause a high computational effort. One remedy is to mesh the components independently from each other and to couple them using the mortar method. Further, the hexahedral mesh for the thin elastomer layer itself can be avoided by integrating its elastic behavior into the mortar formulation. The present paper extends this mortar formulation to take damping into account such that frequency response analyses can be performed more accurately. Finally, the proposed method is verified by numerical examples.
Multiple-reflection model of human skin and estimation of pigment concentrations
NASA Astrophysics Data System (ADS)
Ohtsuki, Rie; Tominaga, Shoji; Tanno, Osamu
2012-07-01
We describe a new method for estimating the concentrations of pigments in the human skin using surface spectral reflectance. We derive an equation that expresses the surface spectral reflectance of the human skin. First, we propose an optical model of the human skin that accounts for the stratum corneum. We also consider the difference between the scattering coefficient of the epidermis and that of the dermis. We then derive an equation by applying the Kubelka-Munk theory to an optical model of the human skin. Unlike a model developed in a recent study, the present equation considers pigments as well as multiple reflections and the thicknesses of the skin layers as factors that affect the color of the human skin. In two experiments, we estimate the pigment concentrations using the measured surface spectral reflectances. Finally, we confirm the feasibility of the concentrations estimated by the proposed method by evaluating the estimated pigment concentrations in the skin.
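The core of such layered-skin derivations is the Kubelka-Munk expression for the reflectance of a single scattering and absorbing layer over a backing; the full model stacks several such layers (stratum corneum, epidermis, dermis). Below is a single-layer sketch assuming the standard two-flux Kubelka-Munk form, not the paper's extended multiple-reflection equation; the coefficient values are invented for the limit checks.

```python
import numpy as np

def km_reflectance(K, S, d, Rg):
    """Two-flux Kubelka-Munk reflectance of a layer with absorption
    coefficient K, scattering coefficient S and thickness d, placed on a
    backing of reflectance Rg."""
    a = 1.0 + K / S
    b = np.sqrt(a * a - 1.0)
    coth = 1.0 / np.tanh(b * S * d)
    return (1.0 - Rg * (a - b * coth)) / (a - Rg + b * coth)

# sanity limits: a vanishing layer returns the backing reflectance,
# while a very thick layer approaches the semi-infinite value a - b
R_thin = km_reflectance(0.1, 1.0, 1e-9, 0.5)
R_thick = km_reflectance(0.1, 1.0, 100.0, 0.5)
```

Chaining this formula layer by layer (each layer's R becomes the backing Rg of the layer above) is what lets pigment concentrations, which enter through K, be fitted to a measured surface spectral reflectance.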
Green's functions in equilibrium and nonequilibrium from real-time bold-line Monte Carlo
NASA Astrophysics Data System (ADS)
Cohen, Guy; Gull, Emanuel; Reichman, David R.; Millis, Andrew J.
2014-03-01
Green's functions for the Anderson impurity model are obtained within a numerically exact formalism. We investigate the limits of analytical continuation for equilibrium systems, and show that with real time methods even sharp high-energy features can be reliably resolved. Continuing to an Anderson impurity in a junction, we evaluate two-time correlation functions, spectral properties, and transport properties, showing how the correspondence between the spectral function and the differential conductance breaks down when nonequilibrium effects are taken into account. Finally, a long-standing dispute regarding this model has involved the voltage splitting of the Kondo peak, an effect which was predicted over a decade ago by approximate analytical methods but never successfully confirmed by numerics. We settle the issue by demonstrating in an unbiased manner that this splitting indeed occurs. Yad Hanadiv-Rothschild Foundation, TG-DMR120085, TG-DMR130036, NSF CHE-1213247, NSF DMR 1006282, DOE ER 46932.
Spatial methods for deriving crop rotation history
NASA Astrophysics Data System (ADS)
Mueller-Warrant, George W.; Trippe, Kristin M.; Whittaker, Gerald W.; Anderson, Nicole P.; Sullivan, Clare S.
2017-08-01
Benefits of converting 11 years of remote sensing classification data into cropping history of agricultural fields included measuring lengths of rotation cycles and identifying specific sequences of intervening crops grown between final years of old grass seed stands and establishment of new ones. Spatial and non-spatial methods were complementary. Individual-year classification errors were often correctable in spreadsheet-based non-spatial analysis, whereas their presence in spatial data generally led to exclusion of fields from further analysis. Markov-model testing of non-spatial data revealed that year-to-year cropping sequences did not match average frequencies for transitions among crops grown in western Oregon, implying that rotations into new grass seed stands were influenced by growers' desires to achieve specific objectives. Moran's I spatial analysis of length of time between consecutive grass seed stands revealed that clustering of fields was relatively uncommon, with high and low value clusters only accounting for 7.1 and 6.2% of fields.
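The Markov-model test described above starts from a first-order transition matrix estimated from per-field crop sequences, which can then be compared against region-wide average transition frequencies. A minimal sketch, with invented crop labels and rotation histories:

```python
import numpy as np

def transition_matrix(sequences, states):
    """Row-stochastic first-order Markov transition matrix estimated from
    per-field crop sequences (one state per year)."""
    idx = {s: i for i, s in enumerate(states)}
    counts = np.zeros((len(states), len(states)))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):      # consecutive-year transitions
            counts[idx[a], idx[b]] += 1.0
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# invented per-field rotation histories (one crop label per year)
fields = [
    ["grass", "grass", "wheat", "grass"],
    ["wheat", "clover", "grass", "grass"],
    ["grass", "wheat", "clover", "wheat"],
]
P = transition_matrix(fields, ["grass", "wheat", "clover"])
```

A chi-squared comparison of observed transition counts against the counts expected from the regional average frequencies is one simple way to test whether sequences depart from the average behaviour, as the study found for grass seed rotations.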
26 CFR 1.481-4 - Adjustments taken into account with consent.
Code of Federal Regulations, 2010 CFR
2010-04-01
... effecting a change in method of accounting, including the taxable year or years in which the amount of the... Commissioner's consent to a change in method of accounting. (b) An agreement to the terms and conditions of a change in method of accounting under § 1.446-1(e)(3), including the taxable year or years prescribed by...
Technique and final cause in psychoanalysis: four ways of looking at one moment.
Lear, Jonathan
2009-12-01
This paper argues that if one considers just a single clinical moment there may be no principled way to choose among different approaches to psychoanalytic technique. One must in addition take into account what Aristotle called the final cause of psychoanalysis, which this paper argues is freedom. However, freedom is itself an open-ended concept with many aspects that need to be explored and developed from a psychoanalytic perspective. This paper considers one analytic moment from the perspectives of the techniques of Paul Gray, Hans Loewald, the contemporary Kleinians and Jacques Lacan. It argues that, if we are to evaluate these techniques, we must take into account the different conceptions of freedom they are trying to facilitate.
NASA Astrophysics Data System (ADS)
Vrolijk, Mark; Ogawa, Takayuki; Camanho, Arthur; Biasutti, Manfredi; Lorenz, David
2018-05-01
As a result of the ever-increasing demand to produce lighter vehicles, more and more advanced high-strength materials are used in the automotive industry. Focusing on sheet metal cold forming processes, these materials require high pressing forces and exhibit large springback after forming. Due to the high pressing forces, deformations occur in the tooling geometry, introducing dimensional inaccuracies in the blank and potentially affecting the final springback behavior. As a result, the tool deformations can have an impact on the final assembly or introduce cosmetic defects. Often several iterations are required in try-out to obtain the required tolerances, with costs of up to as much as 30% of the entire product development cost. To investigate sheet metal part feasibility and quality, CAE tools are widely used in the automotive industry. However, in current practice the influence of the tool deformations on the final part quality is generally neglected, and simulations are carried out with rigid tools to avoid drastically increased calculation times. If the tool deformation is analyzed through simulation, it is normally done at the end of the drawing process, when contact conditions are mapped onto the die structure and a static analysis is performed to check the deflections of the tool. This method, however, does not predict the influence of these deflections on the final quality of the part. In order to take tool deformations into account during drawing simulations, ESI has developed the ability to couple solvers efficiently so that the tool deformations can be included in the drawing simulation in real time without a large increase in simulation time compared to simulations with rigid tools. In this paper a study is presented which demonstrates the effect of tool deformations on the final part quality.
Code of Federal Regulations, 2010 CFR
2010-04-01
...) generally constitutes the use of an impermissible method of accounting, requiring a change to a permissible...)(i). (ii) Change in method of accounting; adoption of method of accounting—(A) In general. The annual... change to or from either of these methods is a change in method of accounting that requires the consent...
ERIC Educational Resources Information Center
Mayr, Robert; Howells, Gwennan; Lewis, Rhonwen
2015-01-01
This study provides the first systematic account of word-final cluster acquisition in bilingual children. To this end, forty Welsh-English bilingual children differing in language dominance and age (2;6 to 5;0) participated in a picture-naming task in English and Welsh. The results revealed significant age and dominance effects on cluster…
ERIC Educational Resources Information Center
The White House, 2008
2008-01-01
This Final Report prepared by the White House Office of Faith-Based and Community Initiatives offers an account of President George W. Bush's Faith-Based and Community Initiative (FBCI) to the faith-based and other community organizations (FBCOs) that have joined in the battles against poverty, disease, and other social ills. The report emphasizes…
Computer-Based Method for On-Line Service and Compact Storage of Data
NASA Astrophysics Data System (ADS)
Vasilyev, S. V.
A new method for compressing some types of astronomical data is proposed and discussed. The method is intended to provide astronomers with a more convenient technique for data retrieval from observational databases. The technique is based on the principal component method (PCM) of data analysis and the representation of data by characteristic vectors and eigenvalues. It allows a wide variety of data records to be represented by a relatively small number of parameters. The initial data can be restored simply by linear combinations of the obtained characteristic vectors. This approach can substantially reduce the volume of data stored in databases and transferred over a network. Our study shows that the resulting data volumes depend on the required accuracy of the representation and can be several times smaller than the initial ones. We note that using this method does not prevent applying widely used software for further data compression. As the PCM is able to represent data analytically, it can be used for proper adaptation of the requested information to the researcher's aims. Finally, taking into account that the method itself is a powerful tool for data smoothing, modelling and comparison, we find it has good prospects for use in computer databases. Some examples of PCM applications are described.
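The compression scheme described can be sketched with a plain SVD: store the mean, a few characteristic vectors, and one small coefficient vector per record, then restore records as linear combinations of those vectors. The synthetic "spectra" below are invented for illustration, and the component count is an assumption.

```python
import numpy as np

def pca_compress(X, n_components):
    """Keep the mean, the leading characteristic vectors, and a small
    coefficient (score) vector per record."""
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vh = np.linalg.svd(Xc, full_matrices=False)
    basis = Vh[:n_components]               # characteristic vectors
    return mean, basis, Xc @ basis.T        # scores: n_records x n_components

def pca_restore(mean, basis, scores):
    """Restore records as linear combinations of the characteristic vectors."""
    return mean + scores @ basis

# 200 synthetic "spectra": mixtures of 3 smooth modes plus weak noise
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 500)
modes = np.stack([np.sin((k + 1) * np.pi * x) for k in range(3)])
data = rng.standard_normal((200, 3)) @ modes + 0.01 * rng.standard_normal((200, 500))

mean, basis, scores = pca_compress(data, 3)
rel_err = np.linalg.norm(pca_restore(mean, basis, scores) - data) / np.linalg.norm(data)
```

Here 200 records of 500 samples each (100,000 floats) shrink to one 500-float mean, a 3 × 500 basis, and 200 × 3 scores (2,600 floats), with sub-percent reconstruction error because the signal truly lives in three modes; as the abstract notes, the achievable ratio depends on the accuracy required.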
Lucas, Patricia J; Baird, Janis; Arai, Lisa; Law, Catherine; Roberts, Helen M
2007-01-01
Background The inclusion of qualitative studies in systematic reviews poses methodological challenges. This paper presents worked examples of two methods of data synthesis (textual narrative and thematic), used in relation to one review, with the aim of enabling researchers to consider the strength of different approaches. Methods A systematic review of lay perspectives of infant size and growth was conducted, locating 19 studies (including both qualitative and quantitative). The data extracted from these were synthesised using both a textual narrative and a thematic synthesis. Results The processes of both methods are presented, showing a stepwise progression to the final synthesis. Both methods led us to similar conclusions about lay views toward infant size and growth. Differences between methods lie in the way they dealt with study quality and heterogeneity. Conclusion On the basis of the work reported here, we consider textual narrative and thematic synthesis have strengths and weaknesses in relation to different research questions. Thematic synthesis holds most potential for hypothesis generation, but may obscure heterogeneity and quality appraisal. Textual narrative synthesis is better able to describe the scope of existing research and account for the strength of evidence, but is less good at identifying commonality. PMID:17224044
26 CFR 1.466-2 - Special protective election for certain taxpayers.
Code of Federal Regulations, 2010 CFR
2010-04-01
... method of accounting reasonably similar to the method described in § 1.451-4, to elect to treat that method of accounting as a proper one for those prior years. There are several differences between this... protective election (if treated as deductible under the accounting method for such years), even though such...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pasquariello, Vito, E-mail: vito.pasquariello@tum.de; Hammerl, Georg; Örley, Felix
2016-02-15
We present a loosely coupled approach for the solution of fluid–structure interaction problems between a compressible flow and a deformable structure. The method is based on staggered Dirichlet–Neumann partitioning. The interface motion in the Eulerian frame is accounted for by a conservative cut-cell Immersed Boundary method. The present approach enables sub-cell resolution by considering individual cut-elements within a single fluid cell, which guarantees an accurate representation of the time-varying solid interface. The cut-cell procedure inevitably leads to non-matching interfaces, demanding special treatment. A Mortar method is chosen in order to obtain a conservative and consistent load transfer. We validate our method by investigating two-dimensional test cases comprising a shock-loaded rigid cylinder and a deformable panel. Moreover, the aeroelastic instability of a thin plate structure is studied with a focus on the prediction of flutter onset. Finally, we propose a three-dimensional fluid–structure interaction test case of a flexible inflated thin shell interacting with a shock wave involving large and complex structural deformations.
An improved predictive functional control method with application to PMSM systems
NASA Astrophysics Data System (ADS)
Li, Shihua; Liu, Huixian; Fu, Wenshu
2017-01-01
In the common design of prediction-model-based control methods, disturbances are usually not considered in the prediction model or in the control design. For control systems subject to large-amplitude or strong disturbances, it is difficult to precisely predict the future outputs with the conventional prediction model, and thus the desired optimal closed-loop performance is degraded to some extent. To this end, an improved predictive functional control (PFC) method is developed in this paper by embedding disturbance information into the system model. A composite prediction model is obtained by embedding the estimated value of the disturbances, where a disturbance observer (DOB) is employed to estimate the lumped disturbances. The influence of disturbances on the system is thus taken into account in the optimisation procedure. Finally, considering the speed control problem for permanent magnet synchronous motor (PMSM) servo systems, a control scheme based on the improved PFC method is designed to ensure optimal closed-loop performance even in the presence of disturbances. Simulation and experimental results based on a hardware platform are provided to confirm the effectiveness of the proposed algorithm.
Optimized effective potential method and application to static RPA correlation
NASA Astrophysics Data System (ADS)
Fukazawa, Taro; Akai, Hisazumi
2015-03-01
The optimized effective potential (OEP) method is a promising technique for calculating the ground state properties of a system within the density functional theory. However, it is not widely used as its computational cost is rather high and, also, some ambiguity remains in the theoretical framework. In order to overcome these problems, we first introduced a method that accelerates the OEP scheme in a static RPA-level correlation functional. Second, the Krieger-Li-Iafrate (KLI) approximation is exploited to solve the OEP equation. Although seemingly too crude, this approximation did not reduce the accuracy of the description of the magnetic transition metals (Fe, Co, and Ni) examined here, the magnetic properties of which are rather sensitive to correlation effects. Finally, we reformulated the OEP method to render it applicable to the direct RPA correlation functional and other, more precise, functionals. Emphasis is placed on the following three points of the discussion: (i) level-crossing at the Fermi surface is taken into account; (ii) eigenvalue variations in a Kohn-Sham functional are correctly treated; and (iii) the resultant OEP equation is different from those reported to date.
Eigencentrality based on dissimilarity measures reveals central nodes in complex networks
Alvarez-Socorro, A. J.; Herrera-Almarza, G. C.; González-Díaz, L. A.
2015-01-01
One of the most important problems in complex network theory is the location of the entities that are essential or play a main role within the network. For this purpose, the use of dissimilarity measures (specific to the theory of classification and data mining) to enrich the centrality measures in complex networks is proposed. The centrality method used is eigencentrality, which is based on the heuristic that the centrality of a node depends on how central the nodes in its immediate neighbourhood are (like the rich-get-richer phenomenon). This can be described by an eigenvalue problem; however, the information about the neighbourhood and the connections between neighbours is not taken into account, neglecting their relevance when one evaluates the centrality/importance/influence of a node. The contribution calculated by the dissimilarity measure is parameter independent, making the proposed method parameter independent as well. Finally, we perform a comparative study of our method versus other methods reported in the literature, obtaining more accurate and computationally less expensive results in most cases. PMID:26603652
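Plain eigencentrality, the baseline that the dissimilarity term enriches, can be computed by power iteration on the adjacency matrix. The sketch below shows only that baseline (the dissimilarity contribution is omitted); the identity shift is an implementation convenience to avoid oscillation on bipartite graphs, and the star-graph example is invented.

```python
import numpy as np

def eigencentrality(A, tol=1e-12, max_iter=10000):
    """Eigenvector centrality via power iteration. The identity shift
    A + I has the same eigenvectors as A but keeps the iteration from
    oscillating on bipartite graphs."""
    n = A.shape[0]
    M = A + np.eye(n)
    x = np.ones(n) / np.sqrt(n)
    for _ in range(max_iter):
        x_new = M @ x
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

# star graph: hub node 0 connected to four leaves
A = np.zeros((5, 5))
A[0, 1:] = A[1:, 0] = 1.0
c = eigencentrality(A)          # hub centrality is twice each leaf's
```

For the star graph the dominant eigenvector gives the hub exactly twice each leaf's score; the paper's proposal replaces the uniform adjacency weights with dissimilarity-based contributions while keeping this same eigenvalue-problem structure.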
5 CFR 511.612 - Finality of decision.
Code of Federal Regulations, 2010 CFR
2010-01-01
... mandatory and binding on all administrative, certifying, payroll, disbursing, and accounting officials of the Government. Agencies shall review their own classification decisions for identical, similar or...
Intermediate outcomes in randomized clinical trials: an introduction
2013-01-01
Background Intermediate outcomes are common and typically lie on the causal pathway to the final outcome. Some examples include noncompliance, missing data, and truncation-by-death-like outcomes such as pregnancy (e.g., when the trial intervention is given to non-pregnant women and the final outcome is preeclampsia, which is defined only for pregnant women). The intention-to-treat approach does not account properly for them, and more appropriate alternative approaches like principal stratification are not yet widely known. The purposes of this study are to inform researchers that the intention-to-treat approach unfortunately does not fit all problems we face in experimental research, to introduce the principal stratification approach for dealing with intermediate outcomes, and to illustrate its application to a trial of long-term calcium supplementation in women at high risk of preeclampsia. Methods Principal stratification and related concepts are introduced. Two ways of estimating causal effects are discussed, and their application is illustrated using the calcium trial, where noncompliance and pregnancy are considered as intermediate outcomes and preeclampsia is the main final outcome. Results The limitations of traditional approaches and methods for dealing with intermediate outcomes are demonstrated. The steps, assumptions and required calculations involved in the application of the principal stratification approach are discussed in detail for our calcium trial. Conclusions The intention-to-treat approach is a very sound one, but unfortunately it does not fit all problems we find in randomized clinical trials; this is particularly the case for intermediate outcomes, where alternative approaches like principal stratification should be considered. PMID:23510143
NASA Astrophysics Data System (ADS)
Xia, Y.; Yan, X.
2011-07-01
Nitrogen application rates (NARs) are often overestimated over the rice (Oryza sativa L.) growing season in the Taihu Lake region of China. This is largely because only individual nitrogen (N) losses are taken into account, or because the inventory flows of reactive N have been limited solely to the farming process when evaluating the environmental and economic effects of N fertilizer. Since N can permeate the ecosystem in numerous forms, commencing from the acquisition of raw material, through manufacturing and use, to final losses in the farming process (e.g., N2O, NH3, NO3- leaching, etc.), the costs incurred also accumulate and should be taken into account if economically optimal N rates (EONRs) are to be established. This study integrates the important material and energy flows resulting from N use into a rice agricultural inventory that constitutes the hub of the life-cycle assessment (LCA) method. An economic evaluation is used to determine an environmentally and economically sound NAR for the Taihu Lake region. The analysis reveals that the production and exploitation processes consume the largest proportion of resources, accounting for 77.2% and 22.3% of total resources, respectively. Regarding environmental impact, global warming creates the highest cost, with contributions stemming mostly from the fertilizer production and raw material exploitation processes. The farming process incurs the biggest environmental impact of the three environmental impact categories considered, whereas transportation has a much smaller effect. When resource consumption and environmental cost are taken into account, the marginal benefit of 1 kg of rice decreases from 2.4 to only 1.01 yuan. Accordingly, the current EONR has been evaluated at 185 kg N ha-1 for a single rice-growing season. This could enhance profitability as well as reduce the N losses associated with rice growing.
Khan, Anzalee; Keefe, Richard S. E.
2017-01-01
Background: Reduced emotional experience and expression are two domains of negative symptoms. The authors assessed these two domains of negative symptoms using previously developed Positive and Negative Syndrome Scale (PANSS) factors. Using an existing dataset, the authors predicted three different elements of everyday functioning (social, vocational, and everyday activities) with these two factors, as well as with performance on measures of functional capacity. Methods: A large (n=630) sample of people with schizophrenia was used as the data source of this study. Using regression analyses, the authors predicted the three different aspects of everyday functioning, first with just the two Positive and Negative Syndrome Scale factors and then with a global negative symptom factor. Finally, we added neurocognitive performance and functional capacity as predictors. Results: The Positive and Negative Syndrome Scale reduced emotional experience factor accounted for 21 percent of the variance in everyday social functioning, while reduced emotional expression accounted for no variance. The total Positive and Negative Syndrome Scale negative symptom factor accounted for less variance (19%) than the reduced experience factor alone. The Positive and Negative Syndrome Scale expression factor accounted for, at most, one percent of the variance in any of the functional outcomes, with or without the addition of other predictors. Implications: Reduced emotional experience measured with the Positive and Negative Syndrome Scale, often referred to as “avolition and anhedonia,” specifically predicted impairments in social outcomes. Further, reduced experience predicted social impairments better than emotional expression or the total Positive and Negative Syndrome Scale negative symptom factor. 
In this cross-sectional study, reduced emotional experience was specifically related with social outcomes, accounting for essentially no variance in work or everyday activities, and being the sole meaningful predictor of impairment in social outcomes. PMID:29410933
Expensing stock options: a fair-value approach.
Kaplan, Robert S; Palepu, Krishna G
2003-12-01
Now that companies such as General Electric and Citigroup have accepted the premise that employee stock options are an expense, the debate is shifting from whether to report options on income statements to how to report them. The authors present a new accounting mechanism that maintains the rationale underlying stock option expensing while addressing critics' concerns about measurement error and the lack of reconciliation to actual experience. A procedure they call fair-value expensing adjusts and eventually reconciles cost estimates made at grant date with subsequent changes in the value of the options, and it does so in a way that eliminates forecasting and measurement errors over time. The method captures the chief characteristic of stock option compensation--that employees receive part of their compensation in the form of a contingent claim on the value they are helping to produce. The mechanism involves creating entries on both the asset and equity sides of the balance sheet. On the asset side, companies create a prepaid-compensation account equal to the estimated cost of the options granted; on the owners'-equity side, they create a paid-in capital stock-option account for the same amount. The prepaid-compensation account is then expensed through the income statement, and the stock option account is adjusted on the balance sheet to reflect changes in the estimated fair value of the granted options. The amortization of prepaid compensation is added to the change in the option grant's value to provide the total reported expense of the options grant for the year. At the end of the vesting period, the company uses the fair value of the vested option to make a final adjustment on the income statement to reconcile any difference between that fair value and the total of the amounts already reported.
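The fair-value expensing mechanism described above can be illustrated with a short calculation. This is a minimal sketch assuming a straight-line amortization of the prepaid-compensation account; the function name and all figures are hypothetical.

```python
# Illustrative sketch of fair-value expensing: amortize the grant-date
# estimate, then add each year's change in the options' estimated fair value.
# The straight-line schedule and all numbers are hypothetical.

def fair_value_expense(grant_value, yearly_fair_values, vesting_years):
    """Return the reported option expense for each year of the vesting period.

    grant_value: estimated fair value of the options at grant date
    yearly_fair_values: estimated fair value at the end of each year
    vesting_years: length of the vesting period in years
    """
    # Expense the prepaid-compensation account evenly over vesting.
    amortization = grant_value / vesting_years
    expenses = []
    previous_value = grant_value
    for value in yearly_fair_values:
        # Adjust the paid-in capital stock-option account for revaluation.
        change_in_value = value - previous_value
        expenses.append(amortization + change_in_value)
        previous_value = value
    return expenses

# A 300-unit grant vesting over 3 years whose fair value ends at 360:
print(fair_value_expense(300.0, [330.0, 315.0, 360.0], 3))
# → [130.0, 85.0, 145.0]
```

Note the reconciliation property the authors emphasize: the yearly expenses sum to 360.0, the final fair value of the vested options, so forecasting errors in the grant-date estimate wash out over time.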
Petrie, Bruce; McAdam, Ewan J; Whelan, Mick J; Lester, John N; Cartmell, Elise
2013-04-01
An ultra-performance liquid chromatography method coupled to a triple quadrupole mass spectrometer was developed to determine nonylphenol and 15 of its possible precursors (nonylphenol ethoxylates and nonylphenol carboxylates) in aqueous and particulate wastewater matrices. Final effluent method detection limits for all compounds ranged from 1.4 to 17.4 ng L-1 in aqueous phases and from 1.4 to 39.4 ng g-1 in particulate phases of samples. The method was used to measure the performance of a trickling filter wastewater treatment works, which are not routinely monitored despite their extensive usage. Relatively good removal of nonylphenol was observed over the biological secondary treatment process, accounting for a 53 % reduction. However, only an 8 % reduction in total nonylphenolic compound load was observed. This was explained by a shortening in ethoxylate chain length, which produced shorter polyethoxylates of 1 to 4 ethoxylate units in final effluents. Modelling the possible impact of trickling filter discharge demonstrated that the nonylphenol environmental quality standard (EQS) may be exceeded in receiving waters with low dilution ratios. In addition, the EQS may be exceeded several kilometres downstream of the mixing zone due to the biotransformation of readily degradable short-chained precursors. This accentuates the need to monitor 'non-priority' parent compounds in wastewater treatment works, since monitoring nonylphenol alone can give a false indication of process performance. It is thus recommended that future process performance monitoring and optimisation be undertaken using the full suite of nonylphenolic moieties, which this method can facilitate.
Long-term Assessment of Carbon Budget of Terrestrial Ecosystems of Russia
NASA Astrophysics Data System (ADS)
Maksyutov, S. S.; Shvidenko, A.; Shchepashchenko, D.; Kraxner, F.
2016-12-01
We present a reanalysis of the Terrestrial Ecosystems Full Verified Carbon Account (FCA) for Russia for the period 2000-2012, based on the understanding that FCA is an underspecified (fuzzy) system. The methodology integrates the major approaches to carbon cycling assessment, followed by harmonization and mutual constraint of results obtained by independent methods. The landscape-ecosystem approach (LEA) was used for the systemic design of the account, with empirical assessment of the LEA based on a relevant combination of pool-based and flux-based methods. The information background of the LEA is presented in the form of an Integrated Land Information System, which includes a hybrid land cover (HLC) at a resolution of 150 m and relevant attributive databases. The HLC was developed from a multi-sensor remote sensing concept (using 12 different satellite products), geographically weighted regression, and Geo-Wiki validation (Schepaschenko et al. 2015). Carbon fluxes based on long-term measurements were corrected using seasonal climatic indicators of individual years. Uncertainties of intermediate and final results within the LEA are calculated by sequential algorithms. Results of the LEA were compared with those obtained by eddy covariance, process-based models of different types, inverse modeling, and GOSAT Level 4 products. Uncertainty of the final results was calculated using a Bayesian approach. Terrestrial vegetation of Russia served as a net carbon sink in the range of 480-650 Tg C yr-1 during the studied period, mostly at the expense of forests, with interannual variation of around 10-20% at the country scale. Regional variation was significantly higher, depending on the specifics of seasonal weather and the accompanying regimes of natural disturbances. The overall uncertainty of the FCA is estimated at 22-25% on an annual basis and 7-9% for the period average.
An empirical method to cluster objective nebulizer adherence data among adults with cystic fibrosis.
Hoo, Zhe H; Campbell, Michael J; Curley, Rachael; Wildman, Martin J
2017-01-01
The purpose of using preventative inhaled treatments in cystic fibrosis is to improve health outcomes; understanding the relationship between adherence to treatment and health outcome is therefore crucial. Temporal variability, as well as the absolute magnitude, of adherence affects health outcomes, and there is likely to be a threshold effect in the relationship between adherence and outcomes. We therefore propose a pragmatic algorithm-based clustering method for objective nebulizer adherence data to better understand this relationship and, potentially, to guide clinical decisions. The clustering method consists of three related steps. The first step is to split adherence data for the previous 12 months into four 3-monthly sections. The second step is to calculate the mean adherence for each section and to score the section based on that mean. The third step is to aggregate the individual scores to determine the final cluster ("cluster 1" = very low adherence; "cluster 2" = low adherence; "cluster 3" = moderate adherence; "cluster 4" = high adherence), taking into account the adherence trend as represented by the sequential individual scores. The individual scores should be displayed along with the final cluster for clinicians to fully understand the adherence data. We present three cases to illustrate the use of the proposed clustering method. This pragmatic clustering method can deal with adherence data of variable duration (ie, can be used even if 12 months' worth of data are unavailable) and can cluster adherence data in real time. Empirical support for some of the clustering parameters is not yet available, but the suggested classifications provide a structure to investigate parameters in future prospective datasets in which there are accurate measurements of nebulizer adherence and health outcomes.
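The three steps above can be sketched in code. This is a rough illustration only: the score thresholds and the mean-based aggregation rule are our assumptions (the abstract notes that empirical support for the clustering parameters is still pending), so the published algorithm may differ.

```python
# Sketch of the three-step adherence clustering algorithm.
# Thresholds (50/80/95 %) and the rounding-based aggregation are
# illustrative assumptions, not the validated parameters.

def score_section(mean_adherence):
    """Step 2: score one 3-month section from its mean adherence (%)."""
    if mean_adherence < 50:
        return 1  # very low
    if mean_adherence < 80:
        return 2  # low
    if mean_adherence < 95:
        return 3  # moderate
    return 4      # high

def cluster_adherence(daily_adherence):
    """Steps 1-3: split up to 12 months of daily adherence values (%)
    into 3-monthly sections, score each, and aggregate to a cluster."""
    section_len = 91  # roughly 3 months of daily records
    # Step 1: split into at most four 3-monthly sections.
    sections = [daily_adherence[i:i + section_len]
                for i in range(0, len(daily_adherence), section_len)][:4]
    # Step 2: score each non-empty section by its mean adherence.
    scores = [score_section(sum(s) / len(s)) for s in sections if s]
    # Step 3: aggregate individual scores into the final cluster; the
    # sequential scores themselves are kept so clinicians can see trends.
    final_cluster = round(sum(scores) / len(scores))
    return scores, final_cluster
```

Returning the per-section scores alongside the final cluster mirrors the authors' point that clinicians should see the sequence, not just the aggregate, and the slicing step shows how the method copes with fewer than 12 months of data.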
NASA Astrophysics Data System (ADS)
Ferreira, G. G.; Borges, E.; Braga, J. P.; Belchior, J. C.
Cluster structures are discussed in a nonrigid analysis, using a modified minima-search method based on stochastic processes and classical dynamics simulations. The relaxation process is taken into account by considering the internal motion of the Cl2 molecule. Cluster structures are compared with previous works in which the Cl2 molecule is assumed to be rigid. The interactions are modeled using pair potentials: the Aziz and Lennard-Jones potentials for the Ar-Ar interaction, a Morse potential for the Cl-Cl interaction, and a fully spherical/anisotropic Morse-Spline-van der Waals (MSV) potential for the Ar-Cl interaction. As expected, all calculated energies are lower than those obtained in the rigid approximation; one reason may be the nonrigid contributions of the internal motion of the Cl2 molecule. Finally, the growing processes in molecular clusters are discussed, and it is pointed out that the growing mechanism can be affected by the nonrigid initial conditions of smaller clusters such as ArnCl2 (n = 4 or 5), which are seeds for higher-order clusters.
Computer-Aided Construction at Designing Reinforced Concrete Columns as Per Ec
NASA Astrophysics Data System (ADS)
Zielińska, M.; Grębowski, K.
2015-02-01
The article presents the authors' computer program for designing and dimensioning columns in reinforced concrete structures, taking into account the phenomena affecting their behaviour and the design information given in EC. The program was developed in the C++ programming language. It guides the user through the particular dimensioning stages: from introducing basic data such as dimensions, concrete class, reinforcing steel class and the forces acting on the column, through calculating the creep coefficient (taking into account the impact of imperfection depending on the support scheme, the number of mating members at load shift, and the buckling length), to generating the interaction curve graph. The final result of the calculations provides two dependence points computed by the methods of nominal stiffness and nominal curvature. The location of these points relative to the limit curve determines whether the column load capacity is assured or has been exceeded. The study describes in detail the operation of the program as well as the methodology and phenomena which are indispensable when designing axially and eccentrically compressed members of reinforced concrete structures as per the European standards.
NASA Astrophysics Data System (ADS)
Samaras, Stefanos; Böckmann, Christine; Nicolae, Doina
2016-06-01
In this work we propose a two-step advancement of the Mie spherical-particle model to account for particle non-sphericity. First, a naturally two-dimensional (2D) generalized model (GM) is constructed, which in turn requires analogous 2D re-definitions of the microphysical parameters. We consider a spheroidal-particle approach in which the size distribution additionally depends on aspect ratio. Second, we incorporate the notion of a sphere-spheroid particle mixture (PM) weighted by a non-sphericity percentage. The efficiency of these two models is investigated by running synthetic data retrievals with two different regularization methods to account for the inherent instability of the inversion procedure. Our preliminary studies show that a retrieval with the PM model improves the fitting errors and the microphysical parameter retrieval, and that it is at least as efficient as the GM. While the general trend of the initial size distributions is captured in our numerical experiments, the reconstructions are subject to artifacts. Finally, our approach is applied to a measurement case, yielding acceptable results.
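The mixture idea can be written schematically. The notation below is ours, not the authors': a sketch in which the modeled optical data are a weighted sum of spherical and spheroidal contributions, with $w$ the non-sphericity percentage.

```latex
g(\lambda)
  \;=\; (1-w)\int K_{\mathrm{sph}}(r,\lambda)\, v(r)\, \mathrm{d}r
  \;+\; w \iint K_{\mathrm{spd}}(r,\varepsilon,\lambda)\,
        v(r,\varepsilon)\, \mathrm{d}\varepsilon\, \mathrm{d}r
```

Here $g(\lambda)$ is an optical measurement at wavelength $\lambda$, $K_{\mathrm{sph}}$ and $K_{\mathrm{spd}}$ are kernel functions for spheres and spheroids, $v$ is the size distribution (for spheroids additionally dependent on the aspect ratio $\varepsilon$, matching the 2D generalization described above), and $w \in [0,1]$ weights the spheroidal component.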
Numerical modeling of the debris flows runout
NASA Astrophysics Data System (ADS)
Federico, Francesco; Cesali, Chiara
2017-06-01
Rapid debris flows are among the most dangerous of all landslides. Due to their destructive potential, the runout length must be predicted in order to define hazardous areas and design safeguarding measures. To this purpose, a continuum model to predict debris flow mobility is developed. It is based on the well-known depth-integrated avalanche model proposed by Savage and Hutter (S&H model) to simulate flows of dry granular materials. The conservation of mass and momentum equations, describing the evolving geometry and the depth-averaged velocity distribution, are re-written to take into account the effects of interstitial pressures and the possible variation of mass along the motion due to erosion/deposition processes. Furthermore, the mechanical behaviour of the debris flow is described by a recently developed rheological law, which accounts for the dissipative effects of inelastic grain collisions and friction acting simultaneously within a `shear layer', typically at the base of the debris flow. The governing PDEs are solved by applying the finite difference method. The analysis of a documented case is finally carried out.
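For orientation, a schematic one-dimensional form of the depth-integrated balance equations underlying the S&H model, extended as described above, might read as follows. The symbols and the pore-pressure factor are our notational assumptions, not the authors' exact formulation.

```latex
% Mass balance with an erosion/deposition source term E:
\frac{\partial h}{\partial t}
  + \frac{\partial (h\,\bar{u})}{\partial x} = E

% Depth-averaged momentum balance on a slope of inclination \theta:
\frac{\partial (h\,\bar{u})}{\partial t}
  + \frac{\partial}{\partial x}\!\left( h\,\bar{u}^{2}
      + \tfrac{1}{2}\,k\, g\, h^{2}\cos\theta \right)
  = g\,h\sin\theta
  - (1-\lambda_{u})\, g\, h\cos\theta\,\tan\delta\;\operatorname{sgn}(\bar{u})
```

Here $h$ is the flow depth, $\bar{u}$ the depth-averaged velocity, $k$ an earth-pressure coefficient, $\delta$ the basal friction angle, and $\lambda_{u}$ a pore-pressure ratio through which interstitial pressures reduce the effective basal friction; $E$ carries the mass variation due to erosion or deposition along the path.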
Lissek, Shmuel
2012-04-01
The past two decades have brought dramatic progress in the neuroscience of anxiety due, in no small part, to animal findings specifying the neurobiology of Pavlovian fear-conditioning. Fortuitously, this neurally mapped process of fear learning is widely expressed in humans, and has been centrally implicated in the etiology of clinical anxiety. Fear-conditioning experiments in anxiety patients thus represent a unique opportunity to bring recent advances in animal neuroscience to bear on working, brain-based models of clinical anxiety. The current presentation details the neural basis and clinical relevance of fear conditioning, and highlights generalization of conditioned fear to stimuli resembling the conditioned danger cue as one of the more robust conditioning markers of clinical anxiety. Studies testing such generalization across a variety of anxiety disorders (panic, generalized anxiety disorder, and social anxiety disorder) with systematic methods developed in animals will next be presented. Finally, neural accounts of overgeneralization deriving from animal and human data will be described with emphasis given to implications for the neurobiology and treatment of clinical anxiety. © 2012 Wiley Periodicals, Inc.
Towards Risk Based Design for NASA's Missions
NASA Technical Reports Server (NTRS)
Tumer, Irem Y.; Barrientos, Francesca; Meshkat, Leila
2004-01-01
This paper describes the concept of Risk Based Design in the context of NASA s low volume, high cost missions. The concept of accounting for risk in the design lifecycle has been discussed and proposed under several research topics, including reliability, risk analysis, optimization, uncertainty, decision-based design, and robust design. This work aims to identify and develop methods to enable and automate a means to characterize and optimize risk, and use risk as a tradeable resource to make robust and reliable decisions, in the context of the uncertain and ambiguous stage of early conceptual design. This paper first presents a survey of the related topics explored in the design research community as they relate to risk based design. Then, a summary of the topics from the NASA-led Risk Colloquium is presented, followed by current efforts within NASA to account for risk in early design. Finally, a list of "risk elements", identified for early-phase conceptual design at NASA, is presented. The purpose is to lay the foundation and develop a roadmap for future work and collaborations for research to eliminate and mitigate these risk elements in early phase design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tesseyre, Y.
The study allowed development of an original measuring system for mobility, involving simultaneously a repulsive electrical field and a continuous gas flow. It made it possible to define a model to calculate the ionic transparency of grates, taking into account the electrical fields below and above them, ion mobility, gas flow speed and geometric transparency. Calculation of the electrical field proceeded in a plane-plane system, taking into account the space charge and diffusion; a graphic method was developed to determine the field, thus avoiding numerical integration of the diffusion equation. The tracings of the mobility spectra obtained in different gases made it possible to determine characteristic discrete mobility values comparable to those observed by other, more sophisticated systems for measuring mobilities, such as time-of-flight systems. Detection of pollutants at low concentrations in dry air was demonstrated. However, the presence of water vapor in the air forms agglomerates around the ions formed, reducing the resolution of the system and making it less applicable under normal atmospheric conditions.
Chang, Edward C; Yu, Elizabeth A; Kahle, Emma R; Du, Yifeng; Chang, Olivia D; Jilani, Zunaira; Yu, Tina; Hirsch, Jameson K
2017-10-01
We examined an additive and interactive model involving domestic partner violence (DPV) and hope in accounting for suicidal behaviors in a sample of 98 community adults. Results showed that DPV accounted for a significant amount of variance in suicidal behaviors. Hope further augmented the prediction model and accounted for suicidal behaviors beyond DPV. Finally, we found that DPV significantly interacted with both dimensions of hope to further account for additional variance in suicidal behaviors above and beyond the independent effects of DPV and hope. Implications for the role of hope in the relationship between DPV and suicidal behaviors are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nauss, R.
1994-12-31
In this review we describe three integer programming applications involving fixed income securities. A bond trading model is presented that features a number of possible different objectives and collections of constraints, including future interest rate scenarios. A mortgage-backed security (MBS) financing model that accounts for potential defaults in the MBS is also presented. Finally, we describe an approach to allocate collections of bank securities into three categories: hold to maturity, available for sale, or trading. Placement of securities in these categories affects the capital, net income, and liquidity of a bank according to new accounting rules promulgated by the Financial Accounting Standards Board.
26 CFR 1.446-1 - General rule for methods of accounting.
Code of Federal Regulations, 2013 CFR
2013-04-01
..., as a cost taken into account in computing cost of goods sold, as a cost allocable to a long-term...; section 460, relating to the long-term contract methods. In addition, special methods of accounting for... regulations under sections 471 and 472), a change from the cash or accrual method to a long-term contract...
Diethylstilbestrol in fish tissue determined through subcritical fluid extraction and with GC-MS
NASA Astrophysics Data System (ADS)
Qiao, Qinghui; Shi, Nianrong; Feng, Xiaomei; Lu, Jie; Han, Yuqian; Xue, Changhu
2016-06-01
As a key point in sex hormone analysis, sample pre-treatment technology has attracted scientists' attention all over the world, and sample preparation has been moving toward faster and more efficient technologies. Taking economic and environmental concerns into account, subcritical fluid extraction stands out as a faster and more efficient sample pre-treatment technology. It can overcome the shortcomings of supercritical fluids and achieve higher extraction efficiency at relatively low pressures and temperatures. In this work, a simple, sensitive and efficient method was developed for the determination of diethylstilbestrol (DES) in fish tissue using subcritical 1,1,1,2-tetrafluoroethane (R134a) extraction in combination with gas chromatography-mass spectrometry (GC-MS). After extraction, freezing-lipid filtration was used to remove fatty co-extracts. Further purification was performed with C18 and NH2 solid-phase extraction (SPE). Finally, the analyte was derivatized with heptafluorobutyric anhydride (HFBA), followed by GC-MS analysis. Response surface methodology (RSM) was employed to optimize the extraction conditions, which were as follows: extraction pressure, 4.3 MPa; extraction temperature, 26°C; co-solvent volume, 4.7 mL. Under these conditions, at spiked levels of 1, 5 and 10 μg kg-1, the mean recovery of DES exceeded 90% with relative standard deviations (RSDs) below 10%. The developed method was successfully used to analyze real samples.
Simultaneous isoform discovery and quantification from RNA-seq.
Hiller, David; Wong, Wing Hung
2013-05-01
RNA sequencing is a recent technology which has seen an explosion of methods addressing all levels of analysis, from read mapping to transcript assembly to differential expression modeling. In particular, the discovery of isoforms at the transcript assembly stage is a complex problem, and current approaches suffer from various limitations. For instance, many approaches use graphs to construct a minimal set of isoforms covering the observed reads, then run a separate algorithm to quantify the isoforms, which can result in a loss of power. Current methods also use ad hoc solutions to deal with the vast number of possible isoforms which can be constructed from a given set of reads. Finally, while the need to take into account features such as read pairing and the sampling rate of reads has been acknowledged, most existing methods do not seamlessly integrate these features into the model. We present Montebello, an integrated statistical approach which performs simultaneous isoform discovery and quantification by using a Monte Carlo simulation to find the most likely isoform composition leading to a set of observed reads. We compare Montebello to Cufflinks, a popular isoform discovery approach, on a simulated data set and on 46.3 million brain reads from an Illumina tissue panel. On this data set Montebello appears to offer a modest improvement over Cufflinks when considering discovery and parsimony metrics. In addition, Montebello mitigates specific difficulties inherent in the Cufflinks approach. Finally, Montebello can be fine-tuned depending on the type of solution desired.
Dipnall, Joanna F; Pasco, Julie A; Berk, Michael; Williams, Lana J; Dodd, Seetal; Jacka, Felice N; Meyer, Denny
2016-01-01
Atheoretical large-scale data mining techniques using machine learning algorithms have promise in the analysis of large epidemiological datasets. This study illustrates the use of a hybrid methodology for variable selection that took account of missing data and complex survey design to identify key biomarkers associated with depression from a large epidemiological study. The study used a three-step methodology, amalgamating multiple imputation, a machine learning boosted regression algorithm and logistic regression, to identify key biomarkers associated with depression in the National Health and Nutrition Examination Survey (2009-2010). Depression was measured using the Patient Health Questionnaire-9, and 67 biomarkers were analysed. Covariates in this study included gender, age, race, smoking, food security, Poverty Income Ratio, Body Mass Index, physical activity, alcohol use, medical conditions and medications. The final imputed, weighted multiple logistic regression model included possible confounders and moderators. After the creation of 20 imputation data sets from multiple chained regression sequences, machine learning boosted regression initially identified 21 biomarkers associated with depression. Using traditional logistic regression methods, including controlling for possible confounders and moderators, a final set of three biomarkers was selected. The final three biomarkers from the novel hybrid variable selection methodology were red cell distribution width (OR 1.15; 95% CI 1.01, 1.30), serum glucose (OR 1.01; 95% CI 1.00, 1.01) and total bilirubin (OR 0.12; 95% CI 0.05, 0.28). Significant interactions were found between total bilirubin and the Mexican American/Hispanic group (p = 0.016), and current smokers (p<0.001).
The systematic use of a hybrid methodology for variable selection, fusing data mining techniques using a machine learning algorithm with traditional statistical modelling, accounted for missing data and complex survey sampling methodology and was demonstrated to be a useful tool for detecting three biomarkers associated with depression for future hypothesis generation: red cell distribution width, serum glucose and total bilirubin.
Student Preferences for Instructional Methods in an Accounting Curriculum
ERIC Educational Resources Information Center
Abeysekera, Indra
2015-01-01
Student preferences among instructional methods are largely unexplored across the accounting curriculum. The algorithmic rigor of courses and the societal culture can influence these preferences. This study explored students' preferences of instructional methods for learning in six courses of the accounting curriculum that differ in algorithmic…
The idea that the methods and models of accounting and bookkeeping might be useful in describing, understanding, and managing environmental systems is implicit in the title of H.T. Odum's book, Environmental Accounting: Emergy and Environmental Decision Making. In this paper, I ...
NASA Astrophysics Data System (ADS)
Zabelskii, D. V.; Vlasov, A. V.; Ryzhykau, Yu L.; Murugova, T. N.; Brennich, M.; Soloviov, D. V.; Ivankov, O. I.; Borshchevskiy, V. I.; Mishin, A. V.; Rogachev, A. V.; Round, A.; Dencher, N. A.; Büldt, G.; Gordeliy, V. I.; Kuklin, A. I.
2018-03-01
The method of small-angle scattering (SAS) is widely used in biophysical research on proteins in aqueous solution. Obtaining low-resolution structures of proteins remains highly valuable despite advances in high-resolution methods such as X-ray diffraction and cryo-EM. SAS offers the unique possibility of obtaining structural information under conditions close to those of functional assays, i.e. in solution, without additives, in the mg/mL concentration range. The SAS method has a long history, but many uncertainties related to data treatment remain. We compared 1D SAS profiles of apoferritin obtained by X-ray diffraction (XRD) and SAS methods. It is shown that SAS curves computed from the X-ray crystallographic structure of apoferritin differ more significantly than might be expected from the resolution of the SAS instrument. The extrapolation to infinite dilution (EID) method does not sufficiently exclude dimerization and oligomerization effects and therefore cannot guarantee the total absence of dimer contributions in the final SAS curve. In this study, we show that the EID SAXS, EID SANS and SEC-SAXS methods give complementary results, and that when used together they allow the most accurate results and the highest confidence to be obtained from SAS data analysis of proteins.
Ducrot, Virginie; Usseglio-Polatera, Philippe; Péry, T Alexandre R R; Mouthon, Jacques; Lafont, Michel; Roger, Marie-Claude; Garric, Jeanne; Férard, Jean-François
2005-09-01
An original species-selection method for the building of test batteries is presented. This method is based on the statistical analysis of the biological and ecological trait patterns of species. It has been applied to build a macroinvertebrate test battery for the assessment of sediment toxicity, which efficiently describes the diversity of benthic macroinvertebrate biological responses to toxicants in a large European lowland river. First, 109 potential representatives of benthic communities of European lowland rivers were selected from a list of 479 taxa, considering 11 biological traits accounting for the main routes of exposure to a sediment-bound toxicant and eight ecological traits providing an adequate description of habitat characteristics used by the taxa. Second, their biological and ecological trait patterns were compared using coinertia analysis. This comparison allowed the clustering of taxa into groups of organisms that exhibited similar life-history characteristics, physiological and behavioral features, and similar habitat use. Groups exhibited various sizes (7-35 taxa), taxonomic compositions, and biological and ecological features. Main differences among group characteristics concerned morphology, substrate preferendum and habitat utilization, nutritional features, maximal size, and life-history strategy. Third, the best representatives of the mean biological and ecological characteristics of each group were included in the test battery. The final selection was composed of Chironomus riparius (Insecta: Diptera), Branchiura sowerbyi (Oligochaeta: Tubificidae), Lumbriculus variegatus (Oligochaeta: Lumbriculidae), Valvata piscinalis (Gastropoda: Valvatidae), and Sericostoma personatum (Trichoptera: Sericostomatidae). This approach permitted the biological and ecological variety of the battery to be maximized. 
Because biological and ecological traits of taxa determine species sensitivity, such maximization should permit the battery to better account for the sensitivity range within a community.
Corbel, Michael J; Das, Rose Gaines; Lei, Dianliang; Xing, Dorothy K L; Horiuchi, Yoshinobu; Dobbelaer, Roland
2008-04-07
This report reflects the discussion and conclusions of a WHO group of experts from National Regulatory Authorities (NRAs), National Control Laboratories (NCLs), vaccine industries and other relevant institutions involved in the standardization and control of diphtheria, tetanus and pertussis (DTP) vaccines, held on 20-21 July 2006 and 28-30 March 2007 in Geneva, Switzerland, for the revision of the WHO Manual for quality control of DTP vaccines. Taking into account recent developments and standardization in quality control methods and the revision of the WHO recommendations for D, T and P vaccines, a need to update the manual was recognized. In these two meetings, the current quality control methods for potency, safety and identity testing of DTP vaccines, and the statistical analysis of data, were reviewed. Based on the WHO recommendations and recent validation of testing methods, the content of the current manual was reviewed and discussed. The group agreed that the principles to be observed in selecting methods included identifying those critical for assuring safety, efficacy and quality and those consistent with WHO recommendations/requirements. Methods that are well recognized but not yet included in the current Recommendations should also be taken into account, including in vivo and/or in vitro methods for determining potency, safety testing and identity. The statistical analysis of the data should be revised and updated. It was noted that mouse-based assays for toxoid potency are still quite widely used, and it is desirable to establish appropriate standards for these so that the results can be related to the standard guinea pig assays. The working group met again to review the first drafts and to contribute further suggestions and amendments to the work of the drafting groups. The revised manual was to be finalized and published by WHO.
ERIC Educational Resources Information Center
Torrance, Harry
2018-01-01
There are sound educational and examining reasons for the use of coursework assessment and practical assessment of student work by teachers in schools for purposes of reporting examination grades. Coursework and practical work test a range of curriculum goals different from those tested by final papers, and increase the validity and reliability of the result.…
Listening to Student Views on the Transition from Work Placement to the Final Year
ERIC Educational Resources Information Center
Anderson, Pamela; Novakovic, Yvonne
2017-01-01
This paper addresses a gap in the literature on student work placements, specifically the challenges of returning to final-year study after a year out. We focus on students in an Accountancy and Finance Department at one UK University who alerted us to the ways in which they struggled during the transition back to full-time study. Their accounts…
The politics of accountability for school curriculum: An Australian case study
NASA Astrophysics Data System (ADS)
Smithson, Alan
1987-03-01
This normative-descriptive case study of accountability for state school curriculum in South Australia has the following objectives. First, to seek to draw a distinction between accountability and responsibility: terms which have been confused by two South Australian Directors-General of Education (position akin to C.E.O. in the U.K. and Superintendent in the U.S.A.) with important consequences. Second, to present a model of accountability for state school curriculum, by which accountability for such curriculum may be judged democratic or non-democratic, and against which accountability for curriculum in South Australian state schools will be gauged. Third, to show that whilst the South Australian school system exhibits a large measure of bureaucratic or technocratic accountability for curriculum, there is no effective democratic accountability for curriculum, and to indicate a remedy for this situation. Finally, to point out the wider significance of the South Australian case study, and suggest that democracies currently re-structuring their educational systems would do well to keep the need for democratic accountability foremost in mind.
7 CFR 1781.21 - Borrower accounting methods, management, reporting, and audits.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 12 2010-01-01 2010-01-01 false Borrower accounting methods, management, reporting... DEVELOPMENT (RCD) LOANS AND WATERSHED (WS) LOANS AND ADVANCES § 1781.21 Borrower accounting methods, management, reporting, and audits. These activities will be handled in accordance with the provisions of...
7 CFR 1767.13 - Departures from the prescribed RUS Uniform System of Accounts.
Code of Federal Regulations, 2010 CFR
2010-01-01
... accounting methodologies and principles that depart from the provisions herein; or (2) File with such... borrower's rates, based upon accounting methods and principles inconsistent with the provisions of this... accounting methods or principles for the borrower that are inconsistent with the provisions of this part, the...
26 CFR 1.446-1 - General rule for methods of accounting.
Code of Federal Regulations, 2010 CFR
2010-04-01
... also the accounting treatment of any item. Examples of such over-all methods are the cash receipts and... special items include the accounting treatment prescribed for research and experimental expenditures, soil... books of account and on his return, as for example, a reconciliation of any differences between such...
Accountable Information Flow for Java-Based Web Applications
2010-01-01
[Figure 2: The Swift architecture (Web browser, HTTP, Web server, Java servlet framework, Swift server runtime, server runtime library).] On the server, the Java application code links against Swift's server-side run-time library, which in turn sits on top of the standard Java servlet framework. AFRL-RI-RS-TR-2010-9, Final Technical Report, January 2010: Accountable Information Flow for Java-Based Web Applications.
Proposed reporting model update creates dialogue between FASB and not-for-profits.
Mosrie, Norman C
2016-04-01
Seeing a need to refresh the current guidelines, the Financial Accounting Standards Board (FASB) proposed an update to the financial accounting and reporting model for not-for-profit entities. In a response to solicited feedback, the board is now revisiting its proposed update and has set forth a plan to finalize its new guidelines. The FASB continues to solicit and respond to feedback as the process progresses.
48 CFR 9904.420-60 - Illustrations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... typing its overhead cost pool. In submitting a proposal, the engineering department assigns several... established accounting practice does not charge the cost of typing directly to final cost objectives, the...
... frames were skipping. It was disorienting," the Pittsburgh accountant recalls. The dizziness didn't go away. Finally ... treatment. "Happy to make a small contribution to public health," he agreed to participate in a clinical ...
Preston, Robyn; Larkins, Sarah; Taylor, Judy; Judd, Jenni
2016-10-01
This paper addresses the question of how social accountability is conceptualised by staff, students and community members associated with four medical schools aspiring to be socially accountable in two countries. Using a multiple case study approach this research explored how contextual issues have influenced social accountability at four medical schools: two in Australia and two in the Philippines. This paper reports on how research participants understood social accountability. Seventy-five participants were interviewed including staff, students, health sector representatives and community members. Field notes were taken and a documentary analysis was completed. Overall there were three common understandings. Socially accountable medical education was about meeting workforce, community and health needs. Social accountability was also determined by the nature and content of programs the school implemented or how it operated. Finally, social accountability was deemed a personal responsibility. The broad consensus masked the divergent perspectives people held within each school. The assumption that social accountability is universally understood could not be confirmed from these data. To strengthen social accountability it is useful to learn from these institutions' experiences to contribute to the development of the theory and practice of activities within socially accountable medical schools.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
This volume and its appendixes supplement the Advisory Committee's final report by reporting how we went about looking for information concerning human radiation experiments and intentional releases, a description of what we found and where we found it, and a finding aid for the information that we collected. This volume begins with an overview of federal records, including general descriptions of the types of records that have been useful and how the federal government handles these records. This is followed by an agency-by-agency account of the discovery process and descriptions of the records reviewed, together with instructions on how to obtain further information from those agencies. There is also a description of other sources of information that have been important, including institutional records, print resources, and nonprint media and interviews. The third part contains brief accounts of ACHRE's two major contemporary survey projects (these are described in greater detail in the final report and another supplemental volume) and other research activities. The final section describes how the ACHRE information collections were managed and the records that ACHRE created in the course of its work; this constitutes a general finding aid for the materials deposited with the National Archives. The appendices provide brief references to federal records reviewed, descriptions of the accessions that comprise the ACHRE Research Document Collection, and descriptions of the documents selected for individual treatment. Also included are an account of the documentation available for ACHRE meetings, brief abstracts of the almost 4,000 experiments individually described by ACHRE staff, a full bibliography of secondary sources used, and other information.
NASA Astrophysics Data System (ADS)
Vogelgesang, Jonas; Schorr, Christian
2016-12-01
We present a semi-discrete Landweber-Kaczmarz method for solving linear ill-posed problems and its application to cone beam tomography and laminography. Using a basis-function-type discretization in the image domain, we derive a semi-discrete model of the underlying scanning system. Based on this model, the proposed method provides an approximate solution of the reconstruction problem, i.e. reconstructing the density function of a given object from its projections, in suitable subspaces equipped with basis-function-dependent weights. This approach intuitively allows the incorporation of additional information about the inspected object, leading to a more accurate model of the X-rays through the object. Physical conditions of the scanning geometry, like flat detectors in computerized tomography as used in non-destructive testing applications, as well as non-regular scanning curves, e.g. those appearing in computed laminography (CL) applications, are also directly taken into account during the modeling process. Finally, numerical experiments of a typical CL application in three dimensions are provided to verify the proposed method. The introduction of geometric prior information leads to significantly increased image quality and superior reconstructions compared to standard iterative methods.
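The cyclic projection at the heart of a Kaczmarz-type iteration can be sketched in a few lines. The toy below uses an illustrative, well-posed 2x2 system and omits the semi-discrete basis-function weighting described in the abstract; it shows only the row-wise update, not the authors' full reconstruction scheme.

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=200, relax=1.0):
    """Cyclic Kaczmarz sweeps for A x = b.

    Each sweep visits every row a_i of A and projects the iterate
    toward the hyperplane <a_i, x> = b_i:
        x <- x + relax * (b_i - <a_i, x>) / ||a_i||^2 * a_i
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            x = x + relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# Tiny consistent system standing in for a discretized projection operator.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 4.0])
x = kaczmarz(A, b)
# For a consistent system the sweeps converge to the exact solution.
```

In the paper's setting the rows correspond to weighted ray sums and the update runs in subspaces with basis-function-dependent weights; the contraction behaviour per sweep is what makes the method usable for large tomographic systems.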
On-Board Real-Time Optimization Control for Turbo-Fan Engine Life Extending
NASA Astrophysics Data System (ADS)
Zheng, Qiangang; Zhang, Haibo; Miao, Lizhen; Sun, Fengyong
2017-11-01
A real-time optimization control method is proposed to extend turbo-fan engine service life. This real-time optimization control is based on an on-board engine model devised with MRR-LSSVR (a multi-input multi-output recursive reduced least squares support vector regression method). To solve the optimization problem, an FSQP (feasible sequential quadratic programming) algorithm is utilized. Thermal mechanical fatigue is taken into account during the optimization process. Furthermore, to describe engine life decay, a thermal mechanical fatigue model of the engine acceleration process is established. The optimization objective function not only contains a sub-item that obtains a fast engine response, but also includes a sub-item for the total mechanical strain range, which is positively related to engine fatigue life. Finally, simulations of the conventional optimization control, which considers only engine acceleration performance, and of the proposed optimization method have been conducted. The simulations demonstrate that the times taken by the two control methods to go from idle to 99.5 % of maximum power are equal. However, engine life using the proposed optimization method is increased by 36.17 % compared with that under conventional optimization control.
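The trade-off described above, a response-speed sub-item plus a strain-range sub-item in one objective, can be sketched as a small weighted optimization. The surrogate functions `accel_time` and `strain_range`, the weight `W_LIFE`, and the use of SciPy's SLSQP as a stand-in for the paper's FSQP solver are all assumptions for illustration, not the paper's engine model.

```python
from scipy.optimize import minimize

# Hypothetical surrogates for the on-board engine model: a faster
# fuel-flow ramp rate u shortens the acceleration but enlarges the
# temperature swing, and with it the total mechanical strain range.
def accel_time(u):            # response-speed sub-item (hypothetical)
    return 10.0 / u

def strain_range(u):          # fatigue-life sub-item (hypothetical)
    return 0.02 * u ** 2

W_LIFE = 1.0                  # weight trading response against fatigue life

def objective(x):
    u = x[0]
    return accel_time(u) + W_LIFE * strain_range(u)

# SLSQP as a readily available stand-in for the FSQP algorithm.
res = minimize(objective, x0=[1.0], bounds=[(0.5, 10.0)], method="SLSQP")
u_opt = res.x[0]              # optimal ramp rate under this toy model
```

For these toy surrogates the optimum satisfies u^3 = 250, i.e. a ramp rate slower than the pure-performance bound, which is the qualitative effect the abstract reports.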
Lucas, Patricia J; Baird, Janis; Arai, Lisa; Law, Catherine; Roberts, Helen M
2007-01-15
The inclusion of qualitative studies in systematic reviews poses methodological challenges. This paper presents worked examples of two methods of data synthesis (textual narrative and thematic), used in relation to one review, with the aim of enabling researchers to consider the strength of different approaches. A systematic review of lay perspectives of infant size and growth was conducted, locating 19 studies (including both qualitative and quantitative). The data extracted from these were synthesised using both a textual narrative and a thematic synthesis. The processes of both methods are presented, showing a stepwise progression to the final synthesis. Both methods led us to similar conclusions about lay views toward infant size and growth. Differences between methods lie in the way they dealt with study quality and heterogeneity. On the basis of the work reported here, we consider textual narrative and thematic synthesis have strengths and weaknesses in relation to different research questions. Thematic synthesis holds most potential for hypothesis generation, but may obscure heterogeneity and quality appraisal. Textual narrative synthesis is better able to describe the scope of existing research and account for the strength of evidence, but is less good at identifying commonality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faddegon, B.A.; Villarreal-Barajas, J.E.; Mt. Diablo Regional Cancer Center, 2450 East Street, Concord, California
2005-11-15
The Final Aperture Superposition Technique (FAST) is described and applied to accurate, near-instantaneous calculation of the relative output factor (ROF) and central axis percentage depth dose curve (PDD) for clinical electron beams used in radiotherapy. FAST is based on precalculation of dose at select points for the two extreme situations of a fully open final aperture and a final aperture with no opening (fully shielded). This technique is different from conventional superposition of dose deposition kernels: the precalculated dose is differential in position of the electron or photon at the downstream surface of the insert. The calculation for a particular aperture (x-ray jaws or MLC, insert in electron applicator) is done with superposition of the precalculated dose data, using the open-field data over the open part of the aperture and the fully shielded data over the remainder. The calculation takes explicit account of all interactions in the shielded region of the aperture except the collimator effect: particles that pass from the open part into the shielded part, or vice versa. For the clinical demonstration, FAST was compared to full Monte Carlo simulation of 10x10, 2.5x2.5, and 2x8 cm{sup 2} inserts. Dose was calculated to 0.5% precision in 0.4x0.4x0.2 cm{sup 3} voxels, spaced at 0.2 cm depth intervals along the central axis, using detailed Monte Carlo simulation of the treatment head of a commercial linear accelerator for six different electron beams with energies of 6-21 MeV. Each simulation took several hours on a personal computer with a 1.7 GHz processor. The calculation for the individual inserts, done with superposition, was completed in under a second on the same PC. Since simulations for the precalculation are only performed once, higher precision and resolution can be obtained without increasing the calculation time for individual inserts.
Fully shielded contributions were largest for small fields and high beam energy, at the surface, reaching a maximum of 5.6% at 21 MeV. Contributions from the collimator effect were largest for the large field size, high beam energy, and shallow depths, reaching a maximum of 4.7% at 21 MeV. Both shielding contributions and the collimator effect need to be taken into account to achieve an accuracy of 2%. FAST takes explicit account of the shielding contributions. With the collimator effect set to that of the largest field in the FAST calculation, the difference in dose on the central axis (product of ROF and PDD) between FAST and full simulation was generally under 2%. The maximum difference of 2.5% exceeded the statistical precision of the calculation by four standard deviations. This occurred at 18 MeV for the 2.5x2.5 cm{sup 2} field. The differences are due to the method used to account for the collimator effect.
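The superposition step itself, open-field data over the open part of the aperture and fully shielded data over the remainder, reduces to a masked sum over the aperture plane. The toy below, with random stand-in "precalculated" arrays, only sketches that bookkeeping; the numbers are not beam data.

```python
import numpy as np

# Toy FAST-style superposition: dose at a point of interest is the sum,
# over aperture-plane pixels, of precomputed differential dose, taking
# the fully open data where the aperture is open and the fully shielded
# data elsewhere. All values are illustrative.
rng = np.random.default_rng(0)
n = 8                                        # aperture plane, n x n pixels
d_open = rng.uniform(0.5, 1.0, (n, n))       # precalculated, fully open
d_shielded = rng.uniform(0.0, 0.05, (n, n))  # precalculated, fully shielded

aperture_open = np.zeros((n, n), dtype=bool)
aperture_open[2:6, 2:6] = True               # a square insert opening

dose = np.where(aperture_open, d_open, d_shielded).sum()
```

Because the expensive Monte Carlo work all sits in `d_open` and `d_shielded`, evaluating a new insert shape is just a new mask and a sum, which is why the per-insert calculation completes in under a second.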
Prager, Jens; Najm, Habib N.; Sargsyan, Khachik; ...
2013-02-23
We study correlations among uncertain Arrhenius rate parameters in a chemical model for hydrocarbon fuel-air combustion. We consider correlations induced by the use of rate rules for modeling reaction rate constants, as well as those resulting from fitting rate expressions to empirical measurements, arriving at a joint probability density for all Arrhenius parameters. We focus on homogeneous ignition in a fuel-air mixture at constant pressure. We also outline a general methodology for this analysis using polynomial chaos and Bayesian inference methods. Finally, we examine the uncertainties in both the Arrhenius parameters and in predicted ignition time, outlining the role of correlations, and considering both accuracy and computational efficiency.
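A minimal sketch of why such correlations matter: sample correlated (ln A, Ea) pairs from a joint Gaussian and push them through k(T) = A exp(-Ea/RT). The means, variances and correlation below are invented for illustration, not fitted values from the paper.

```python
import numpy as np

R = 8.314                                  # J/(mol K)
T = 1200.0                                 # K
mean = np.array([30.0, 1.8e5])             # [ln A, Ea], illustrative
corr = 0.9                                 # strong lnA-Ea correlation
s_lnA, s_Ea = 0.5, 8.0e3
cov = np.array([[s_lnA ** 2, corr * s_lnA * s_Ea],
                [corr * s_lnA * s_Ea, s_Ea ** 2]])

rng = np.random.default_rng(1)
lnA, Ea = rng.multivariate_normal(mean, cov, size=20000).T
k = np.exp(lnA - Ea / (R * T))             # sampled rate constants

# Correlation narrows the spread of ln k, because shifts in ln A and
# Ea partly cancel; compare with the hypothetical independent case.
spread_corr = np.log(k).std()
spread_indep = np.sqrt(s_lnA ** 2 + (s_Ea / (R * T)) ** 2)
```

Ignoring the joint density and treating the parameters as independent would overstate the rate-constant uncertainty, which then propagates into an overstated ignition-time uncertainty.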
Welding at the Kennedy Space Center.
NASA Technical Reports Server (NTRS)
Clautice, W. E.
1973-01-01
Brief description of the nature of the mechanical equipment at a space launch complex from a welding viewpoint, including an identification of the major welding applications used in the construction of this complex. The role played by welding in the ground support equipment is noted, including the welded structures and systems required in the vehicle assembly building, the mobile launchers, transporters, mobile service structure, launch pad and launch site, the propellants system, the pneumatics system, and the environmental control system. The welding processes used at the Kennedy Space Center are reviewed, and a particularly detailed account is given of the design and fabrication of the liquid hydrogen and liquid oxygen storage spheres and piping. Finally, the various methods of testing and inspecting the storage spheres are cited.
Sutherland, Chris; Royle, Andy
2016-01-01
This chapter provides a non-technical overview of ‘closed population capture–recapture’ models, a class of well-established models that are widely applied in ecology, such as removal sampling, covariate models, and distance sampling. These methods are regularly adopted for studies of reptiles, in order to estimate abundance from counts of marked individuals while accounting for imperfect detection. Thus, the chapter describes some classic closed population models for estimating abundance, with considerations for some recent extensions that provide a spatial context for the estimation of abundance, and therefore density. Finally, the chapter suggests some software for use in data analysis, such as the Windows-based program MARK, and provides an example of estimating abundance and density of reptiles using an artificial cover object survey of Slow Worms (Anguis fragilis).
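The simplest member of this closed-population family, the Chapman-corrected Lincoln-Petersen estimator for two capture occasions, fits in a few lines. The slow-worm counts below are hypothetical, and the chapter's covariate and spatial models are far richer than this sketch.

```python
# Chapman-corrected Lincoln-Petersen estimator of closed-population
# abundance from a two-occasion mark-recapture study.
def chapman_estimate(n1, n2, m2):
    """n1 marked on occasion 1, n2 caught on occasion 2, m2 recaptures."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical survey: 40 slow worms marked under cover objects, then
# 45 caught on a second visit, of which 9 were already marked.
N_hat = chapman_estimate(40, 45, 9)   # estimated abundance
```

The intuition is that the recapture fraction m2/n2 estimates the detected fraction n1/N; the +1/-1 terms reduce the small-sample bias of the raw ratio.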
Estimating abundance: Chapter 27
Royle, J. Andrew
2016-01-01
The mechanisms of labor division from the perspective of individual optimization
NASA Astrophysics Data System (ADS)
Zhu, Lirong; Chen, Jiawei; Di, Zengru; Chen, Liujun; Liu, Yan; Stanley, H. Eugene
2017-12-01
Although the tools of complexity research have been applied to the phenomenon of labor division, its underlying mechanisms are still unclear. Researchers have used evolutionary models to study labor division in terms of global optimization, but focusing on individual optimization is a more realistic, real-world approach. We do this by first developing a multi-agent model that takes into account information-sharing and learning-by-doing and by using simulations to demonstrate the emergence of labor division. We then use a master equation method and find that the computational results are consistent with the results of the simulation. Finally we find that the core underlying mechanisms that cause labor division are learning-by-doing, information cost, and random fluctuation.
Mingguang, Zhang; Juncheng, Jiang
2008-10-30
Overpressure is an important cause of the domino effect in accidents involving chemical process equipment. Damage probability and the relative threshold value are two necessary parameters in QRA of this phenomenon. Some simple models had been proposed based on scarce data or oversimplified assumptions. Hence, more data about damage to chemical process equipment were gathered and analyzed, a quantitative relationship between damage probability and damage degrees of equipment was built, and reliable probit models were developed for specific categories of chemical process equipment. Finally, the improvements over present models were evidenced through comparison with other models in the literature, taking into account parameters such as consistency between models and data and the depth of quantitativeness in QRA.
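A generic probit damage model of the kind discussed here maps the logarithm of overpressure through a linear probit to a normal-CDF probability. The coefficients below are illustrative placeholders, not the fitted values from this paper.

```python
import math

def damage_probability(overpressure_pa, a=-23.8, b=2.92):
    """Probit damage model sketch: Pr = a + b*ln(P), probability = Phi(Pr - 5).

    a, b are illustrative probit coefficients; overpressure in pascals.
    """
    pr = a + b * math.log(overpressure_pa)
    # Standard normal CDF via erf; probit scale conventionally centers at 5.
    return 0.5 * (1.0 + math.erf((pr - 5.0) / math.sqrt(2.0)))
```

With these placeholder coefficients the 50% damage point falls near 19 kPa; in a QRA one would instead fit a, b per equipment category, as the paper does, and read the threshold value off the fitted curve.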
The Breivik case and what psychiatrists can learn from it
Melle, Ingrid
2013-01-01
In the afternoon of July 22, 2011, Norwegian Anders Behring Breivik killed 77 persons, many of them children and youths, in two separate events. On August 24, 2012, he was sentenced to 21 years in prison. Breivik went through two forensic evaluations: the first concluded that he had a psychotic disorder, thus being legally unaccountable, whereas the second concluded that he had a personality disorder, thus being legally accountable. This article first describes Breivik's background and his crimes. This is followed by an overview of the two forensic evaluations, their methods, contents and disagreements, and how these issues were handled by the court in the verdict. Finally, the article focuses on some lessons psychiatrists can take from the case. PMID:23471788
The Breivik case and what psychiatrists can learn from it.
Melle, Ingrid
2013-02-01
In the afternoon of July 22, 2011, Norwegian Anders Behring Breivik killed 77 persons, many of them children and youths, in two separate events. On August 24, 2012, he was sentenced to 21 years in prison. Breivik went through two forensic evaluations: the first concluded that he had a psychotic disorder, thus being legally unaccountable, whereas the second concluded that he had a personality disorder, thus being legally accountable. This article first describes Breivik's background and his crimes. This is followed by an overview of the two forensic evaluations, their methods, contents and disagreements, and how these issues were handled by the court in the verdict. Finally, the article focuses on some lessons psychiatrists can take from the case. Copyright © 2013 World Psychiatric Association.
Modelling and temporal performances evaluation of networked control systems using (max, +) algebra
NASA Astrophysics Data System (ADS)
Ammour, R.; Amari, S.
2015-01-01
In this paper, we address the problem of temporal performance evaluation of producer/consumer networked control systems. The aim is to develop a formal method for evaluating the response time of this type of control system. Our approach consists of modelling, using Petri net classes, the behaviour of the whole architecture, including the switches that support the multicast communications used by this protocol. The (max, +) algebra formalism is then exploited to obtain analytical formulas for the response time and its maximal and minimal bounds. The main novelty is that our approach takes into account all delays experienced at the different stages of networked automation systems. Finally, we show how to apply the obtained results through an example of a networked control system.
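A minimal sketch of the (max, +) bookkeeping: with max playing the role of addition and + the role of multiplication, a single matrix product propagates worst-case stage delays through the system. The two-stage delay matrix below is invented for illustration, not taken from the paper's producer/consumer model.

```python
# (max, +) semiring: "addition" is max, "multiplication" is +, and the
# zero element is -infinity (an impossible path).
NEG_INF = float("-inf")

def maxplus_matmul(A, B):
    """Matrix product over the (max, +) semiring."""
    n, m, p = len(A), len(B), len(B[0])
    return [[max(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# Illustrative per-stage delays (ms): stage 1 takes 2 ms on its own;
# stage 2 takes 3 ms but also waits on stage 1's 1 ms hand-off.
A = [[2.0, NEG_INF],
     [1.0, 3.0]]
x = [[0.0],
     [0.0]]
y = maxplus_matmul(A, x)   # worst-case completion time of each stage
```

Iterating this product over the Petri-net-derived matrices is what yields the closed-form response-time bounds the abstract refers to.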
Experimental investigation of three-wave interactions of capillary surface-waves
NASA Astrophysics Data System (ADS)
Berhanu, Michael; Cazaubiel, Annette; Deike, Luc; Jamin, Timothee; Falcon, Eric
2014-11-01
We report experiments studying the non-linear interaction between two crossing wave-trains of gravity-capillary surface waves generated in a closed laboratory tank. Using a capacitive wave gauge and the Diffusing Light Photography method, we detect a third wave of smaller amplitude whose frequency and wavenumber are in agreement with the weakly non-linear triadic resonance interaction mechanism. By performing experiments in stationary and transient regimes and taking into account the viscous dissipation, we estimate directly the growth rate of the resonant mode in comparison with theory. These results confirm, at least qualitatively, and extend earlier experimental results obtained only for a unidirectional wave train. Finally, we discuss the relevance of three-wave interaction mechanisms in recent experiments studying capillary wave turbulence.
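A quick way to see what the triadic resonance conditions f3 = f1 + f2 and k3 = k1 + k2 demand is to check them against the deep-water gravity-capillary dispersion relation. The collinear wavenumbers below are illustrative; the nonzero mismatch they produce shows why resonant triads generally require crossing (non-collinear) wave-trains, as in the experiment.

```python
import math

# Deep-water gravity-capillary dispersion: omega^2 = g*k + (sigma/rho)*k^3,
# with typical clean-water values for g, surface tension and density.
g, sigma, rho = 9.81, 0.072, 1000.0

def freq(k):                       # k in rad/m -> frequency in Hz
    return math.sqrt(g * k + (sigma / rho) * k ** 3) / (2 * math.pi)

# Collinear trial triad: pick k1, k2, force k3 = k1 + k2, then compare
# f1 + f2 with f(k3). A small mismatch would mean a (near-)resonant triad.
k1, k2 = 400.0, 600.0
mismatch = abs(freq(k1) + freq(k2) - freq(k1 + k2))
```

For capillary-dominated waves the frequency grows like k^(3/2), so collinear sums always over- or under-shoot; tilting the two wave-trains against each other shortens |k1 + k2| and lets the triad close, which is the geometry used in the tank.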
JTHERGAS: Thermodynamic Estimation from 2D Graphical Representations of Molecules
Blurock, Edward; Warth, V.; Grandmougin, X.; Bounaceur, R.; Glaude, P.A.; Battin-Leclerc, F.
2013-01-01
JTHERGAS is a versatile calculator (implemented in Java) to estimate thermodynamic information from two-dimensional graphical representations of molecules and radicals involving covalent bonds, based on the Benson additivity method. The versatility of JTHERGAS stems from its inherent philosophy that all the fundamental data used in the calculation should be visible, to see exactly where the final values came from, and modifiable, to account for new data that appear in the literature. The main use of this method is within automatic combustion mechanism generation systems, where fast estimation of a large number and variety of chemical species is needed. The implementation strategy is based on meta-atom definitions and substructure analysis, allowing a highly extensible database without modification of the core algorithms. Several interfaces for the database and the calculations are provided, from command-line terminal use to graphical interfaces and web services. The first-order estimation of thermodynamics is based on summing up the contributions of each heavy-atom bonding description. Second-order corrections due to steric hindrance and ring strain are made. Automatic estimates of contributions due to internal, external and optical symmetries are also made. The thermodynamic data for radicals are calculated by taking the difference due to the loss of a hydrogen radical, taking into account changes in symmetry, spin, rotations, vibrations and steric hindrances. The software is public domain and is based on standard libraries such as CDK and CML. PMID:23761949
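The first-order Benson summation described above can be sketched directly: each heavy-atom group contributes a tabulated increment to the enthalpy of formation. The two group values below are rounded literature-style numbers used for illustration, not JTHERGAS database entries.

```python
# First-order Benson group additivity: sum tabulated contributions of
# each heavy-atom bonding environment. Values in kcal/mol, illustrative.
GROUP_DHF = {
    "C-(C)(H)3": -10.2,    # methyl carbon bonded to one other carbon
    "C-(C)2(H)2": -4.93,   # methylene carbon bonded to two carbons
}

def enthalpy_of_formation(groups):
    """Sum group contributions; `groups` maps group label -> count."""
    return sum(GROUP_DHF[g] * n for g, n in groups.items())

# n-butane decomposes into 2 terminal CH3 groups + 2 internal CH2 groups.
dhf_butane = enthalpy_of_formation({"C-(C)(H)3": 2, "C-(C)2(H)2": 2})
# Roughly -30 kcal/mol, close to the experimental value for n-butane.
```

The second-order corrections the abstract mentions (ring strain, steric hindrance, symmetry terms) would be added on top of this plain sum.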
Carbohydrates and sports practice: a Twitter virtual ethnography
Rodríguez-Martín, Beatriz; Castillo, Carlos Alberto
2017-02-01
Introduction: Although carbohydrates consumption is a key factor to enhance sport performance, intake levels seem questioned by some amateur athletes, leading to develop an irrational aversion to carbohydrate known as “carbophobia”. On the other hand, food is the origin of virtual communities erected as a source of knowledge and a way to exchange information. Despite this, very few studies have analysed the influence of social media in eating behaviours. Objectives: To know the conceptualizations about carbohydrates intake and eating patterns related to carbophobia expressed in amateur athletes’ Twitter accounts. Methods: Qualitative research designed from Hine’s Virtual Ethnography. Virtual immersion was used for data collection in Twitter open accounts in a theoretical sample of tweets from amateur athletes. Discourse analysis of narrative information of tweets was carried out through open, axial and selective coding process and the constant comparison method. Results: Data analysis revealed four main categories that offered a picture of conceptualizations of carbohydrates: carbohydrates as suspects or guilty from slowing down training, carbophobia as a lifestyle, carbophobia as a religion and finally the love/hate relationship with carbohydrates. Conclusions: Low-carbohydrate diet is considered a healthy lifestyle in some amateur athletes. The results of this study show the power of virtual communication tools such as Twitter to support, promote and maintain uncommon and not necessarily healthy eating behaviours. Future studies should focus on the context in which these practices appear.
Understanding Grammars through Diachronic Change
Madariaga, Nerea
2017-01-01
In this paper, I will vindicate the importance of syntactic change for the study of synchronic stages of natural languages, according to the following outline. First, I will analyze the relationship between the diachrony and synchrony of grammars, introducing some basic concepts: the notions of I-language/E-language, the role of Chomsky's (2005) three factors in language change, and some assumptions about language acquisition. I will briefly describe the different approaches to syntactic change adopted in generative accounts, as well as their assumptions and implications (Lightfoot, 1999, 2006; van Gelderen, 2004; Biberauer et al., 2010; Roberts, 2012). Finally, I will illustrate the convenience of introducing the diachronic dimension into the study of at least certain synchronic phenomena with the help of a practical example: variation in object case marking of several verbs in Modern Russian, namely, the verbs denoting avoidance and the verbs slušat'sja “obey” and dožidat'sja “expect,” which show two object case-marking patterns, genitive case in standard varieties and accusative case in colloquial varieties. To do so, I will review previous descriptive and/or functionalist accounts on this or equivalent phenomena (Jakobson, 1984 [1936]; Clancy, 2006; Nesset and Kuznetsova, 2015a,b). Then, I will present a formal—but just synchronic—account, applying Sigurðsson (2011) hypothesis on the expression of morphological case to this phenomenon. Finally, I will show that a formal account including the diachronic dimension is superior (i.e., more explanative) than purely synchronic accounts. PMID:28824474
Snipes, Shedra A.; Cooper, Sharon P.; Shipp, Eva M.
2017-01-01
Objective This paper describes how perceived discrimination shapes the way Latino farmworkers encounter injuries and seek out treatment. Methods After 5 months of ethnographic fieldwork, 89 open-ended, semi-structured interviews were analyzed. NVivo was used to code and qualitatively organize the interviews and field notes. Finally, codes, notes, and co-occurring dynamics were used to iteratively assess the data for major themes. Results The primary source of perceived discrimination was the “boss” or farm owner. Immigrant status also significantly influenced how farmworkers perceived the discrimination. Specifically, the ability to speak English and length of stay in the United States were related to stronger perceptions of discrimination. Finally, farm owners compelled their Latino employees to work through their injuries without treatment. Conclusions This ethnographic account brings attention to how discrimination and lack of worksite protections are implicated in farmworkers' injury experiences, and suggests the need for policies that better safeguard vulnerable workers. PMID:27749157
McCrindle, Brian W.; Zak, Victor; Sleeper, Lynn A.; Paridon, Stephen M.; Colan, Steven D.; Geva, Tal; Mahony, Lynn; Li, Jennifer S.; Breitbart, Roger E.; Margossian, Renee; Williams, Richard V.; Gersony, Welton M.; Atz, Andrew M.
2009-01-01
Background Patients after Fontan are at risk for suboptimal functional health status, and associations with laboratory measures are important for planning interventions and outcome measures for clinical trials. Methods and Results Parents completed the generic Child Health Questionnaire (CHQ) for 511 Fontan Cross-Sectional Study patients aged 6–18 years (61% male). Associations of CHQ Physical and Psychosocial Functioning Summary Scores (FSS) with standardized measurements from prospective exercise testing, echocardiography, magnetic resonance imaging (MRI), and measurement of brain natriuretic peptide (BNP) were determined by regression analyses. For exercise variables for maximal effort patients only, the final model showed higher Physical FSS was associated only with higher maximum work rate, accounting for 9% of variation in Physical FSS. For echocardiography, lower Tei index (particularly for patients with extracardiac lateral tunnel connections), lower indexed end-systolic volume, and the absence of atrioventricular valve regurgitation for patients having Fontan at age <2 years were associated with higher Physical FSS, accounting for 14% of variation in Physical FSS. For MRI, lower mass to end-diastolic volume ratio, and mid-quartiles of indexed end-systolic volume (non-linear) were associated with higher Physical FSS, accounting for 11% of variation. Lower BNP was significantly but weakly associated with higher Physical FSS (1% of variation). Significant associations for Psychosocial FSS with laboratory measures were fewer and weaker than for Physical FSS. Conclusions In relatively healthy Fontan patients, laboratory measures account for a small proportion of the variation in functional health status and, therefore, may not be optimal surrogate endpoints for trials of therapeutic interventions. PMID:20026781
Wentlandt, Kirsten; Bracaglia, Andrea; Drummond, James; Handren, Lindsay; McCann, Joshua; Clarke, Catherine; Degendorfer, Niki; Chan, Charles K
2015-12-22
The Physician Quality Improvement Initiative (PQII) uses a well-established multi-source feedback program and incorporates an additional facilitated feedback review with the department chief. The purpose of this mixed-methods study was to examine the value of the PQII by eliciting feedback from various stakeholders. All participants and department chiefs (n = 45) were invited to provide feedback on the project implementation and outcomes via survey and/or an interview. The survey consisted of 12 questions focused on the value of the PQII, its influence on practice, and the promotion of quality improvement and accountability. A total of 5 chiefs and 12 physician participants completed semi-structured interviews. Participants found the PQII process, report, and review session helpful, self-affirming or an opportunity for self-reflection, and an opportunity to engage their leaders about their practice. Chiefs indicated the sessions strengthened their understanding of and ability to communicate and engage physicians about their practice, best practices, quality improvement, and accountability. Thirty participants (66.7 %) completed the survey; of the responders, 75.9, 89.7, and 86.7 % found patient, co-worker, and physician colleague feedback valuable, respectively. A total of 67.9 % valued their facilitated review with their chief, and 55.2 % indicated they were contemplating change due to their feedback. Participants believed the PQII promoted quality improvement (27/30, 90.0 %) and accountability (28/30, 93.3 %). The PQII provides an opportunity for physician development, affirmation, and reflection, but also a structure to further departmental quality improvement and best practices, and finally, an opportunity to enhance communication, accountability, and relationships between the organization, department chiefs, and their staff.
NASA Astrophysics Data System (ADS)
Ramaswami, Anu; Chavez, Abel
2013-09-01
Three broad approaches have emerged for energy and greenhouse gas (GHG) accounting for individual cities: (a) purely in-boundary source-based accounting (IB); (b) community-wide infrastructure GHG emissions footprinting (CIF) incorporating life cycle GHGs (in-boundary plus trans-boundary) of key infrastructures providing water, energy, food, shelter, mobility-connectivity, waste management/sanitation and public amenities to support community-wide activities in cities—all resident, visitor, commercial and industrial activities; and (c) consumption-based GHG emissions footprints (CBF) incorporating life cycle GHGs associated with activities of a sub-set of the community—its final consumption sector dominated by resident households. The latter two activity-based accounts are recommended in recent GHG reporting standards, to provide production-dominated and consumption perspectives of cities, respectively. Little is known, however, on how to normalize and report the different GHG numbers that arise for the same city. We propose that CIF and IB, since they incorporate production, are best reported per unit GDP, while CBF is best reported per capita. Analysis of input-output models of 20 US cities shows that GHG_CIF/GDP is well suited to represent differences in urban energy intensity features across cities, while GHG_CBF/capita best represents variation in expenditures across cities. These results advance our understanding of the methods and metrics used to represent the energy and GHG performance of cities.
Exploring the Functioning of Decision Space: A Review of the Available Health Systems Literature
Roman, Tamlyn Eslie; Cleary, Susan; McIntyre, Diane
2017-01-01
Background: The concept of decision space holds appeal as an approach to disaggregating the elements that may influence decision-making in decentralized systems. This narrative review aims to explore the functioning of decision space and the factors that influence decision space. Methods: A narrative review of the literature was conducted with searches of online databases and academic journals including PubMed Central, Emerald, Wiley, Science Direct, JSTOR, and Sage. The articles were included in the review based on the criteria that they provided insight into the functioning of decision space either through the explicit application of or reference to decision space, or implicitly through discussion of decision-making related to organizational capacity or accountability mechanisms. Results: The articles included in the review encompass literature related to decentralisation, management and decision space. The majority of the studies utilise qualitative methodologies to assess accountability mechanisms, organisational capacities such as finance, human resources and management, and the extent of decision space. Of the 138 articles retrieved, 76 articles were included in the final review. Conclusion: The literature supports Bossert’s conceptualization of decision space as being related to organizational capacities and accountability mechanisms. These functions influence the decision space available within decentralized systems. The exact relationship between decision space and financial and human resource capacities needs to be explored in greater detail to determine the potential influence on system functioning. PMID:28812832
26 CFR 1.164-1 - Deduction for taxes.
Code of Federal Regulations, 2010 CFR
2010-04-01
... the taxable year within which paid or accrued, according to the method of accounting used in computing... thereto, during the taxable year even though the taxpayer uses the accrual method of accounting for other... for the taxable year in which paid or accrued, according to the method of accounting used in computing...
7 CFR 1942.128 - Borrower accounting methods, management reports and audits.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 13 2010-01-01 2009-01-01 true Borrower accounting methods, management reports and... Rescue and Other Small Community Facilities Projects § 1942.128 Borrower accounting methods, management... under Public Law 103-354 1942-53, “Cash Flow Report,” instead of page one of schedule one and schedule...
NASA Astrophysics Data System (ADS)
O, Sungmin; Foelsche, Ulrich; Kirchengast, Gottfried; Fuchsberger, Juergen; Tan, Jackson; Petersen, Walter A.
2017-12-01
The Global Precipitation Measurement (GPM) Integrated Multi-satellite Retrievals for GPM (IMERG) products provide quasi-global (60° N-60° S) precipitation estimates, beginning March 2014, from the combined use of passive microwave (PMW) and infrared (IR) satellites comprising the GPM constellation. The IMERG products are available in the form of near-real-time data, i.e., IMERG Early and Late, and in the form of post-real-time research data, i.e., IMERG Final, after monthly rain gauge analysis is received and taken into account. In this study, IMERG version 3 Early, Late, and Final (IMERG-E, IMERG-L, and IMERG-F) half-hourly rainfall estimates are compared with gauge-based gridded rainfall data from the WegenerNet Feldbach region (WEGN) high-density climate station network in southeastern Austria. The comparison is conducted over two IMERG 0.1° × 0.1° grid cells, entirely covered by 40 and 39 WEGN stations each, using data from the extended summer season (April-October) for the first two years of the GPM mission. The data are divided into two rainfall intensity ranges (low and high) and two seasons (warm and hot), and we evaluate the performance of IMERG using both statistical and graphical methods. Results show that IMERG-F rainfall estimates are in the best overall agreement with the WEGN data, followed by IMERG-L and IMERG-E estimates, particularly for the hot season. We also illustrate, through rainfall event cases, how insufficient PMW sources and errors in motion vectors can lead to wide discrepancies in the IMERG estimates. Finally, by applying the method of Villarini and Krajewski (2007), we find that IMERG-F half-hourly rainfall estimates can be regarded as a 25 min gauge accumulation, with an offset of +40 min relative to its nominal time.
78 FR 23634 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-19
... approval of continuing professional education programs and the renewal of the enrollment status for those... Accounting Under Section 448(d)(5). Abstract: Final regulations provide four safe harbor nonaccrual...
Self-defense: Deflecting Deflationary and Eliminativist Critiques of the Sense of Ownership.
Gallagher, Shaun
2017-01-01
I defend a phenomenological account of the sense of ownership as part of a minimal sense of self from those critics who propose either a deflationary or eliminativist critique. Specifically, I block the deflationary critique by showing that in fact the phenomenological account is itself a deflationary account insofar as it takes the sense of ownership to be implicit or intrinsic to experience and bodily action. I address the eliminativist view by considering empirical evidence that supports the concept of pre-reflective self-awareness, which underpins the sense of ownership. Finally, I respond to claims that phenomenology does not offer a positive account of the sense of ownership by showing the role it plays in an enactivist (action-oriented) view of embodied cognition.
Choice by value encoding and value construction: processes of loss aversion.
Willemsen, Martijn C; Böckenholt, Ulf; Johnson, Eric J
2011-08-01
Loss aversion and reference dependence are two keystones of behavioral theories of choice, but little is known about their underlying cognitive processes. We suggest an additional account for loss aversion that supplements the current account of the value encoding of attributes as gains or losses relative to a reference point, introducing a value construction account. Value construction suggests that loss aversion results from biased evaluations during information search and comparison processes. We develop hypotheses that identify the influence of both accounts and examine process-tracing data for evidence. Our data suggest that loss aversion results from the initial direct encoding of losses, which leads to a subsequent process of directional comparisons that distorts attribute valuations and the final choice.
Kant on mental disorder. Part 2: philosophical implications of Kant's account.
Frierson, Patrick
2009-09-01
This paper considers various philosophical problems arising from Kant's account of mental disorder. Starting with the reasons why Kant considered his theory of mental disorder important, I then turn to the implications of this theory of Kant's metaphysics, epistemology and ethics. Given Kant's account of insanity as 'a totally different standpoint... from which one sees all objects differently' (7: 216), the Critique of Pure Reason should be read as offering a more social epistemology than typically recognized. Also, mental disorders that seem to undermine human freedom and rationality raise problems for Kant's moral philosophy that his pragmatic anthropology helps to mitigate. Finally, I propose some implications of Kant's account of mental disorder for contemporary work on mental illness.
Final-state interactions in semi-inclusive deep inelastic scattering off the Deuteron
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cosyn, Wim; Sargsian, Misak
2011-07-01
Semi-inclusive deep inelastic scattering off the Deuteron with production of a slow nucleon in recoil kinematics is studied in the virtual nucleon approximation, in which the final state interaction (FSI) is calculated within the general eikonal approximation. The cross section is derived in a factorized approach, with a factor describing the virtual photon interaction with the off-shell nucleon and a distorted spectral function accounting for the final-state interactions. One of the main goals of the study is to understand how well the general features of diffractive high-energy soft rescattering account for the observed features of FSI in deep inelastic scattering (DIS). Comparison with the Jefferson Lab data shows good agreement in the covered range of kinematics. Most importantly, our calculation correctly reproduces the rise of the FSI in the forward direction of the slow nucleon production angle. By fitting our calculation to the data we extracted the W and Q² dependences of the total cross section and slope factor of the interaction of the DIS products, X, off the spectator nucleon. This analysis shows the XN scattering cross section rising with W and decreasing with increasing Q². Finally, our analysis points to a largely suppressed off-shell part of the rescattering amplitude.
Nearest neighbor-density-based clustering methods for large hyperspectral images
NASA Astrophysics Data System (ADS)
Cariou, Claude; Chehdi, Kacem
2017-10-01
We address the problem of hyperspectral image (HSI) pixel partitioning using nearest neighbor density-based (NN-DB) clustering methods. NN-DB methods are able to cluster objects without specifying the number of clusters to be found. Within the NN-DB approach, we focus on deterministic methods, e.g., ModeSeek, knnClust, and GWENN (standing for Graph WatershEd using Nearest Neighbors). These methods only require the availability of a k-nearest neighbor (kNN) graph based on a given distance metric. Recently, a new DB clustering method, called Density Peak Clustering (DPC), has received much attention, and kNN versions of it have quickly followed and shown their efficiency. However, NN-DB methods still suffer from the difficulty of obtaining the kNN graph due to the quadratic complexity with respect to the number of pixels. This is why GWENN was embedded into a multiresolution (MR) scheme to bypass the computation of the full kNN graph over the image pixels. In this communication, we propose to extend the MR-GWENN scheme on three aspects. Firstly, similarly to knnClust, the original labeling rule of GWENN is modified to account for local density values, in addition to the labels of previously processed objects. Secondly, we set up a modified NN search procedure within the MR scheme, in order to stabilize the number of clusters found from the coarsest to the finest spatial resolution. Finally, we show that these extensions can be easily adapted to the three other NN-DB methods (ModeSeek, knnClust, knnDPC) for pixel clustering in large HSIs. Experiments are conducted to compare the four NN-DB methods for pixel clustering in HSIs. We show that NN-DB methods can outperform a classical clustering method such as fuzzy c-means (FCM) in terms of classification accuracy, relevance of found clusters, and clustering speed.
Finally, we demonstrate the feasibility and evaluate the performances of NN-DB methods on a very large image acquired by our AISA Eagle hyperspectral imaging sensor.
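The NN-DB idea above can be illustrated with a minimal mode-seeking clusterer: density is taken as the inverse of the k-th neighbor distance, each point links to the densest point in its neighborhood, and labels propagate along links to the modes. This is an illustrative NumPy sketch of the general principle, not the authors' GWENN or MR scheme; the full quadratic distance matrix it builds is exactly the cost their multiresolution approach avoids.

```python
import numpy as np

def knn_mode_seek(X, k=5):
    """Toy NN-density clusterer: link each point to the densest point among
    itself and its k nearest neighbors, then follow links to the modes."""
    n = len(X)
    # Full pairwise distance matrix: O(n^2), the cost the MR scheme bypasses.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    knn = np.argsort(d, axis=1)[:, 1:k + 1]                 # k nearest neighbors
    density = 1.0 / (d[np.arange(n), knn[:, -1]] + 1e-12)   # inverse kNN radius
    parent = np.empty(n, dtype=int)
    for i in range(n):
        cand = np.append(knn[i], i)        # neighbors plus the point itself
        parent[i] = cand[np.argmax(density[cand])]
    labels = parent.copy()                 # follow parent links to a fixed point
    while not np.array_equal(parent[labels], labels):
        labels = parent[labels]
    return labels

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(5, 0.3, (30, 2))])
labels = knn_mode_seek(X)   # points in each blob share modes within that blob
```

Because each link points to a strictly denser point, the label-propagation loop always terminates at local density maxima, without the number of clusters being specified in advance.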
A multiscale model for predicting the viscoelastic properties of asphalt concrete
NASA Astrophysics Data System (ADS)
Garcia Cucalon, Lorena; Rahmani, Eisa; Little, Dallas N.; Allen, David H.
2016-08-01
It is well known that the accurate prediction of long term performance of asphalt concrete pavement requires modeling to account for viscoelasticity within the mastic. However, accounting for viscoelasticity can be costly when the material properties are measured at the scale of asphalt concrete. This is due to the fact that the material testing protocols must be performed recursively for each mixture considered for use in the final design.
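The mastic viscoelasticity referred to above is conventionally represented by a Prony series for the relaxation modulus. The sketch below shows that standard form with placeholder coefficients (the function name, moduli, and relaxation times are illustrative, not values from the paper).

```python
import math

def prony_relaxation_modulus(t, E_inf, terms):
    """Prony-series relaxation modulus E(t) = E_inf + sum_i E_i*exp(-t/tau_i),
    the standard representation of linear viscoelastic relaxation."""
    return E_inf + sum(E_i * math.exp(-t / tau_i) for E_i, tau_i in terms)

# Illustrative two-term fit (MPa, seconds): stiff at short loading times,
# relaxing toward the long-time equilibrium modulus E_inf.
terms = [(2000.0, 0.1), (500.0, 10.0)]
E_short = prony_relaxation_modulus(0.01, 50.0, terms)
E_long = prony_relaxation_modulus(100.0, 50.0, terms)
```

Fitting the pairs (E_i, tau_i) to test data at each mixture scale is the recursive measurement burden the multiscale model aims to reduce.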
Irondequoit Creek Watershed New York, Final Feasibility Report and Environmental Impact Statement.
1982-03-01
National Flood Insurance Program 58 8 System of Accounts 95 9 Summary of Benefits and Costs 96 10 Summary of Average Annual Benefits - Selected Plan 112...material, velocity distribution, vegetation, soil type, topography, and especially rainfall regime, where a few intense storms can account for severe...Alternative B is described later in this report. Flood Insurance - Flood insurance provides some financial protection to vic- tims of flood related
ERIC Educational Resources Information Center
Pecorella, Patricia A.; Bowers, David G.
Analyses preparatory to construction of a suitable file for generating a system of future performance trend indicators are described. Such a system falls into the category of a current value approach to human resources accounting. It requires that there be a substantial body of data which: (1) uses the work group or unit, not the individual, as…
Zhou, Hanzhi; Elliott, Michael R; Raghunathan, Trivellore E
2016-06-01
Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in "Delta-V," a key crash severity measure.
Zhou, Hanzhi; Elliott, Michael R.; Raghunathan, Trivellore E.
2017-01-01
Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in “Delta-V,” a key crash severity measure. PMID:29226161
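The core move of a weighted Bayesian-bootstrap-style draw can be caricatured in a few lines: draw population shares for the sampled units from a Dirichlet whose concentration reflects the case weights, then resample a synthetic population with those shares. This is an illustrative simplification; the function name and the Dirichlet parameterization are assumptions, not the authors' exact finite-population procedure.

```python
import numpy as np

def synthetic_population(y, w, N, rng):
    """One weighted Bayesian-bootstrap-style draw of a size-N synthetic
    population from sampled values y with case weights w."""
    shares = rng.dirichlet(w)            # weight-informed population shares
    return rng.choice(y, size=N, replace=True, p=shares)

rng = np.random.default_rng(42)
y = np.array([1.0, 2.0, 5.0])
w = np.array([100.0, 100.0, 800.0])      # unit 3 represents 80% of the population
pop = synthetic_population(y, w, N=10_000, rng=rng)
```

The synthetic population can then be treated as a simple random sample at the imputation stage, which is the point of the approach: the design information lives in the generation step, not in the imputation model.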
Code of Federal Regulations, 2010 CFR
2010-04-01
... the taxpayer is changed to a method proper under the accrual method of accounting, then the taxpayer may elect to have such change treated as not a change in method of accounting to which the provisions... recomputed under a proper method of accounting for dealer reserve income for each taxable year to which the...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devaraj, Arun; Prabhakaran, Ramprashad; Joshi, Vineet V.
2016-04-12
The purpose of this document is to provide a theoretical framework for (1) estimating uranium carbide (UC) volume fraction in a final alloy of uranium with 10 weight percent molybdenum (U-10Mo) as a function of final alloy carbon concentration, and (2) estimating effective 235U enrichment in the U-10Mo matrix after accounting for loss of 235U in forming UC. This report will also serve as a theoretical baseline for the effective density of as-cast low-enriched U-10Mo alloy. Therefore, this report will serve as the baseline for quality control of final alloy carbon content.
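A rough back-of-the-envelope version of the first estimate assumes all carbon is bound as stoichiometric UC, converts the carbon mass fraction to a UC mass fraction via molar masses, and then to a volume fraction via densities. The density values below are illustrative handbook-style numbers, not those derived in the report.

```python
M_U, M_C = 238.03, 12.011            # molar masses, g/mol
RHO_UC, RHO_ALLOY = 13.6, 17.1       # densities, g/cm^3 (illustrative values)

def uc_volume_fraction(w_carbon):
    """Estimate UC volume fraction from the alloy carbon mass fraction,
    assuming all carbon forms stoichiometric UC (a simplifying assumption)."""
    w_uc = w_carbon * (M_U + M_C) / M_C                  # UC mass fraction
    v_uc = w_uc / RHO_UC                                 # volume per gram, UC
    v_matrix = (1.0 - w_uc) / RHO_ALLOY                  # volume per gram, matrix
    return v_uc / (v_uc + v_matrix)

vf = uc_volume_fraction(0.001)       # 1000 ppm carbon by mass -> a few vol% UC
```

The same bookkeeping extends to the second estimate: subtracting the 235U bound in UC from the matrix before computing effective enrichment.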
Code of Federal Regulations, 2011 CFR
2011-01-01
... liability account balances at a failed insured depository institution. 360.8 Section 360.8 Banks and Banking... RECEIVERSHIP RULES § 360.8 Method for determining deposit and other liability account balances at a failed... FDIC will use to determine deposit and other liability account balances for insurance coverage and...
Code of Federal Regulations, 2010 CFR
2010-01-01
... liability account balances at a failed insured depository institution. 360.8 Section 360.8 Banks and Banking... RECEIVERSHIP RULES § 360.8 Method for determining deposit and other liability account balances at a failed... FDIC will use to determine deposit and other liability account balances for insurance coverage and...
26 CFR 1.451-5 - Advance payments for goods and long-term contracts.
Code of Federal Regulations, 2010 CFR
2010-04-01
... accounting for tax purposes if such method results in including advance payments in gross receipts no later... the case of a taxpayer accounting for advance payments for tax purposes pursuant to a long-term contract method of accounting under § 1.460-4, or of a taxpayer accounting for advance payments with...
ERIC Educational Resources Information Center
Abeysekera, Indra
2015-01-01
The role of work-integrated learning in student preferences of instructional methods is largely unexplored across the accounting curriculum. This study conducted six experiments to explore student preferences of instructional methods for learning, in six courses of the accounting curriculum that differed in algorithmic rigor, in the context of a…
Code of Federal Regulations, 2010 CFR
2010-01-01
... methods and principles of accounting prescribed by the state regulatory body having jurisdiction over the... telecommunications companies (47 CFR part 32), as those methods and principles of accounting are supplemented from... instruments by prescribing accounting principles, methodologies, and procedures applicable to all...
NASA Astrophysics Data System (ADS)
Botos, J.; Murail, N.; Heidemeyer, P.; Kretschmer, K.; Ulmer, B.; Zentgraf, T.; Bastian, M.; Hochrein, T.
2014-05-01
The typical offline color measurement on injection molded or pressed specimens is a very expensive and time-consuming process. In order to optimize productivity and quality, it is desirable to measure the color already during production. Therefore, several systems have been developed to monitor the color, e.g., on melts, strands, pellets, the extrudate, or the injection molded part, already during the process. Different kinds of inline, online, and atline methods with their respective advantages and disadvantages will be compared. The criteria include the testing time, which ranges from real time to some minutes, the required calibration procedure, the spectral resolution, and the final measuring precision. The latter ranges between 0.05 and 0.5 in the CIE L*a*b* system, depending on the particular measurement system. Due to the high temperatures in typical plastics processes, thermochromism of polymers and dyes has to be taken into account. This effect can influence the color value by on the order of 10% and is barely understood so far. Different suitable methods to compensate for thermochromic effects during compounding or injection molding by using calibration curves or artificial neural networks are presented. Furthermore, it is even possible to control the color during extrusion and compounding almost in real time. The goal is specifically developed software for adjusting the color recipe automatically, with the final objective of closed-loop control.
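A calibration-curve correction of the kind mentioned can be as simple as a per-channel linear shift of the melt-temperature reading back to the room-temperature reference. The function name and the slope values below are made-up placeholders; in practice the slopes would be fitted per material from offline measurements at several temperatures.

```python
def compensate_thermochromism(lab_hot, t_hot, t_ref, slopes):
    """Correct a CIE L*a*b* reading taken at melt temperature back to the
    reference temperature with per-channel linear calibration slopes."""
    return tuple(c - s * (t_hot - t_ref) for c, s in zip(lab_hot, slopes))

# Illustrative: a reading at 220 degC corrected to the 23 degC lab reference.
lab_ref = compensate_thermochromism((62.0, 18.5, -4.0), 220.0, 23.0,
                                    (0.02, -0.01, 0.005))
```

An artificial neural network replaces the three fixed slopes with a learned, possibly nonlinear, mapping from (L*, a*, b*, T) to the reference-temperature color.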
48 CFR 1652.232-70 - Payments-community-rated contracts.
Code of Federal Regulations, 2010 CFR
2010-10-01
... for error or fraud, the subscription charges received for the plan by the Employees Health Benefits... Administrative Reserve. After the final accounting, OPM will place any surplus demonstration project premiums in...
Application of the backward extrapolation method to pulsed neutron sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talamo, Alberto; Gohar, Yousry
Particle detectors operated in pulse mode are subject to the dead-time effect. When the average of the detector counts is constant over time, correcting for the dead-time effect is simple and can be accomplished by analytical formulas. However, when the average of the detector counts changes over time, it is more difficult to take the dead-time effect into account. When a subcritical nuclear assembly is driven by a pulsed neutron source, simple analytical formulas cannot be applied to the measured detector counts to correct for the dead-time effect because of the sharp change of the detector counts over time. This work addresses this issue by using the backward extrapolation method. The latter can be applied not only to a continuous (e.g., californium) external neutron source but also to a pulsed external neutron source (e.g., produced by a particle accelerator) driving a subcritical nuclear assembly. Finally, the backward extrapolation method allows one to obtain from the measured detector counts both the dead-time value and the real detector counts.
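For the constant-rate case the abstract refers to, the standard non-paralyzable correction is a one-liner; it is precisely this formula that breaks down when the count rate changes sharply between pulses, which is what motivates backward extrapolation. The sketch below shows the simple analytical formula, not the paper's method.

```python
def deadtime_correct(measured_rate, tau):
    """Non-paralyzable dead-time correction for a constant count rate:
    n_true = n_measured / (1 - n_measured * tau)."""
    if measured_rate * tau >= 1.0:
        raise ValueError("measured rate inconsistent with dead time")
    return measured_rate / (1.0 - measured_rate * tau)

# Example: 9.5e4 counts/s measured with a 1 microsecond dead time.
true_rate = deadtime_correct(9.5e4, 1.0e-6)
```

The correction grows with the product of rate and dead time, so in the transient after each accelerator pulse no single value of the measured rate characterizes the loss, and a time-resolved treatment is needed.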
solveME: fast and reliable solution of nonlinear ME models.
Yang, Laurence; Ma, Ding; Ebrahim, Ali; Lloyd, Colton J; Saunders, Michael A; Palsson, Bernhard O
2016-09-22
Genome-scale models of metabolism and macromolecular expression (ME) significantly expand the scope and predictive capabilities of constraint-based modeling. ME models present considerable computational challenges: they are much (>30 times) larger than corresponding metabolic reconstructions (M models), are multiscale, and growth maximization is a nonlinear programming (NLP) problem, mainly due to macromolecule dilution constraints. Here, we address these computational challenges. We develop a fast and numerically reliable solution method for growth maximization in ME models using a quad-precision NLP solver (Quad MINOS). Our method was up to 45 % faster than binary search for six significant digits in growth rate. We also develop a fast, quad-precision flux variability analysis that is accelerated (up to 60× speedup) via solver warm-starts. Finally, we employ the tools developed to investigate growth-coupled succinate overproduction, accounting for proteome constraints. Just as genome-scale metabolic reconstructions have become an invaluable tool for computational and systems biologists, we anticipate that these fast and numerically reliable ME solution methods will accelerate the wide-spread adoption of ME models for researchers in these fields.
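The binary-search baseline the solver is benchmarked against amounts to bisecting on the growth rate with a feasibility oracle. In an ME model each oracle call would be an expensive feasibility solve; here a stand-in lambda with a known optimum is used purely for illustration.

```python
def max_growth_bisect(feasible, lo=0.0, hi=2.0, tol=1e-6):
    """Largest mu in [lo, hi] with feasible(mu) True, assuming feasibility
    is monotone: feasible below the optimal growth rate, infeasible above."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid          # mid is achievable; search higher
        else:
            hi = mid          # mid is infeasible; search lower
    return lo

# Stand-in oracle with a known optimum at mu* = 0.7371 (illustrative value).
mu_star = max_growth_bisect(lambda mu: mu <= 0.7371)
```

Each bisection step gains one binary digit, so six significant decimal digits cost on the order of twenty feasibility solves; solving the growth-maximization NLP directly avoids that repeated expense.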
An Improved BLE Indoor Localization with Kalman-Based Fusion: An Experimental Study
Röbesaat, Jenny; Zhang, Peilin; Abdelaal, Mohamed; Theel, Oliver
2017-01-01
Indoor positioning has attracted considerable attention in recent years. A number of efforts have been exerted to achieve high positioning accuracy. However, no existing technology has proven its efficacy in all situations. In this paper, we propose a novel positioning method based on fusing trilateration and dead reckoning. We employ Kalman filtering as the position fusion algorithm. Moreover, we adopt an Android device with Bluetooth Low Energy modules as the communication platform to avoid excessive energy consumption and to improve the stability of the received signal strength. To further improve the positioning accuracy, we take environmental context information into account while generating the position fixes. Extensive experiments in a testbed are conducted to examine the performance of three approaches: trilateration, dead reckoning, and the fusion method. Additionally, the influence of knowledge of the environmental context is also examined. Finally, our proposed fusion method outperforms both trilateration and dead reckoning in terms of accuracy: experimental results show that the Kalman-based fusion, for our settings, achieves a positioning accuracy of less than one meter. PMID:28445421
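The predict/update structure described (dead reckoning drives the prediction, the trilateration fix enters as the measurement) can be sketched in a few lines of NumPy. The identity measurement model and the noise values are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

def kalman_fuse(x_prev, P_prev, dr_step, Q, z_trilat, R):
    """One predict/update cycle: dead reckoning supplies the motion used in
    the prediction; the trilateration fix is the measurement (H = I)."""
    x_pred = x_prev + dr_step            # predict with the dead-reckoned step
    P_pred = P_prev + Q                  # process noise inflates uncertainty
    K = P_pred @ np.linalg.inv(P_pred + R)            # Kalman gain
    x_new = x_pred + K @ (z_trilat - x_pred)          # blend in the BLE fix
    P_new = (np.eye(len(x_prev)) - K) @ P_pred
    return x_new, P_new

# 2-D example: one noisy dead-reckoned step corrected by a trilateration fix.
x, P = np.zeros(2), np.eye(2)
x, P = kalman_fuse(x, P, dr_step=np.array([0.9, 0.1]), Q=np.eye(2) * 0.05,
                   z_trilat=np.array([1.1, 0.0]), R=np.eye(2) * 0.25)
```

The fused estimate lands between the dead-reckoned prediction and the trilateration fix, weighted by their respective uncertainties, and the posterior covariance shrinks with every fix.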
Application of the backward extrapolation method to pulsed neutron sources
Talamo, Alberto; Gohar, Yousry
2017-09-23
Particle detectors operated in pulse mode are subject to the dead-time effect. When the average of the detector counts is constant over time, correcting for the dead-time effect is simple and can be accomplished by analytical formulas. However, when the average of the detector counts changes over time, it is more difficult to take the dead-time effect into account. When a subcritical nuclear assembly is driven by a pulsed neutron source, simple analytical formulas cannot be applied to the measured detector counts to correct for the dead-time effect because of the sharp change of the detector counts over time. This work addresses this issue by using the backward extrapolation method. The latter can be applied not only to a continuous (e.g., californium) external neutron source but also to a pulsed external neutron source (e.g., produced by a particle accelerator) driving a subcritical nuclear assembly. Finally, the backward extrapolation method allows one to obtain from the measured detector counts both the dead-time value and the real detector counts.
Uhrich, Mark A.; Kolasinac, Jasna; Booth, Pamela L.; Fountain, Robert L.; Spicer, Kurt R.; Mosbrucker, Adam R.
2014-01-01
Researchers at the U.S. Geological Survey, Cascades Volcano Observatory, investigated alternative methods for the traditional sample-based sediment record procedure in determining suspended-sediment concentration (SSC) and discharge. One such sediment-surrogate technique was developed using turbidity and discharge to estimate SSC for two gaging stations in the Toutle River Basin near Mount St. Helens, Washington. To provide context for the study, methods for collecting sediment data and monitoring turbidity are discussed. Statistical methods used include the development of ordinary least squares regression models for each gaging station. Issues of time-related autocorrelation are also evaluated. Addition of lagged explanatory variables was used to account for autocorrelation in the turbidity, discharge, and SSC data. Final regression model equations and plots are presented for the two gaging stations. The regression models support near-real-time estimates of SSC and improved suspended-sediment discharge records by incorporating continuous instream turbidity. Future use of such models may potentially lower the costs of sediment monitoring by reducing the time it takes to collect and process samples and to derive a sediment-discharge record.
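The lagged-variable regression described can be sketched with an ordinary least squares fit in which a lagged turbidity term absorbs serial correlation. The function name and the synthetic data are illustrative; the actual models, transformations, and coefficients are in the report.

```python
import numpy as np

def fit_ssc_model(turb, q, ssc, lag=1):
    """OLS fit of SSC on turbidity, discharge, and lagged turbidity; the
    lagged term accounts for time-related autocorrelation."""
    y = ssc[lag:]
    X = np.column_stack([np.ones_like(y), turb[lag:], q[lag:], turb[:-lag]])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [intercept, turbidity, discharge, lagged turbidity]

# Synthetic demo with known coefficients (5, 0.8, 0.3, 0.2) plus noise.
rng = np.random.default_rng(1)
turb = rng.uniform(10, 200, 300)
q = rng.uniform(1, 50, 300)
ssc = np.empty(300)
ssc[0] = 0.0
ssc[1:] = 5 + 0.8 * turb[1:] + 0.3 * q[1:] + 0.2 * turb[:-1] + rng.normal(0, 1, 299)
beta = fit_ssc_model(turb, q, ssc)
```

With the fitted coefficients, a continuous turbidity and discharge feed yields near-real-time SSC estimates without waiting for physical samples to be collected and processed.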
NASA Astrophysics Data System (ADS)
Valente, T.; Bartuli, C.; Sebastiani, M.; Loreto, A.
2005-12-01
The experimental measurement of residual stresses originating within thick coatings deposited by thermal spray on solid substrates plays a role of fundamental relevance in the preliminary stages of coating design and process parameters optimization. The hole-drilling method is a versatile and widely used technique for the experimental determination of residual stress in the most superficial layers of a solid body. The consolidated procedure, however, can only be implemented for metallic bulk materials or for homogeneous, linear elastic, and isotropic materials. The main objective of the present investigation was to adapt the experimental method to the measurement of stress fields built up in ceramic coatings/metallic bonding layers structures manufactured by plasma spray deposition. A finite element calculation procedure was implemented to identify the calibration coefficients necessary to take into account the elastic modulus discontinuities that characterize the layered structure through its thickness. Experimental adjustments were then proposed to overcome problems related to the low thermal conductivity of the coatings. The number of calculation steps and experimental drilling steps were finally optimized.
ERIC Educational Resources Information Center
Hosal-Akman, Nazli; Simga-Mugan, Can
2010-01-01
This study explores the effect of teaching methods on the academic performance of students in accounting courses. The study was carried out over two semesters at a well-known university in Turkey in principles of financial accounting and managerial accounting courses. Students enrolled in the courses were assigned to treatment and control groups.…
Dang, Carolyn T; Umphress, Elizabeth E; Mitchell, Marie S
2017-10-01
When providing social accounts (Sitkin & Bies, 1993) for the unethical conduct of subordinates, leaders may use language consistent with cognitive strategies described by Bandura (1991, 1999) in his work on moral disengagement. That is, leader's social accounts may reframe or reconstrue subordinates' unethical conduct such that it appears less reprehensible. We predict observers will respond negatively to leaders when they use moral disengagement language within social accounts and, specifically, observers will ostracize these leaders. In addition, we predict that observer moral disengagement propensity moderates this effect, such that the relationship between leaders' use of moral disengagement language within a social account and ostracism is stronger when observer moral disengagement propensity is lower versus higher. Finally, we predict that the reason why observers ostracize the leader is because observers perceive the leader's social account with moral disengagement language as unethical. Thus, perceived leader social account ethicality is predicted to mediate the interaction effect of leader's use of moral disengagement language within social accounts and observer moral disengagement propensity on ostracism. Results from an experiment and field study support our predictions. Implications for theory and practice are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Levy, David M; Peart, Sandra J
2008-06-01
We wish to deal with investigator bias in a statistical context. We sketch how a textbook solution to the problem of "outliers" which avoids one sort of investigator bias, creates the temptation for another sort. We write down a model of the approbation seeking statistician who is tempted by sympathy for client to violate the disciplinary standards. We give a simple account of one context in which we might expect investigator bias to flourish. Finally, we offer tentative suggestions to deal with the problem of investigator bias which follow from our account. As we have given a very sparse and stylized account of investigator bias, we ask what might be done to overcome this limitation.
2016-06-01
Government Accountability Office United States Government Accountability Office Highlights of GAO-16-569, a report to the Committee on Armed...active duty, officials told us that Military OneSource provides grief counseling, tax assistance (such as Page 9 GAO-16-569 Gold Star...Advocates assistance with filing the deceased servicemember’s final tax return), and assistance with obtaining benefits. There are also organizations
A Cost Simulation Tool for Estimating the Cost of Operating Government Owned and Operated Ships
1994-09-01
Horngren , C.T., Foster, G., Datar, S.M., Cost Accounting : A Management Emphasis, Prentice-Hall, Englewood Cliffs, NJ, 1994 IBM Corporation, A Graphical...4. TITLE AND SUBTITLE A COST SIMULATION TOOL FOR 5. FUNDING NUMBERS ESTIMATING THE COST OF OPERATING GOVERNMENT OWNED AND OPERATED SHIPS 6. AUTHOR( S ...normally does not present a problem to the accounting department. The final category, the cost of operating the government owned and operated ships is
An efficient constraint to account for mistuning effects in the optimal design of engine rotors
NASA Technical Reports Server (NTRS)
Murthy, Durbha V.; Pierre, Christophe; Ottarsson, Gisli
1992-01-01
Blade-to-blade differences in structural properties, unavoidable in practice due to manufacturing tolerances, can have significant influence on the vibratory response of engine rotor blade. Accounting for these differences, also known as mistuning, in design and in optimization procedures is generally not possible. This note presents an easily calculated constraint that can be used in design and optimization procedures to control the sensitivity of final designs to mistuning.
Matsuda, Atsushi; Schermelleh, Lothar; Hirano, Yasuhiro; Haraguchi, Tokuko; Hiraoka, Yasushi
2018-05-15
Correction of chromatic shift is necessary for precise registration of multicolor fluorescence images of biological specimens. New emerging technologies in fluorescence microscopy with increasing spatial resolution and penetration depth have prompted the need for more accurate methods to correct chromatic aberration. However, the amount of chromatic shift of the region of interest in biological samples often deviates from the theoretical prediction because of unknown dispersion in the biological samples. To measure and correct chromatic shift in biological samples, we developed a quadrisection phase correlation approach to computationally calculate translation, rotation, and magnification from reference images. Furthermore, to account for local chromatic shifts, images are split into smaller elements, for which the phase correlation between channels is measured individually and corrected accordingly. We implemented this method in an easy-to-use open-source software package, called Chromagnon, that is able to correct shifts with a 3D accuracy of approximately 15 nm. Applying this software, we quantified the level of uncertainty in chromatic shift correction, depending on the imaging modality used, and for different existing calibration methods, along with the proposed one. Finally, we provide guidelines to choose the optimal chromatic shift registration method for any given situation.
On the evaluation of segmentation editing tools
Heckel, Frank; Moltz, Jan H.; Meine, Hans; Geisler, Benjamin; Kießling, Andreas; D’Anastasi, Melvin; dos Santos, Daniel Pinto; Theruvath, Ashok Joseph; Hahn, Horst K.
2014-01-01
Abstract. Efficient segmentation editing tools are important components in the segmentation process, as no automatic methods exist that always generate sufficient results. Evaluating segmentation editing algorithms is challenging, because their quality depends on the user’s subjective impression. So far, no established methods for an objective, comprehensive evaluation of such tools exist and, particularly, intermediate segmentation results are not taken into account. We discuss the evaluation of editing algorithms in the context of tumor segmentation in computed tomography. We propose a rating scheme to qualitatively measure the accuracy and efficiency of editing tools in user studies. In order to objectively summarize the overall quality, we propose two scores based on the subjective rating and the quantified segmentation quality over time. Finally, a simulation-based evaluation approach is discussed, which allows a more reproducible evaluation without the need for human input. This automated evaluation complements user studies, allowing a more convincing evaluation, particularly during development, where frequent user studies are not possible. The proposed methods have been used to evaluate two dedicated editing algorithms on 131 representative tumor segmentations. We show how the comparison of editing algorithms benefits from the proposed methods. Our results also show the correlation of the suggested quality score with the qualitative ratings. PMID:26158063
Computational methods for diffusion-influenced biochemical reactions.
Dobrzynski, Maciej; Rodríguez, Jordi Vidal; Kaandorp, Jaap A; Blom, Joke G
2007-08-01
We compare stochastic computational methods accounting for space and discrete nature of reactants in biochemical systems. Implementations based on Brownian dynamics (BD) and the reaction-diffusion master equation are applied to a simplified gene expression model and to a signal transduction pathway in Escherichia coli. In the regime where the number of molecules is small and reactions are diffusion-limited predicted fluctuations in the product number vary between the methods, while the average is the same. Computational approaches at the level of the reaction-diffusion master equation compute the same fluctuations as the reference result obtained from the particle-based method if the size of the sub-volumes is comparable to the diameter of reactants. Using numerical simulations of reversible binding of a pair of molecules we argue that the disagreement in predicted fluctuations is due to different modeling of inter-arrival times between reaction events. Simulations for a more complex biological study show that the different approaches lead to different results due to modeling issues. Finally, we present the physical assumptions behind the mesoscopic models for the reaction-diffusion systems. Input files for the simulations and the source code of GMP can be found under the following address: http://www.cwi.nl/projects/sic/bioinformatics2007/
A bayesian analysis for identifying DNA copy number variations using a compound poisson process.
Chen, Jie; Yiğiter, Ayten; Wang, Yu-Ping; Deng, Hong-Wen
2010-01-01
To study chromosomal aberrations that may lead to cancer formation or genetic diseases, the array-based Comparative Genomic Hybridization (aCGH) technique is often used for detecting DNA copy number variants (CNVs). Various methods have been developed for gaining CNVs information based on aCGH data. However, most of these methods make use of the log-intensity ratios in aCGH data without taking advantage of other information such as the DNA probe (e.g., biomarker) positions/distances contained in the data. Motivated by the specific features of aCGH data, we developed a novel method that takes into account the estimation of a change point or locus of the CNV in aCGH data with its associated biomarker position on the chromosome using a compound Poisson process. We used a Bayesian approach to derive the posterior probability for the estimation of the CNV locus. To detect loci of multiple CNVs in the data, a sliding window process combined with our derived Bayesian posterior probability was proposed. To evaluate the performance of the method in the estimation of the CNV locus, we first performed simulation studies. Finally, we applied our approach to real data from aCGH experiments, demonstrating its applicability.
Multiclassifier information fusion methods for microarray pattern recognition
NASA Astrophysics Data System (ADS)
Braun, Jerome J.; Glina, Yan; Judson, Nicholas; Herzig-Marx, Rachel
2004-04-01
This paper addresses automatic recognition of microarray patterns, a capability that could have a major significance for medical diagnostics, enabling development of diagnostic tools for automatic discrimination of specific diseases. The paper presents multiclassifier information fusion methods for microarray pattern recognition. The input space partitioning approach based on fitness measures that constitute an a-priori gauging of classification efficacy for each subspace is investigated. Methods for generation of fitness measures, generation of input subspaces and their use in the multiclassifier fusion architecture are presented. In particular, two-level quantification of fitness that accounts for the quality of each subspace as well as the quality of individual neighborhoods within the subspace is described. Individual-subspace classifiers are Support Vector Machine based. The decision fusion stage fuses the information from mulitple SVMs along with the multi-level fitness information. Final decision fusion stage techniques, including weighted fusion as well as Dempster-Shafer theory based fusion are investigated. It should be noted that while the above methods are discussed in the context of microarray pattern recognition, they are applicable to a broader range of discrimination problems, in particular to problems involving a large number of information sources irreducible to a low-dimensional feature space.