The Sustainable Technology Division has recently completed an implementation of the U.S. EPA's Waste Reduction (WAR) Algorithm that can be directly accessed from a Cape-Open compliant process modeling environment. The WAR Algorithm add-in can be used in AmsterChem's COFE (Cape-Op...
Environmental Optimization Using the WAste Reduction Algorithm (WAR)
Traditionally, chemical process designs were optimized using purely economic measures such as rate of return. EPA scientists developed the WAste Reduction algorithm (WAR) so that environmental impacts of designs could easily be evaluated. The goal of WAR is to reduce environme...
A general theory known as the WAste Reduction (WAR) algorithm has been developed to describe the flow and the generation of potential environmental impact through a chemical process. This theory defines potential environmental impact indexes that characterize the generation and t...
A general theory known as the Waste Reduction (WAR) Algorithm has been developed to describe the flow and the generation of potential environmental impact through a chemical process. The theory defines indexes that characterize the generation and the output of potential environm...
DESIGNING SUSTAINABLE PROCESSES WITH SIMULATION: THE WASTE REDUCTION (WAR) ALGORITHM
The WAR Algorithm, a methodology for determining the potential environmental impact (PEI) of a chemical process, is presented with modifications that account for the PEI of the energy consumed within that process. From this theory, four PEI indexes are used to evaluate the envir...
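The index arithmetic behind such PEI evaluations is simple to sketch. The snippet below computes an output PEI rate for a set of waste streams and a per-product-mass index; the ψ (psi) impact scores, stream flows, and chemical names are illustrative placeholders, not values from the WAR database.

```python
# Sketch of a WAR-style potential environmental impact (PEI) balance.
# The psi scores and stream data below are illustrative placeholders,
# not values from the EPA chemical database.

def stream_pei_rate(mass_flow, composition, psi):
    """PEI leaving in one stream: M * sum_k x_k * psi_k."""
    return mass_flow * sum(x * psi[chem] for chem, x in composition.items())

psi = {"toluene": 0.2, "benzene": 1.4, "water": 0.0}  # hypothetical scores

# Output (non-product) streams: (mass flow in kg/h, mass fractions).
waste_streams = [
    (100.0, {"benzene": 0.05, "water": 0.95}),
    (50.0, {"toluene": 0.10, "water": 0.90}),
]
product_rate = 500.0  # kg/h of product

i_out = sum(stream_pei_rate(m, x, psi) for m, x in waste_streams)  # PEI/h
i_hat_out = i_out / product_rate                                   # PEI/kg product
```

Dividing the total output PEI rate by the product flow rate gives the per-kilogram index used to compare design alternatives on a common basis.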
WAR DSS: A DECISION SUPPORT SYSTEM FOR ENVIRONMENTALLY CONSCIOUS CHEMICAL PROCESS DESIGN
The second generation of the Waste Reduction (WAR) Algorithm is constructed as a decision support system (DSS) in the design of chemical manufacturing facilities. The WAR DSS is a software tool that can help reduce the potential environmental impacts (PEIs) of industrial chemical...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hilaly, A.K.; Sikdar, S.K.
In this study, the authors introduced several modifications to the WAR (waste reduction) algorithm developed earlier. These modifications were made for systematically handling sensitivity analysis and various tasks of waste minimization. A design hierarchy was formulated to promote appropriate waste reduction tasks at designated levels of the hierarchy. A sensitivity coefficient was used to measure the relative impacts of process variables on the pollution index of a process. The use of the WAR algorithm was demonstrated by a fermentation process for making penicillin.
A general theory known as the WAste Reduction (WAR) algorithm has been developed to describe the flow and the generation of potential environmental impact through a chemical process. This theory integrates environmental impact assessment into chemical process design. Potential en...
Chemical process designers using simulation software generate alternative designs for one process. One criterion for evaluating these designs is their potential for adverse environmental impacts due to waste generated, energy consumed, and possibilities for fugitive emissions. Co...
ANALYZING ENVIRONMENTAL IMPACTS WITH THE WAR ALGORITHM: REVIEW AND UPDATE
This presentation will review uses of the WAR algorithm and current developments and possible future directions. The WAR algorithm is a methodology for analyzing potential environmental impacts of 1600+ chemicals used in the chemical processing and other industries. The algorithm...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shapiro, C.S.
1988-02-01
Projections of levels of radioactive fallout from a nuclear war are sensitive to assumptions about the structure of the nuclear stockpiles as well as the assumed scenarios for a nuclear war. Recent arms control proposals would change these parameters. This paper examines the implications of the proposed Intermediate-range Nuclear Forces (INF) treaty and Strategic Arms Reduction Treaty (START) on fallout projections from a major nuclear war. We conclude that the INF reductions are likely to have negligible effects on estimates of global and local fallout, whereas the START reductions could result in reductions in estimates of local fallout that range from significant to dramatic, depending upon the nature of the reduced strategic forces. Should a major war occur, projections of total fatalities from direct effects of blast, thermal radiation, and fallout, and the phenomenon known as nuclear winter, would not be significantly affected by INF and START initiatives as now drafted. 14 refs.
Radiological Effects of Nuclear War.
ERIC Educational Resources Information Center
Shapiro, Charles S.
1988-01-01
Described are the global effects of nuclear war. Discussed are radiation dosages, limited nuclear attacks, strategic arms reductions, and other results reported at the workshop on nuclear war issues in Moscow in March 1988. (CW)
The Waste Reduction Decision Support System (WAR DSS) is a Java-based software product providing comprehensive modeling of potential adverse environmental impacts (PEI) predicted to result from newly designed or redesigned chemical manufacturing processes. The purpose of this so...
Image compression evaluation for digital cinema: the case of Star Wars: Episode II
NASA Astrophysics Data System (ADS)
Schnuelle, David L.
2003-05-01
A program of evaluation of compression algorithms proposed for use in a digital cinema application is described and the results presented in general form. The work was intended to aid in the selection of a compression system to be used for the digital cinema release of Star Wars: Episode II, in May 2002. An additional goal was to provide feedback to the algorithm proponents on what parameters and performance levels the feature film industry is looking for in digital cinema compression. The primary conclusion of the test program is that any of the current digital cinema compression proponents will work for digital cinema distribution to today's theaters.
Sadeh, Avi; Hen-Gal, Shai; Tikotzky, Liat
2008-01-01
The goal was to assess stress reactions in young children during and after war and the effects of a new brief intervention. Two separate studies were conducted. In study I, we assessed war exposure and stress reactions of 74 children (2-7 years of age) in a sheltered camp during the second Israel-Lebanon war (July to August 2006). Their exposure to war experiences and their stress reactions were assessed through parental reports during the last week of the war. In addition to standard care, 35 children received a brief intervention (Huggy-Puppy intervention) aimed at encouraging them to care for a needy Huggy-Puppy doll that was given to them as a gift. The effects of the Huggy-Puppy intervention were assessed in a follow-up interview 3 weeks after the war. Study II assessed the efficacy of group administration of the Huggy-Puppy intervention to 191 young children, compared with 101 control subjects. The effects of the intervention on stress-related symptoms after the war were assessed in telephone interviews with the parents. Study I indicated that, during the war, most children had significant exposure to war-related experiences and had severe stress reactions. The Huggy-Puppy intervention was associated with significant reductions in stress reactions in the postwar assessment. A higher level of attachment and involvement with the doll was associated with better outcomes. The results of study II indicated that group administration of the Huggy-Puppy intervention was associated with significant reductions in stress reactions. These studies suggest that the Huggy-Puppy intervention may offer pediatricians and other child health care professionals a promising, cost-effective intervention for children during stressful times.
Gulf War Illness Inflammation Reduction Trial
2015-10-01
study comparing blood samples from Gulf War veterans with and without multiple symptoms of pain, fatigue, and cognitive dysfunction. The goal of the...pilot study was to identify a potential therapeutic target for the treatment of GWI. Examination of the peripheral blood revealed the biomarker...understood. Therefore, we performed a pilot study comparing blood samples from Gulf War veterans with GWI with blood from veterans who were
Gulf War Illness Inflammation Reduction Trial
2016-10-01
the Kuwaiti Theater of Operations during Operation Desert Shield and Operation Desert Storm (Gulf War). Many veterans of this conflict now suffer...complete blood count with differential, plasma proteomics, platelet function studies, and the measurement of multiple coagulation parameters. The
Decadal reduction of Chinese agriculture after a regional nuclear war
NASA Astrophysics Data System (ADS)
Xia, Lili; Robock, Alan; Mills, Michael; Stenke, Andrea; Helfand, Ira
2015-02-01
A regional nuclear war between India and Pakistan could decrease global surface temperature by 1°C-2°C for 5-10 years and have major impacts on precipitation and solar radiation reaching Earth's surface. Using a crop simulation model forced by three global climate model simulations, we investigate the impacts on agricultural production in China, the largest grain producer in the world. In the first year after the regional nuclear war, a cooler, drier, and darker environment would reduce annual rice production by 30 megatons (Mt) (29%), maize production by 36 Mt (20%), and wheat production by 23 Mt (53%). Under different agricultural management regimes (no irrigation, automatic irrigation, 200 kg/ha nitrogen fertilizer, and a 10-day delayed planting date), simulated national crop production is reduced by 16%-26% for rice, 9%-20% for maize, and 32%-43% for wheat during the 5 years after the nuclear war event. This reduction of food availability would continue, with gradually decreasing amplitude, for more than a decade. Assuming these impacts are indicative of those in other major grain producers, a nuclear war using much less than 1% of the current global arsenal could produce a global food crisis and put a billion people at risk of famine.
Vitamin A supplementation during war-emergency in Guinea-Bissau 1998-1999.
Nielsen, Jens; Benn, Christine S; Balé, Carlitos; Martins, Cesario; Aaby, Peter
2005-03-01
Vitamin A supplementation is recommended by WHO in emergency situations. To evaluate the impact of Vitamin A supplementation on childhood mortality in an emergency situation. Since this was not a randomised study, we evaluated the impact in different ways; we used the variation in the delay of provision of Vitamin A in a stepped-wedge design, compared wartime with pre-wartime mortality and examined whether Vitamin A as a free commodity reduced cultural and socio-economic inequalities in childhood mortality. 5926 children 6 months to 5 years of age, resident in four suburbs in the capital of Guinea-Bissau between October 1, 1998 and March 31, 1999. From October 1, 1998 until the end of the war in 1999 all children present in the study area were offered Vitamin A at regular three-monthly visits to their homes. Using the variation in the provision of Vitamin A, we found a slight non-significant reduction in mortality for children between 6 months and 5 years of age (mortality ratio (MR) 0.49; 95% CI 0.09-2.70). Comparing with a three-year period before the war, children offered Vitamin A at home during the war had a 12% reduction in mortality (MR 0.88; 0.41-1.87), whereas the overall impact of the war was an 89% increase in mortality (MR 1.89; 1.32-2.71). Vitamin A supplementation was associated with a reduction in cultural and socio-economic inequalities. Vitamin A supplementation may have a beneficial impact on childhood mortality in an emergency situation.
The Olympics and harm reduction?
2012-01-01
The current anti-doping policy (‘war on doping’) resembles the ‘war on drugs’ in several aspects, including a zero-tolerance approach, ideology encroaching on human rights and public health principles, high cost using public money for repression and control, and attempts to shape internationally harmonized legal frameworks to attain its aim. Furthermore, even if for different reasons, both wars seem not to be able to attain their objectives, and possibly lead to more harm to society than they can prevent. The Olympic buzz is mounting and we can expect multiple headlines in the media on doping and anti-doping stories related to this event. In this article we describe current anti-doping policy, reflect on its multiple unplanned consequences, and end with a discussion, if lessons learned from harm reduction experiences in the illicit drugs field could be applied to anti-doping. PMID:22788912
Extracting Cross-Ontology Weighted Association Rules from Gene Ontology Annotations.
Agapito, Giuseppe; Milano, Marianna; Guzzi, Pietro Hiram; Cannataro, Mario
2016-01-01
Gene Ontology (GO) is a structured repository of concepts (GO Terms) that are associated to one or more gene products through a process referred to as annotation. The analysis of annotated data is an important opportunity for bioinformatics. Among the different approaches to such analysis, the use of association rules (AR) provides useful knowledge by discovering biologically relevant associations between terms of GO that were not previously known. In a previous work, we introduced GO-WAR (Gene Ontology-based Weighted Association Rules), a methodology for extracting weighted association rules from ontology-based annotated datasets. We here adapt the GO-WAR algorithm to mine cross-ontology association rules, i.e., rules that involve GO terms present in the three sub-ontologies of GO. We conduct a thorough performance evaluation of GO-WAR by mining publicly available GO annotated datasets, showing how GO-WAR outperforms current state-of-the-art approaches.
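As a rough illustration of weighted association mining of the kind GO-WAR performs, the sketch below scores term pairs by combining plain support with a per-term weight (for example, information content). The transactions, weights, and the mean-weight aggregation rule are toy assumptions, not the paper's exact definitions.

```python
from itertools import combinations

# Toy sketch of weighted association mining over annotation "transactions".
# Term weights and the mean-weight aggregation are illustrative choices,
# not the exact GO-WAR definitions.

transactions = [
    {"GO:A", "GO:B", "GO:C"},
    {"GO:A", "GO:B"},
    {"GO:B", "GO:C"},
    {"GO:A", "GO:C"},
]
weight = {"GO:A": 0.9, "GO:B": 0.3, "GO:C": 0.7}  # e.g. information content

def weighted_support(itemset):
    # Fraction of transactions containing the itemset, scaled by the
    # mean weight of its items.
    freq = sum(itemset <= t for t in transactions) / len(transactions)
    w = sum(weight[i] for i in itemset) / len(itemset)
    return freq * w

items = sorted(weight)
pairs = {frozenset(p): weighted_support(frozenset(p))
         for p in combinations(items, 2)}
```

A weighting of this kind lets rare but informative terms outrank frequent, uninformative ones, which plain support alone cannot do.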
NASA Astrophysics Data System (ADS)
El-Shobokshy, Mohammad S.; Al-Saedi, Yaseen G.
This paper investigates some of the air pollution problems which were created as a result of the Gulf war in early 1991. Temporary periods of increased dust storm activity have been observed in Saudi Arabia, presumably due to disturbance of the desert surface by the extremely large number of tanks and other war machines before and during the war. The concentrations of inhalable dust particles (<15 μm) increased during the months just after the war to about 1.5 times their values during the same months of the previous year, 1990. The total horizontal solar energy flux in Riyadh was significantly reduced during dry days with no clouds. This is attributed to the presence of soot particles, which were generated at an extremely high rate from the burning oil fields in Kuwait. The direct normal solar insolation was also measured at the photovoltaic solar power plant in Riyadh during these days, and significant reductions were observed due to the effective absorption of solar radiation by soot particles. The generated power from the plant was reduced during days with a polluted atmosphere by about 50-80% of the expected value for such days, if the atmosphere were dry and clear.
NASA Astrophysics Data System (ADS)
Almeida, Miguel; Hildmann, Hanno; Solmaz, Gürkan
2017-08-01
Unmanned Aerial Vehicles (UAVs) have been used for reconnaissance and surveillance missions as far back as the Vietnam War, but with the recent rapid increase in autonomy, precision and performance capabilities - and due to the massive reduction in cost and size - UAVs have become pervasive products, available and affordable for the general public. The use cases for UAVs are in the areas of disaster recovery, environmental mapping & protection and increasingly also as extended eyes and ears of civil security forces such as fire-fighters and emergency response units. In this paper we present a swarm algorithm that enables a fleet of autonomous UAVs to collectively perform sensing tasks related to environmental and rescue operations and to dynamically adapt to e.g. changing resolution requirements. We discuss the hardware used to build our own drones and the settings under which we validate the proposed approach.
Nemesis Autonomous Test System
NASA Technical Reports Server (NTRS)
Barltrop, Kevin J.; Lee, Cin-Young; Horvath, Gregory A.; Clement, Bradley J.
2012-01-01
A generalized framework has been developed for systems validation that can be applied to both traditional and autonomous systems. The framework consists of an automated test case generation and execution system called Nemesis that rapidly and thoroughly identifies flaws or vulnerabilities within a system. By applying genetic optimization and goal-seeking algorithms on the test equipment side, a "war game" is conducted between a system and its complementary nemesis. The end result of the war games is a collection of scenarios that reveals any undesirable behaviors of the system under test. The software provides a reusable framework to evolve test scenarios with genetic algorithms, using an operational model of the system under test. It can automatically generate and execute test cases that reveal flaws in behaviorally complex systems. Genetic algorithms focus the exploration of tests on the set of test cases that most effectively reveals the flaws and vulnerabilities of the system under test. The framework leverages advances in state- and model-based engineering, which are essential in defining the behavior of autonomous systems. It also uses goal networks to describe test scenarios.
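A minimal sketch of the genetic-algorithm idea (evolving test inputs toward those that expose a flaw in a system under test) is shown below. The system under test and its hidden flaw are toy stand-ins invented for illustration, not part of Nemesis.

```python
import random

# Minimal genetic-algorithm sketch of Nemesis-style test generation:
# evolve test inputs toward values that expose a flaw in a system
# under test.  The "system" and its flaw are toy stand-ins.

random.seed(0)

def system_under_test(x):
    # Toy SUT with a hidden flaw at x == 37; 0 means the flaw fires.
    return abs(x - 37)

def fitness(x):
    return -system_under_test(x)  # higher fitness = closer to the flaw

def evolve(pop_size=20, generations=40, span=100):
    pop = [random.randrange(span) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # elitist selection
        children = [random.choice(parents) + random.randint(-3, 3)
                    for _ in range(pop_size - len(parents))]  # mutation
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Selection pressure concentrates the search on the neighborhood of the flaw, which is the same effect the abstract describes at the scale of whole test scenarios.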
Green, L W
1997-01-01
In this issue (see pages 187 to 191) Dr. Vivian H. Hamilton and associates demonstrate that tax reductions introduced in 5 Canadian provinces in 1994 slowed the rate of decline in cigarette consumption in those jurisdictions. Although both reductions and increases in taxation have been shown to influence tobacco consumption, changes in smoking habits must also be understood in the context of battles being waged on other fronts in the tobacco wars. In addition, more finely detailed analyses are needed to determine the impact of taxation and other factors on the smoking habits of specific subgroups of the population, particularly teenagers. PMID:9012722
Socio-Culturally Oriented Plan Discovery Environment (SCOPE)
2005-05-01
U.S. intelligence methods (Dr. George Friedman ( 2003 ) Saddam Hussein and the Dollar War. THE STRATFOR WEEKLY 18 December) 8 2.2. Evidence... 2003 ). In the EAGLE setting, we are using a modified version of the fuzzy segmentation algorithm developed by Udupa and his associates to...based (Fu, et al, 2003 ) and a cognitive model based (Eilbert, et al., 2002) algorithms, and a method for combining the results. (The method for
DESIGNING PROCESSES FOR ENVIRONMENTAL PROBLEMS
Designing for the environment requires consideration of environmental impacts. The Generalized WAR Algorithm is the methodology that allows the user to evaluate the potential environmental impact of the design of a chemical process. In this methodology, chemicals are assigned val...
N-Dimensional LLL Reduction Algorithm with Pivoted Reflection
Deng, Zhongliang; Zhu, Di
2018-01-01
The Lenstra-Lenstra-Lovász (LLL) lattice reduction algorithm and many of its variants have been widely used by cryptography, multiple-input-multiple-output (MIMO) communication systems and carrier phase positioning in global navigation satellite systems (GNSS) to solve the integer least squares (ILS) problem. In this paper, we propose an n-dimensional LLL reduction algorithm (n-LLL), expanding the Lovász condition in the LLL algorithm to n-dimensional space in order to obtain a further reduced basis. We also introduce pivoted Householder reflection into the algorithm to optimize the reduction time. For an m-order positive definite matrix, analysis shows that the n-LLL reduction algorithm will converge within finite steps and always produce better results than the original LLL reduction algorithm with n > 2. The simulations clearly prove that n-LLL is better than the original LLL in reducing the condition number of an ill-conditioned input matrix, with 39% improvement on average for typical cases, which can significantly reduce the searching space for solving the ILS problem. The simulation results also show that the pivoted reflection has significantly reduced the number of swaps in the algorithm by 57%, making n-LLL a more practical reduction algorithm. PMID:29351224
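For contrast with the n-LLL variant above, the textbook LLL procedure (size reduction, the Lovász condition, and the swap step) can be sketched as follows. This is the classic delta = 3/4 algorithm, not the paper's n-LLL, and it uses exact rational arithmetic for clarity rather than speed.

```python
from fractions import Fraction

# Textbook LLL lattice basis reduction (delta = 3/4).  Exact Fraction
# arithmetic keeps the sketch numerically safe for small integer bases.

def dot(u, v):
    return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

def gram_schmidt(B):
    """Orthogonalized vectors B* and projection coefficients mu."""
    n = len(B)
    Bs, mu = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = [Fraction(x) for x in B[i]]
        for j in range(i):
            mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
            v = [a - mu[i][j] * b for a, b in zip(v, Bs[j])]
        Bs.append(v)
    return Bs, mu

def lll(basis, delta=Fraction(3, 4)):
    B = [[Fraction(x) for x in row] for row in basis]
    n = len(B)
    Bs, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):               # size reduction
            q = round(mu[k][j])
            if q:
                B[k] = [a - q * b for a, b in zip(B[k], B[j])]
                Bs, mu = gram_schmidt(B)
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1                                   # Lovász condition holds
        else:
            B[k - 1], B[k] = B[k], B[k - 1]          # swap and step back
            Bs, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return [[int(x) for x in row] for row in B]

reduced = lll([[15, 23], [11, 17]])   # yields a short, nearly orthogonal basis
```

Recomputing the Gram-Schmidt data after every change is wasteful but keeps the sketch short; practical implementations (Schnorr-Euchner included) update mu incrementally instead.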
Li, Junfeng; Yang, Lin; Zhang, Jianping; Yan, Yonghong; Hu, Yi; Akagi, Masato; Loizou, Philipos C
2011-05-01
A large number of single-channel noise-reduction algorithms have been proposed, based largely on mathematical principles. Most of these algorithms, however, have been evaluated with English speech. Given the different perceptual cues used by native listeners of different languages, including tonal languages, it is of interest to examine whether there are any language effects when the same noise-reduction algorithm is used to process noisy speech in different languages. This study undertakes a comparative evaluation of various single-channel noise-reduction algorithms applied to noisy speech taken from three languages: Chinese, Japanese, and English. Clean speech signals (Chinese words and Japanese words) were first corrupted by three types of noise at two signal-to-noise ratios and then processed by five single-channel noise-reduction algorithms. The processed signals were finally presented to normal-hearing listeners for recognition. Intelligibility evaluation showed that the majority of noise-reduction algorithms did not improve speech intelligibility. Consistent with a previous study with the English language, the Wiener filtering algorithm produced small, but statistically significant, improvements in intelligibility for car and white noise conditions. Significant differences between the performances of noise-reduction algorithms across the three languages were observed.
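The Wiener filtering approach mentioned above can be sketched per frequency bin: the gain is SNR/(SNR+1), with the SNR estimated by power subtraction. The spectra and the gain floor below are made-up toy values, not speech data from the study.

```python
# Sketch of the per-frequency Wiener gain used by single-channel
# noise-reduction algorithms.  The spectra are made-up toy numbers.

def wiener_gain(noisy_power, noise_power, floor=0.01):
    """G = SNR / (SNR + 1), with SNR estimated by power subtraction."""
    snr = max(noisy_power / noise_power - 1.0, 0.0)
    return max(snr / (snr + 1.0), floor)

noisy = [4.0, 9.0, 1.0, 16.0]    # |Y(f)|^2 per frequency bin
noise = [1.0, 1.0, 1.0, 1.0]     # estimated noise power per bin

# Gain applies to amplitude, so each power bin is scaled by G squared.
enhanced = [p * wiener_gain(p, n) ** 2 for p, n in zip(noisy, noise)]
```

Bins dominated by noise are driven toward the floor while high-SNR bins pass almost unchanged, which is why the filter suppresses noise without guaranteeing an intelligibility gain.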
Ready, David J; Thomas, Kaprice R; Worley, Virginia; Backscheider, Andrea G; Harvey, Leigh Anne C; Baltzell, David; Rothbaum, Barbara Olasov
2008-04-01
Group-based exposure therapy (GBET) was field-tested with 102 veterans with war-related posttraumatic stress disorder (PTSD). Nine to 11 patients attended 3 hours of group therapy per day twice weekly for 16-18 weeks. Stress management and a minimum of 60 hours of exposure were included (3 hours of within-group war-trauma presentations per patient, 30 hours of listening to recordings of the patient's own war-trauma presentations and 27 hours of hearing other patients' war-trauma presentations). Analysis of assessments conducted by treating clinicians pre-, post- and 6-month posttreatment suggests that GBET produced clinically significant and lasting reductions in PTSD symptoms for most patients on both clinician symptom ratings (6-month posttreatment effect size delta = 1.22) and self-report measures, with only three dropouts.
2016-09-01
identification and tracking algorithm. Subject terms: unmanned ground vehicles, pure pursuit, vector field histogram, feature recognition...located within the various theaters of war. The pace of development and deployment of unmanned ground vehicles (UGVs) was, however, not keeping...The development and fielding of UGVs in an operational role are not a new concept on the battlefield.
Association of Islamic Prayer with Psychological Stability in Bosnian War Veterans.
Pajević, Izet; Sinanović, Osman; Hasanović, Mevludin
2017-12-01
To compare the outcomes among war veterans who pray or do not pray and who were not suffering from mental disorders after the Bosnia-Herzegovina war (1992-95). The sample consists of 100 healthy Bosnian war veterans divided into two equal groups: one highly religious group of individuals who perform the five obligatory prayers every day, and another group of individuals who do not practice any daily prayer. We used the Minnesota Multiphasic Personality Inventory (MMPI), Profile Index of Emotions (PIE) and Life Style Questionnaire (LSQ). War veterans who prayed had significantly higher levels of incorporation, self-protection, and reactive formation, but significantly lower levels of regression, compensation, transfer, non-controlling, oppositionality and aggressiveness than their peers who did not pray. Practicing religion (regularly performing daily prayers) is associated with a reduction of the tendency toward risk-taking, impulsiveness, and aggression. It is also associated with more successful overcoming of emotional conflicts in war veterans who practiced religion compared with their peers who did not.
On distribution reduction and algorithm implementation in inconsistent ordered information systems.
Zhang, Yanqin
2014-01-01
As one part of our work on ordered information systems, distribution reduction is studied in inconsistent ordered information systems (OISs). Some important properties of distribution reduction are studied and discussed. The dominance matrix is restated for reduction acquisition in dominance-relation-based information systems. A matrix algorithm for distribution reduction acquisition is presented step by step, and a program implementing the algorithm is developed. The approach provides an effective tool for theoretical research and for applications of ordered information systems in practice. For more detailed and valid illustration, cases are employed to explain and verify the algorithm and the program, which shows the effectiveness of the algorithm in complicated information systems.
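The dominance-matrix construction at the core of such algorithms can be sketched on a toy table: entry D[i][j] records whether object i is at least as good as object j on every criterion attribute. The objects and attribute values below are made up for illustration.

```python
# Toy sketch of a dominance matrix for an ordered information system.
# Each object is a tuple of criterion values; higher is better.
# The data are made-up illustrative values.

objects = [
    (3, 2, 1),
    (2, 2, 1),
    (3, 3, 2),
    (1, 1, 1),
]

def dominates(a, b):
    """a dominates b when a is at least as good on every attribute."""
    return all(x >= y for x, y in zip(a, b))

D = [[dominates(a, b) for b in objects] for a in objects]

# Dominating class of object i: the set of objects that dominate it.
dominating_class = [
    {j for j in range(len(objects)) if D[j][i]}
    for i in range(len(objects))
]
```

Reduction algorithms then ask which attributes can be dropped while every dominating class (and hence the induced preference structure) stays unchanged.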
Synchronization Of Parallel Discrete Event Simulations
NASA Technical Reports Server (NTRS)
Steinman, Jeffrey S.
1992-01-01
Adaptive, parallel, discrete-event-simulation-synchronization algorithm, Breathing Time Buckets, developed in Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) operating system. Algorithm allows parallel simulations to process events optimistically in fluctuating time cycles that naturally adapt while simulation in progress. Combines best of optimistic and conservative synchronization strategies while avoiding major disadvantages. Algorithm processes events optimistically in time cycles adapting while simulation in progress. Well suited for modeling communication networks, for large-scale war games, for simulated flights of aircraft, for simulations of computer equipment, for mathematical modeling, for interactive engineering simulations, and for depictions of flows of information.
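The optimistic time-cycle idea can be sketched with a toy event queue: events are processed optimistically until one would cross the "event horizon", the earliest timestamp among newly generated events, and only events before that horizon are committed. This is a simplified single-process illustration of the concept, not the SPEEDES implementation.

```python
import heapq

# Sketch of the "event horizon" idea behind Breathing Time Buckets:
# process pending events optimistically, then commit only those with
# timestamps earlier than the earliest newly generated event.

def breathing_cycle(pending, handler):
    """One cycle: returns (committed events, still-pending events)."""
    heapq.heapify(pending)
    processed, generated = [], []
    horizon = float("inf")
    while pending and pending[0][0] < horizon:
        event = heapq.heappop(pending)
        processed.append(event)
        for new in handler(event):            # optimistic execution
            generated.append(new)
            horizon = min(horizon, new[0])    # event horizon shrinks
    committed = [e for e in processed if e[0] < horizon]
    rolled_back = [e for e in processed if e[0] >= horizon]
    return committed, rolled_back + generated + pending

def demo_handler(event):
    # Toy model: event "a" schedules a follow-up three ticks later.
    t, name = event
    return [(t + 3, name + "'")] if name == "a" else []

committed, remaining = breathing_cycle([(1, "a"), (2, "b"), (5, "c")], demo_handler)
```

Events beyond the horizon are handed back for the next cycle, so the time window "breathes": it expands while no new events arrive and contracts when they do.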
26 CFR 1.961-2 - Reduction in basis of stock in foreign corporations and of other property.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 26 Internal Revenue 10 2011-04-01 2011-04-01 false Reduction in basis of stock in foreign... Corporations § 1.961-2 Reduction in basis of stock in foreign corporations and of other property. (a) Reduction... excluded amount, by the sum of the amount so excluded and any income, war profits, or excess profits taxes...
Psychological Resilience: Preparing our Soldiers for War
2011-03-23
Reduction, Suicide Prevention, Report 2010, 26. Tanielian, "Stop Loss," 1. Will Durant, The Story of Civilization: The Life of Greece (New...mental health support. This paper addresses the issue of psychologically preparing our Soldiers for war: building psychological resilience. It will also address how the human dimension of leadership can assist in achieving this effect.
Parallel Algorithms for Groebner-Basis Reduction
1987-09-25
Technical Report: Parallel Algorithms for Groebner-Basis Reduction (Productivity Engineering in the UNIX Environment).
Belskikh, A N; Basharin, V A; Chepur, S V; Khalimov, Yu Sh; Markizova, N F
2015-08-01
The article describes how the medical service dealt with problems resulting from the use of chemical weapons during the First World War (1914-1918). It was revealed that many of the abovementioned problems remain unsolved to the present day. The threat of the use of chemical weapons persists in modern military conflicts, which expands the area of responsibility for medical chemical protection. The authors substantiate the necessity of, and an algorithm for, a training system considered as a part of medical protection against adverse factors of a chemical nature.
Subjective quality of life in war-affected populations.
Matanov, Aleksandra; Giacco, Domenico; Bogic, Marija; Ajdukovic, Dean; Franciskovic, Tanja; Galeazzi, Gian Maria; Kucukalic, Abdulah; Lecic-Tosevski, Dusica; Morina, Nexhmedin; Popovski, Mihajlo; Schützwohl, Matthias; Priebe, Stefan
2013-07-02
Exposure to traumatic war events may lead to a reduction in quality of life for many years. Research suggests that these impairments may be associated with posttraumatic stress symptoms; however, wars also have a profound impact on social conditions. Systematic studies utilising subjective quality of life (SQOL) measures are particularly rare and research in post-conflict settings is scarce. Whether social factors independently affect SQOL after war in addition to symptoms has not been explored in large scale studies. War-affected community samples were recruited through a random-walk technique in five Balkan countries and through registers and networking in three Western European countries. The interviews were carried out on average 8 years after the war in the Balkans. SQOL was assessed on Manchester Short Assessment of Quality of Life--MANSA. We explored the impact of war events, posttraumatic stress symptoms and post-war environment on SQOL. We interviewed 3313 Balkan residents and 854 refugees in Western Europe. The MANSA mean score was 4.8 (SD = 0.9) for the Balkan sample and 4.7 (SD = 0.9) for refugees. In both samples participants were explicitly dissatisfied with their employment and financial situation. Posttraumatic stress symptoms had a strong negative impact on SQOL. Traumatic war events were directly linked with lower SQOL in Balkan residents. The post-war environment influenced SQOL in both groups: unemployment was associated with lower SQOL and recent contacts with friends with higher SQOL. Experiencing more migration-related stressors was linked to poorer SQOL in refugees. Both posttraumatic stress symptoms and aspects of the post-war environment independently influence SQOL in war-affected populations. Aid programmes to improve wellbeing following the traumatic war events should include both treatment of posttraumatic symptoms and social interventions.
Parallel Lattice Basis Reduction Using a Multi-threaded Schnorr-Euchner LLL Algorithm
NASA Astrophysics Data System (ADS)
Backes, Werner; Wetzel, Susanne
In this paper, we introduce a new parallel variant of the LLL lattice basis reduction algorithm. Our new, multi-threaded algorithm is the first to provide an efficient, parallel implementation of the Schnorr-Euchner algorithm for today's multi-processor, multi-core computer architectures. Experiments with sparse and dense lattice bases show a speed-up factor of about 1.8 for the 2-thread version and about 3.2 for the 4-thread version of our new parallel lattice basis reduction algorithm in comparison to the traditional non-parallel algorithm.
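The multi-threaded Schnorr-Euchner variant is beyond a short example, but the basis-reduction idea it parallelizes can be illustrated with the classical two-dimensional Lagrange-Gauss reduction, which LLL generalizes to higher dimensions. A minimal sketch (illustrative only, not the paper's algorithm):

```python
# Lagrange-Gauss reduction of a 2-D lattice basis: repeatedly subtract
# the rounded projection of the longer vector onto the shorter one.
# This is the 2-D special case of the reduction idea LLL generalizes.

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def lagrange_gauss(b1, b2):
    """Return a reduced (shortest-vector) basis of the lattice spanned by b1, b2."""
    b1, b2 = list(b1), list(b2)
    while True:
        if dot(b2, b2) < dot(b1, b1):
            b1, b2 = b2, b1          # keep b1 the shorter vector
        m = round(dot(b1, b2) / dot(b1, b1))
        if m == 0:                   # b2 cannot be shortened further
            return b1, b2
        b2 = [b2[0] - m * b1[0], b2[1] - m * b1[1]]
```

For the basis {(1, 1), (3, 4)} of the integer lattice, the reduction yields two unit-length vectors while preserving the basis determinant up to sign.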
Gaussian diffusion sinogram inpainting for X-ray CT metal artifact reduction.
Peng, Chengtao; Qiu, Bensheng; Li, Ming; Guan, Yihui; Zhang, Cheng; Wu, Zhongyi; Zheng, Jian
2017-01-05
Metal objects implanted in the bodies of patients usually generate severe streaking artifacts in reconstructed images of X-ray computed tomography, which degrade the image quality and affect the diagnosis of disease. Therefore, it is essential to reduce these artifacts to meet clinical demands. In this work, we propose a Gaussian diffusion sinogram inpainting metal artifact reduction algorithm based on prior images to reduce these artifacts for fan-beam computed tomography reconstruction. In this algorithm, prior information that originated from a tissue-classified prior image is used for the inpainting of metal-corrupted projections, and it is incorporated into a Gaussian diffusion function. The prior knowledge is particularly designed to locate the diffusion position and improve the sparsity of the subtraction sinogram, which is obtained by subtracting the prior sinogram of the metal regions from the original sinogram. The sinogram inpainting algorithm is implemented through an approach of diffusing prior energy and is then solved by gradient descent. The performance of the proposed metal artifact reduction algorithm is compared with two conventional metal artifact reduction algorithms, namely the interpolation metal artifact reduction algorithm and the normalized metal artifact reduction algorithm, on both simulated and clinical datasets. By subjective evaluation, the proposed metal artifact reduction algorithm causes fewer secondary artifacts than the two conventional algorithms, which lead to severe secondary artifacts resulting from inappropriate interpolation and normalization. Additionally, the objective evaluation shows the proposed approach has the smallest normalized mean absolute deviation and the highest signal-to-noise ratio, indicating that the proposed method has produced the image with the best quality.
For both the simulated and the clinical datasets, the proposed algorithm clearly reduced the metal artifacts.
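The interpolation-based baseline that the proposed method is compared against can be sketched in a few lines: within each projection row, detector bins flagged as metal-corrupted are replaced by linear interpolation from the nearest clean bins. This is an illustrative sketch of the baseline only, not the Gaussian diffusion algorithm:

```python
# Toy interpolation-based metal artifact reduction (MAR): in one
# projection row, each contiguous run of metal-corrupted bins is
# replaced by linear interpolation between the clean bins bounding it.

def inpaint_row(row, mask):
    """Linearly interpolate values of `row` where `mask` is True."""
    out = list(row)
    n = len(row)
    i = 0
    while i < n:
        if mask[i]:
            j = i
            while j < n and mask[j]:       # find the end of the masked run
                j += 1
            left = out[i - 1] if i > 0 else (out[j] if j < n else 0.0)
            right = out[j] if j < n else left
            gap = j - i + 1
            for k in range(i, j):          # fill the run linearly
                t = (k - i + 1) / gap
                out[k] = (1 - t) * left + t * right
            i = j
        else:
            i += 1
    return out
```

Applied to every row of the sinogram, this removes the metal trace before reconstruction; the paper's point is that such plain interpolation can introduce the secondary artifacts its diffusion-based inpainting avoids.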
Joining Forces: Preparing to Fight Coalition Air War
2013-06-01
as a communications officer, he graduated from pilot training and was assigned to Dyess AFB, Texas, as a B-1 pilot. Following an operational...the reality of the deficiencies themselves. The deficiencies may require a reduction in global commitments, which might increase security risks...the Air Power Challenges of the Post -Cold War Era (Maxwell AFB, AL: Air University Press, 2011), 28. 13 Benjamin S. Lambeth, The Transformation of
Breaking the Nordic Defense Deadlock
2015-02-01
popular hopes for internation- al achievements in the disarmament field all contrib- uted to the perception among liberals in Sweden that reductions...during World War II and neither was it in her interest. Sweden was nonaligned, and adapted to the changing war situation. Strong popular support for...United Nations] was a ‘ luxury good’, only affordable because the Nordics were allowed a free ride on a security order created by the presence of an
Prevalence of Gulf war veterans who believe they have Gulf war syndrome: questionnaire study
Chalder, T; Hotopf, M; Unwin, C; Hull, L; Ismail, K; David, A; Wessely, S
2001-01-01
Objectives: To determine how many veterans in a random sample of British veterans who served in the Gulf war believe they have “Gulf war syndrome,” to examine factors associated with the presence of this belief, and to compare the health status of those who believe they have Gulf war syndrome with those who do not. Design: Questionnaire study asking British Gulf war veterans whether they believe they have Gulf war syndrome and about symptoms, fatigue, psychological distress, post-traumatic stress, physical functioning, and their perception of health. Participants: 2961 respondents to questionnaires sent out to a random sample of 4250 Gulf war veterans (69.7%). Main outcome measure: The proportion of veterans who believe they have Gulf war syndrome. Results: Overall, 17.3% (95% confidence interval 15.9 to 18.7) of the respondents believed they had Gulf war syndrome. The belief was associated with the veteran having poor health, not serving in the army when responding to the questionnaire, and having received a high number of vaccinations before deployment to the Gulf. The strongest association was knowing another person who also thought they had Gulf war syndrome. Conclusions: Substantial numbers of British Gulf war veterans believe they have Gulf war syndrome, which is associated with psychological distress, a high number of symptoms, and some reduction in activity levels. A combination of biological, psychological, and sociological factors is associated with the belief, and these factors should be addressed in clinical practice.
What is already known on this topic: The term Gulf war syndrome has been used to describe illnesses and symptoms experienced by veterans of the 1991 Gulf war. Concerns exist over the validity of Gulf war syndrome as a unique entity. What this study adds: 17% of Gulf war veterans believe they have Gulf war syndrome. Holding the belief is associated with worse health outcomes. Knowing someone else who believes they have Gulf war syndrome, and receiving more vaccinations, were associated with holding the belief. PMID:11532836
Nonsequential Computation and Laws of Nature.
1986-05-01
computing engines arose as a byproduct of the Manhattan Project in World War II. Broadly speaking, their purpose was to compute numerical solutions to...nature, and to representing algorithms in structures of space and time. After the Manhattan Project had been fulfilled, computer designers quickly pro
Analyzing radiation absorption difference of dental substance by using Dual CT
NASA Astrophysics Data System (ADS)
Yu, H.; Lee, H. K.; Cho, J. H.; Yang, H. J.; Ju, Y. S.
2015-07-01
The purpose of this study was to evaluate the changes of noise and computed tomography (CT) number in each dental substance when using a metal artefact reduction algorithm; we used dual CT for this study. For the study, we prepared resin, titanium, gypsum, and wax samples, materials widely used by dentists. In addition, we prepared nickel to increase the artefact. While making the study materials, we made sure that the substances could be inserted inside the phantom without difficulty. We scanned before and after applying the metal artefact reduction algorithm, and analysed the average CT number and noise in each case. As a result, there was no overall difference in CT number and noise before and after using the metal artefact reduction algorithm. However, comparing the noise values of the substances after applying the metal artefact reduction algorithm, wax's noise value was the lowest whereas titanium's was the highest. For nickel, the noise value in the artefact area decreased when the metal artefact reduction algorithm was applied. In conclusion, we conclude that the effectiveness of CT examination can be increased by applying dual energy's metal artefact reduction algorithm.
2017-03-01
Office of Management and Budget, Paperwork Reduction Project (0704-0188) Washington, DC 20503. 1. AGENCY USE ONLY (Leave blank) 2. REPORT DATE...thesis. Stephanie managed to identify deficiencies that I otherwise might have overlooked. Stephanie was very flexible with helping me, regardless of...study will help to identify trends in external insurgency support by evaluating how an insurgency within Algeria managed to resist France. This case
Code of Federal Regulations, 2011 CFR
2011-04-01
... 26 Internal Revenue 9 2011-04-01 2011-04-01 false Reduction of creditable FORI taxes (for taxable...) Income from Sources Without the United States § 1.907(b)-1 Reduction of creditable FORI taxes (for..., war profits, or excess profits taxes. Section 907(b) will apply to a person regardless of whether that...
Optimizing Controlling-Value-Based Power Gating with Gate Count and Switching Activity
NASA Astrophysics Data System (ADS)
Chen, Lei; Kimura, Shinji
In this paper, a new heuristic algorithm is proposed to optimize power domain clustering in controlling-value-based (CV-based) power gating technology. In this algorithm, both the switching activity of sleep signals (p) and the overall number of sleep gates (gate count, N) are considered, and the sum of the product of p and N is optimized. The algorithm effectively exploits the total power reduction obtainable from CV-based power gating. Even when the maximum depth is kept the same, the proposed algorithm can still achieve approximately 10% more power reduction than the prior algorithms. Furthermore, a detailed comparison between the proposed heuristic algorithm and other possible heuristic algorithms is also presented. HSPICE simulation results show that over 26% total power reduction can be obtained by using the new heuristic algorithm. In addition, the effect of dynamic power reduction through the CV-based power gating method and the delay overhead caused by the switching of sleep transistors are also shown in this paper.
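The abstract does not detail the clustering heuristic itself; the following is a hypothetical sketch of the p-times-N trade-off being optimized. Gates are sorted by the switching activity p of their sleep signals and cut into contiguous clusters, with each domain's activity approximated by the maximum p of its members. The grouping rule and cost model are assumptions for illustration, not the authors' algorithm:

```python
# Illustrative greedy clustering for power-gating domains (assumed model):
# cost of a clustering = sum over domains of (domain activity) * (gate count),
# where the domain activity is approximated by the max sleep-signal
# activity p among its member gates.

def cluster_cost(clusters):
    return sum(max(c) * len(c) for c in clusters)

def greedy_domains(activities, k):
    """Sort gates by activity and cut into k contiguous clusters."""
    gates = sorted(activities, reverse=True)
    size = -(-len(gates) // k)                 # ceiling division
    return [gates[i:i + size] for i in range(0, len(gates), size)]
```

Grouping similar-activity gates keeps a few high-p gates from inflating the effective activity of a large domain: for activities [0.9, 0.1, 0.8, 0.2] and k = 2, the sorted clustering costs 2.2, versus 3.4 for the interleaved grouping [[0.9, 0.1], [0.8, 0.2]].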
Kim, Song-Ju; Aono, Masashi; Hara, Masahiko
2010-07-01
We propose a model - the "tug-of-war (TOW) model" - to conduct unique parallel searches using many nonlocally-correlated search agents. The model is based on the property of a single-celled amoeba, the true slime mold Physarum, which maintains a constant intracellular resource volume while collecting environmental information by concurrently expanding and shrinking its branches. The conservation law entails a "nonlocal correlation" among the branches, i.e., a volume increment in one branch is immediately compensated by volume decrement(s) in the other branch(es). This nonlocal correlation was shown to be useful for decision making in the case of a dilemma. The multi-armed bandit problem is to determine the optimal strategy for maximizing the total reward sum under incompatible demands: either exploiting the rewards obtained using the already collected information, or exploring new information for acquiring higher payoffs involving risks. Our model can efficiently manage this "exploration-exploitation dilemma" and exhibits good performance. The average accuracy rate of our model is higher than those of well-known algorithms such as the modified ε-greedy algorithm and the modified softmax algorithm, especially for solving relatively difficult problems. Moreover, our model flexibly adapts to changing environments, a property essential for living organisms surviving in uncertain environments.
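The conservation idea can be sketched for a two-armed bandit: a single displacement variable stands in for the conserved resource, so any gain credited to one arm is simultaneously a loss for the other. Every detail below (the update rule and the win-stay/lose-shift behavior it induces) is an illustrative assumption, not the authors' exact model:

```python
# Toy tug-of-war (TOW) style two-armed bandit player. A single conserved
# displacement x decides the arm: x >= 0 plays arm A, x < 0 plays arm B.
# A reward pulls x toward the played arm; a loss pushes it away, so one
# arm's gain is exactly the other's loss (the "nonlocal correlation").

import random

def tow_bandit(p_a, p_b, steps, seed=0):
    rng = random.Random(seed)
    x = 0.0
    wins = 0
    for _ in range(steps):
        play_a = x >= 0
        p = p_a if play_a else p_b
        reward = rng.random() < p
        wins += reward
        delta = 1.0 if reward else -1.0
        x += delta if play_a else -delta
    return wins / steps   # fraction of rewarded plays
```

With p_a = 0.8 and p_b = 0.2, the drift of the walk keeps it on the better arm for most steps, so the reward rate stays well above the 0.5 achieved by playing at random.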
Impacts of Geoengineering and Nuclear War on Chinese Agriculture
NASA Astrophysics Data System (ADS)
Xia, L.; Robock, A.
2011-12-01
Climate is one of the most important factors determining crop yields and world food supplies. To be well prepared for possible futures, it is necessary to study yield changes of major crops under different climate scenarios. Here we consider two situations: stratospheric sulfate geoengineering and nuclear war. Although we certainly do not advocate either scenario, we cannot exclude the possibilities: if global warming is getting worse, we might have to deliberately manipulate global temperature; if nuclear weapons still exist, we might face a nuclear war catastrophe. Since in both scenarios there would be reductions of temperature, precipitation, and insolation, which are three controlling factors on crop growth, it is important to study food supply changes under the two cases. We conducted our simulations for China, because it has the highest population and crop production in the world and it is under the strong influence of the summer monsoon, which would be altered in geoengineering and nuclear war scenarios. To examine the effects of climate changes induced by geoengineering and nuclear war on Chinese agriculture, we use the DSSAT crop model. We first evaluate the model by forcing it with daily weather data and management practices for the period 1978-2008 for all the provinces in China, and compare the results to observations of the yields of major crops in China (middle season rice, winter wheat, and maize). Then we perturbed observed weather data using climate anomalies for geoengineering and nuclear war simulations using NASA GISS ModelE. For stratospheric geoengineering, we consider the injection of 5 Tg SO2 per year into the tropical lower stratosphere. For the nuclear war scenario, we consider the effects of 5 Tg of soot that could be injected into the upper troposphere by a war between India and Pakistan using only 100 Hiroshima-size atomic bombs dropped on cities. 
We perturbed each year of the 31-year climate record with anomalies from each year of geoengineering and nuclear war simulations for different regions in China. Without changes of agricultural technology, we found that in both climate scenarios, the national crop production decreases, but different regions responded differently, indicating that the climate under which agriculture is conducted is a key factor to determine the impacts of geoengineering and nuclear war on agriculture. In southern China, the cooling helps the rice and maize grow. In northern China, the cooling makes the temperatures so cold that it hurts crop productivity, and in western China, the reduction of precipitation causes failed crop growth. To adapt to geoengineering and nuclear war scenarios, we could substitute crops that would grow better in the perturbed climate, increase fertilizer usage, irrigate agricultural land, change planting date, or change to seeds which are tolerant of cooler and drier climates.
Gupta, Raghav; Kim, Christopher; Agarwal, Nitin; Lieber, Bryan; Monaco, Edward A
2015-11-01
Parkinson disease (PD) is a common neurodegenerative disorder characterized by the presence of Lewy bodies and a reduction in the number of dopaminergic neurons in the substantia nigra of the basal ganglia. Common symptoms of PD include a reduction in control of voluntary movements, rigidity, and tremors. Such symptoms are marked by a severe deterioration in motor function. The causes of PD in many cases are unknown. PD has been found to be prominent in several notable people, including Adolf Hitler, the Chancellor of Germany and Führer of Nazi Germany during World War II. It is believed that Adolf Hitler suffered from idiopathic PD throughout his life. However, the effect of PD on Adolf Hitler's decision making during World War II is largely unknown. Here we examine the potential role of PD in shaping Hitler's personality and influencing his decision-making. We propose that Germany's defeat in World War II was influenced by Hitler's questionable and risky decision-making and his inhumane and callous personality, both of which were likely affected by his condition. Likewise, his paranoid disorder, marked by intense anti-Semitic beliefs, influenced his treatment of Jews and other non-Germanic peoples. We also suggest that the condition played an important role in his eventual political decline. Copyright © 2015 Elsevier Inc. All rights reserved.
Jackson, Lydia Eckstein; Gaertner, Lowell
2010-01-01
Right-wing authoritarianism (RWA) and social dominance orientation (SDO) are associated with the approval of war as a political intervention [McFarland, 2005]. We examined whether the effects of RWA and SDO on war support are mediated by moral-disengagement mechanisms [i.e., responsibility reduction, moral justification, minimizing consequences, and dehumanizing-blaming victims; Bandura, 1999] and whether the ideologies use the mechanisms differently. Our data were consistent with the possibility that minimizing consequences (Study 1) and moral justification (Study 2) mediate the effects of RWA and SDO on approval of war. Both ideologies were positively associated with all moral-disengagement mechanisms, though more strongly so for RWA. Comparisons within ideologies suggest that RWA was most strongly associated with moral justification and SDO was most strongly associated with dehumanizing-blaming victims. We discuss implications and limitations.
Agriculture Impacts of Regional Nuclear Conflict
NASA Astrophysics Data System (ADS)
Xia, Lili; Robock, Alan; Mills, Michael; Toon, Owen Brian
2013-04-01
One of the major consequences of nuclear war would be climate change due to massive smoke injection into the atmosphere. Smoke from burning cities can be lofted into the stratosphere where it will have an e-folding lifetime of more than 5 years. The climate changes include significant cooling, reduction of solar radiation, and reduction of precipitation. Each of these changes can affect agricultural productivity. To investigate the response from a regional nuclear war between India and Pakistan, we used the Decision Support System for Agrotechnology Transfer agricultural simulation model. We first evaluated the model by forcing it with daily weather data and management practices in China and the USA for rice, maize, wheat, and soybeans. Then we perturbed observed weather data using monthly climate anomalies for a 10-year period due to a simulated 5 Tg soot injection that could result from a regional nuclear war between India and Pakistan, using a total of 100 15-kt atomic bombs, much less than 1% of the current global nuclear arsenal. We computed anomalies using the NASA Goddard Institute for Space Studies ModelE and NCAR's Whole Atmosphere Community Climate Model (WACCM). We perturbed each year of the observations with anomalies from each year of the 10-year nuclear war simulations. We found that different regions respond differently to a regional nuclear war; southern regions show slight increases of crop yields while in northern regions crop yields drop significantly. Sensitivity tests show that temperature changes due to nuclear war are more important than precipitation and solar radiation changes in affecting crop yields in the regions we studied. In total, crop production in China and the USA would decrease 15-50% averaged over the 10 years using both models' output. Simulations forced by ModelE output show smaller impacts than simulations forced by WACCM output at the end of the 10-year period because of the different temperature responses in the two models.
Cartes, David A; Ray, Laura R; Collier, Robert D
2002-04-01
An adaptive leaky normalized least-mean-square (NLMS) algorithm has been developed to optimize stability and performance of active noise cancellation systems. The research addresses LMS filter performance issues related to insufficient excitation, nonstationary noise fields, and time-varying signal-to-noise ratio. The adaptive leaky NLMS algorithm is based on a Lyapunov tuning approach in which three candidate algorithms, each of which is a function of the instantaneous measured reference input, measurement noise variance, and filter length, are shown to provide varying degrees of tradeoff between stability and noise reduction performance. Each algorithm is evaluated experimentally for reduction of low frequency noise in communication headsets, and stability and noise reduction performance are compared with that of traditional NLMS and fixed-leakage NLMS algorithms. Acoustic measurements are made in a specially designed acoustic test cell which is based on the original work of Ryan et al. ["Enclosure for low frequency assessment of active noise reducing circumaural headsets and hearing protection," Can. Acoust. 21, 19-20 (1993)] and which provides a highly controlled and uniform acoustic environment. The stability and performance of the active noise reduction system, including a prototype communication headset, are investigated for a variety of noise sources ranging from stationary tonal noise to highly nonstationary measured F-16 aircraft noise over a 20 dB dynamic range. Results demonstrate significant improvements in stability of Lyapunov-tuned LMS algorithms over traditional leaky or nonleaky normalized algorithms, while providing noise reduction performance equivalent to that of the NLMS algorithm for idealized noise fields.
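The paper's contribution is a Lyapunov-tuned, time-varying leakage; the fixed-leakage NLMS baseline it generalizes can be sketched as below. The constant leakage `gamma` and step size `mu` are illustrative values, not the tuned quantities from the paper:

```python
# Leaky normalized LMS (NLMS) adaptive filter: the leakage term
# (1 - mu*gamma) bleeds energy out of the weights each step, trading a
# small bias for robustness under insufficient excitation; eps guards
# the normalization against a zero-energy input buffer.

def leaky_nlms(x, d, n_taps, mu=0.5, gamma=1e-3, eps=1e-8):
    """Adapt an n_taps FIR filter so its output tracks d given reference x."""
    w = [0.0] * n_taps
    buf = [0.0] * n_taps
    errors = []
    for xi, di in zip(x, d):
        buf = [xi] + buf[:-1]                      # newest sample first
        y = sum(wi * bi for wi, bi in zip(w, buf))
        e = di - y
        norm = sum(bi * bi for bi in buf) + eps
        w = [(1 - mu * gamma) * wi + mu * e * bi / norm
             for wi, bi in zip(w, buf)]
        errors.append(e)
    return w, errors
```

Driving the filter with a varied reference and a desired signal produced by a known 2-tap system recovers that system's coefficients to within the small leakage-induced bias.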
Some lessons from NACA/NASA aerodynamic studies following World War II
NASA Technical Reports Server (NTRS)
Spearman, M. L.
1983-01-01
An historical account is presented of the new departures in aerodynamic research conducted by NACA, and subsequently NASA, as a result of novel aircraft technologies and operational regimes encountered in the course of the Second World War. The invention and initial development of the turbojet engine furnished the basis for a new speed/altitude regime in which numerous aerodynamic design problems arose. These included compressibility effects near the speed of sound, with attendant lift/drag efficiency reductions and longitudinal stability enhancements that were accompanied by a directional stability reduction. Major research initiatives were mounted in the investigation of swept, delta, trapezoidal and variable sweep wing configurations, sometimes conducted through flight testing of the 'X-series' aircraft. Attention is also given to the development of the first generation of supersonic fighter aircraft.
The Complicated Facial War Injury: Pitfalls and Mismanagement.
Abu-Sittah, Ghassan S; Baroud, Joe; Hakim, Christopher; Wakil, Cynthia
2017-01-01
The aim of this paper is to share the authors' experience in the management of complicated facial war injuries using free tissue transfer. A discussion of the most commonly encountered pitfalls in management during the acute and complicated settings is presented in an effort to raise insight into facial war wound complications. Two patients with complicated facial war injuries are presented to exemplify the pitfalls in acute and chronic management of the mandibular region in the first patient and the orbito-maxillary region in the second. The examples demonstrate free tissue transfer for early as well as late definitive reconstructions. A reconstruction algorithm or consensus regarding the optimal management plan for complicated facial war injuries is not attainable. The main principles of treatment, however, remain to decrease bacterial burden by adequate aggressive debridement followed by revisit sessions, removal of all infected hardware followed by replacement with external bony fixation if necessary, and revival of the affected area by coverage with well-vascularized tissues and bone. The latter is feasible via local, regional, or distant tissue transfer depending on the extent of injury, the surgeon's experience, and the time and personnel available. Free tissue transfer has revolutionized the management of complicated facial war injuries associated with soft tissue or bone loss, as it has allowed the introduction of well-vascularized tissues into a hostile wound environment. The end result is a reduced infection rate, faster recovery time, and better functional outcome compared with when loco-regional soft tissue coverage or bone grafting is used. When soft tissue or bone loss is present, free tissue transfer should be the first management plan if time and personnel are available. The ultimate treatment of a complicated war wound remains prevention by accurate initial management.
Images, Imagination and Impact: War in Painting and Photography from Vietnam to Afghanistan
2013-06-01
society led artists to critique the foundations of the state and how the state conducts business. Institutions as well as methods like the use of...Budget, Paperwork Reduction Project (0704–0188) Washington, DC 20503. 1. AGENCY USE ONLY (Leave blank) 2. REPORT DATE June 2013 3. REPORT TYPE...in the last several decades, using the example of the Vietnam War and NATO’s mission in Afghanistan as case studies. The thesis concludes that the
Optimization model of conventional missile maneuvering route based on improved Floyd algorithm
NASA Astrophysics Data System (ADS)
Wu, Runping; Liu, Weidong
2018-04-01
Missile combat plays a crucial role in the victory of war under high-tech conditions. According to the characteristics of the maneuver tasks of conventional missile units in combat operations, the factors influencing road maneuvering are analyzed, including road distance, road conflicts, launching-device speed, position requirements, launching-device deployment, and concealment. A shortest-time optimization model was built to discuss the situation of road conflict and strategies for conflict resolution. The results suggest that in the process of resolving a road conflict, waiting at a node is more effective than detouring along another route. In this study, we analyzed the deficiencies of the traditional Floyd algorithm that may limit the optimal resolution of road conflicts, put forward an improved Floyd algorithm, and designed an algorithm flow that performs better than the traditional Floyd algorithm. Finally, through a numerical example, the model and the algorithm were shown to be reliable and effective.
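For reference, the classical Floyd (Floyd-Warshall) recurrence on which the improved algorithm is built relaxes every pair (i, j) through every intermediate node k. The paper's conflict-resolution extensions are not reproduced here:

```python
# Classical Floyd-Warshall all-pairs shortest paths. w[i][j] is the
# direct edge weight from i to j (INF if absent, 0 on the diagonal);
# the triple loop relaxes every pair (i, j) through intermediate node k.

INF = float('inf')

def floyd_warshall(w):
    n = len(w)
    dist = [row[:] for row in w]          # do not mutate the input
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```

On a small directed graph with edges 0→1 (weight 1), 1→2 (weight 2), and 0→2 (weight 10), the relaxation through node 1 shortens the 0→2 route to 3.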
Using GO-WAR for mining cross-ontology weighted association rules.
Agapito, Giuseppe; Cannataro, Mario; Guzzi, Pietro Hiram; Milano, Marianna
2015-07-01
The Gene Ontology (GO) is a structured repository of concepts (GO terms) that are associated to one or more gene products. The process of association is referred to as annotation. The relevance and the specificity of both GO terms and annotations are evaluated by a measure defined as information content (IC). The analysis of annotated data is thus an important challenge for bioinformatics, and different approaches of analysis exist. Among those, the use of association rules (AR) may provide useful knowledge, and it has been used in some applications, e.g. improving the quality of annotations. Nevertheless, classical association rule algorithms take into account neither the source nor the importance of annotations, yielding candidate rules with low IC. This paper presents GO-WAR (Gene Ontology-based Weighted Association Rules), a methodology for extracting weighted association rules. GO-WAR can extract association rules with a high level of IC without loss of support and confidence from a dataset of annotated data. A case study applying GO-WAR to publicly available GO annotation datasets demonstrates that our method outperforms current state-of-the-art approaches. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
A greedy algorithm for species selection in dimension reduction of combustion chemistry
NASA Astrophysics Data System (ADS)
Hiremath, Varun; Ren, Zhuyin; Pope, Stephen B.
2010-09-01
Computational calculations of combustion problems involving large numbers of species and reactions with a detailed description of the chemistry can be very expensive. Numerous dimension reduction techniques have been developed in the past to reduce the computational cost. In this paper, we consider the rate controlled constrained-equilibrium (RCCE) dimension reduction method, in which a set of constrained species is specified. For a given number of constrained species, the 'optimal' set of constrained species is that which minimizes the dimension reduction error. The direct determination of the optimal set is computationally infeasible, and instead we present a greedy algorithm which aims at determining a 'good' set of constrained species; that is, one leading to near-minimal dimension reduction error. The partially-stirred reactor (PaSR) involving methane premixed combustion with chemistry described by the GRI-Mech 1.2 mechanism containing 31 species is used to test the algorithm. Results on dimension reduction errors for different sets of constrained species are presented to assess the effectiveness of the greedy algorithm. It is shown that the first four constrained species selected using the proposed greedy algorithm produce lower dimension reduction error than constraints on the major species: CH4, O2, CO2 and H2O. It is also shown that the first ten constrained species selected using the proposed greedy algorithm produce a non-increasing dimension reduction error with every additional constrained species; and produce the lowest dimension reduction error in many cases tested over a wide range of equivalence ratios, pressures and initial temperatures.
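The species-selection step can be abstracted as greedy forward selection: at every iteration, add the candidate whose inclusion yields the lowest error for the enlarged set. In this sketch, `error_of` is a stand-in for the (expensive) dimension-reduction error evaluation performed in the paper; the function names are assumptions:

```python
# Generic greedy forward selection. error_of(selected_set) returns the
# error of constraining exactly that set; each step adds the candidate
# that minimizes the error of the enlarged set, avoiding the infeasible
# exhaustive search over all subsets.

def greedy_select(candidates, n_select, error_of):
    selected = []
    remaining = list(candidates)
    for _ in range(n_select):
        best = min(remaining,
                   key=lambda c: error_of(frozenset(selected + [c])))
        selected.append(best)
        remaining.remove(best)
    return selected
```

For n candidates and k selections this costs O(n·k) error evaluations instead of the combinatorial number required to find the truly optimal set, which is exactly the trade-off motivating the paper's "good, not provably optimal" framing.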
Stidd, D A; Theessen, H; Deng, Y; Li, Y; Scholz, B; Rohkohl, C; Jhaveri, M D; Moftakhar, R; Chen, M; Lopes, D K
2014-01-01
Flat panel detector CT images are degraded by streak artifacts caused by radiodense implanted materials such as coils or clips. A new metal artifact reduction prototype algorithm has been used to minimize these artifacts. The application of this new metal artifact reduction algorithm was evaluated for flat panel detector CT imaging performed in a routine clinical setting. Flat panel detector CT images were obtained from 59 patients immediately following cerebral endovascular procedures or as surveillance imaging for cerebral endovascular or surgical procedures previously performed. The images were independently evaluated by 7 physicians for metal artifact reduction on a 3-point scale at 2 locations: immediately adjacent to the metallic implant and 3 cm away from it. The number of visible vessels before and after metal artifact reduction correction was also evaluated within a 3-cm radius around the metallic implant. The metal artifact reduction algorithm was applied to the 59 flat panel detector CT datasets without complications. The metal artifacts in the corrected flat panel detector CT images were significantly reduced in the area immediately adjacent to the implanted metal object (P = .05) and in the area 3 cm away from the metal object (P = .03). The average number of visible vessel segments increased from 4.07 to 5.29 (P = .1235) after application of the metal artifact reduction algorithm to the flat panel detector CT images. Metal artifact reduction is an effective method to improve flat panel detector CT images degraded by metal artifacts. Metal artifacts are significantly decreased by the metal artifact reduction algorithm, and there was a trend toward increased vessel-segment visualization. © 2014 by American Journal of Neuroradiology.
Filtered-x generalized mixed norm (FXGMN) algorithm for active noise control
NASA Astrophysics Data System (ADS)
Song, Pucha; Zhao, Haiquan
2018-07-01
The standard adaptive filtering algorithm with a single error norm exhibits a slow convergence rate and poor noise reduction performance under certain environments. To overcome this drawback, a filtered-x generalized mixed norm (FXGMN) algorithm for active noise control (ANC) systems is proposed. The FXGMN algorithm is developed by using a convex mixture of lp and lq norms as the cost function, so that it can be viewed as a generalized version of most existing adaptive filtering algorithms, reducing to a specific algorithm for particular parameter choices. In particular, it can be used to solve ANC under Gaussian and non-Gaussian noise environments (including impulsive noise with a symmetric α-stable (SαS) distribution). To further enhance the algorithm performance, namely convergence speed and noise reduction, a convex combination of the FXGMN algorithm (C-FXGMN) is presented. Moreover, the computational complexity of the proposed algorithms is analyzed, and a stability condition for the proposed algorithms is provided. Simulation results show that the proposed FXGMN and C-FXGMN algorithms achieve faster convergence and higher noise reduction than other existing algorithms under various noise input conditions, and the C-FXGMN algorithm outperforms the FXGMN.
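The mixed-norm idea can be sketched without the ANC plumbing: the cost lam*|e|^p + (1 - lam)*|e|^q yields a gradient that blends the two norms, and an LMS-style filter can follow it. The secondary path of a real filtered-x system is taken as identity here, so this is an illustrative reduction, not the full FXGMN algorithm; all parameter values are assumptions:

```python
# Generalized mixed-norm (GMN) cost: J(e) = lam*|e|^p + (1-lam)*|e|^q.
# The weight update follows the instantaneous, sign-aware gradient of J,
# so p = q = 2 recovers plain LMS while other choices blend norms.

def gmn_gradient(e, p=2, q=4, lam=0.5):
    """d/de of lam*|e|^p + (1-lam)*|e|^q (sign-aware)."""
    s = 1.0 if e >= 0 else -1.0
    return s * (lam * p * abs(e) ** (p - 1) + (1 - lam) * q * abs(e) ** (q - 1))

def gmn_filter(x, d, n_taps, mu=0.01, p=2, q=4, lam=0.5):
    w = [0.0] * n_taps
    buf = [0.0] * n_taps
    for xi, di in zip(x, d):
        buf = [xi] + buf[:-1]
        e = di - sum(wi * bi for wi, bi in zip(w, buf))
        g = gmn_gradient(e, p, q, lam)
        w = [wi + mu * g * bi for wi, bi in zip(w, buf)]
    return w
```

The q-norm term amplifies large errors early in adaptation while the p-norm term dominates near convergence, which is the convergence/robustness blend the abstract describes.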
Two Improved Algorithms for Envelope and Wavefront Reduction
NASA Technical Reports Server (NTRS)
Kumfert, Gary; Pothen, Alex
1997-01-01
Two algorithms for reordering sparse, symmetric matrices or undirected graphs to reduce envelope and wavefront are considered. The first is a combinatorial algorithm introduced by Sloan and further developed by Duff, Reid, and Scott; we describe enhancements to the Sloan algorithm that improve its quality and reduce its run time. Our test problems fall into two classes with differing asymptotic behavior of their envelope parameters as a function of the weights in the Sloan algorithm. We describe an efficient O(n log n + m) time implementation of the Sloan algorithm, where n is the number of rows (vertices) and m is the number of nonzeros (edges). On a collection of test problems, the improved Sloan algorithm required, on average, only twice the time required by the simpler Reverse Cuthill-McKee algorithm while improving the mean square wavefront by a factor of three. The second algorithm is a hybrid that combines a spectral algorithm for envelope and wavefront reduction with a refinement step that uses a modified Sloan algorithm. The hybrid algorithm reduces the envelope size and mean square wavefront obtained from the Sloan algorithm at the cost of greater running times. We illustrate how these reductions translate into tangible benefits for frontal Cholesky factorization and incomplete factorization preconditioning.
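The Reverse Cuthill-McKee baseline against which the improved Sloan algorithm is compared can be sketched in a few lines. This minimal version starts each component from a minimum-degree vertex rather than a true pseudo-peripheral one, so it is an illustrative simplification, not production quality:

```python
from collections import deque

def reverse_cuthill_mckee(adj):
    """Minimal Reverse Cuthill-McKee ordering of an undirected graph.

    adj: dict mapping vertex -> list of neighbours.
    Breadth-first search visiting neighbours in increasing degree order
    gives the Cuthill-McKee order; reversing it tends to reduce the
    envelope and wavefront of the reordered matrix.
    """
    degree = {v: len(ns) for v, ns in adj.items()}
    visited, order = set(), []
    # handle each connected component, seeded at a low-degree vertex
    for start in sorted(adj, key=degree.get):
        if start in visited:
            continue
        visited.add(start)
        queue = deque([start])
        while queue:
            v = queue.popleft()
            order.append(v)
            for n in sorted(adj[v], key=degree.get):
                if n not in visited:
                    visited.add(n)
                    queue.append(n)
    return order[::-1]  # reversal of the CM order gives RCM
```

For a path graph the ordering simply reverses the breadth-first sweep, which already has minimal bandwidth.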
Fang, Jieming; Zhang, Da; Wilcox, Carol; Heidinger, Benedikt; Raptopoulos, Vassilios; Brook, Alexander; Brook, Olga R
2017-03-01
To assess the single energy metal artifact reduction (SEMAR) and spectral energy metal artifact reduction (MARS) algorithms in reducing artifacts generated by different metal implants. A phantom was scanned with and without SEMAR (Aquilion One, Toshiba) and MARS (Discovery CT750 HD, GE), with various metal implants. Images were evaluated objectively by measuring the standard deviation in regions of interest and subjectively by two independent reviewers grading on a scale of 0 (no artifact) to 4 (severe artifact). Reviewers also graded new artifacts introduced by the metal artifact reduction algorithms. SEMAR and MARS significantly decreased the variability of the density measurement adjacent to the metal implant, with a median SD (standard deviation of the density measurement) of 52.1 HU without SEMAR vs. 12.3 HU with SEMAR, p < 0.001. The median SD without MARS of 63.1 HU decreased to 25.9 HU with MARS, p < 0.001. The median SD with SEMAR is significantly lower than the median SD with MARS (p = 0.0011). SEMAR improved subjective image quality, with a reduction in overall artifact grading from 3.2 ± 0.7 to 1.4 ± 0.9, p < 0.001. The improvement of overall image quality by MARS did not reach statistical significance (3.2 ± 0.6 to 2.6 ± 0.8, p = 0.088). MARS introduced significant new artifacts (2.4 ± 1.0), whereas SEMAR introduced minimal ones (0.4 ± 0.7), p < 0.001. CT iterative reconstruction algorithms with single and spectral energy are both effective in reducing metal artifacts. The single energy-based algorithm provides better overall image quality than the spectral CT-based algorithm. The spectral metal artifact reduction algorithm introduces mild to moderate artifacts in the far field.
Symptoms and subjective quality of life in post-traumatic stress disorder: a longitudinal study.
Giacco, Domenico; Matanov, Aleksandra; Priebe, Stefan
2013-01-01
Evidence suggests that post-traumatic stress disorder (PTSD) is associated with substantially reduced subjective quality of life (SQOL). This study aimed to explore whether and how changes in the levels of PTSD symptom clusters of intrusion, avoidance and hyperarousal are associated with changes in SQOL. Two samples with PTSD following the war in former Yugoslavia were studied, i.e. a representative sample of 530 people in five Balkan countries and a non-representative sample of 215 refugees in three Western European countries. They were assessed on average eight years after the war and re-interviewed one year later. PTSD symptoms were assessed on the Impact of Event Scale - Revised and SQOL on the Manchester Short Assessment of Quality of Life. Linear regression and a two-wave cross lagged panel analysis were used to explore the association between PTSD symptom clusters and SQOL. The findings in the two samples were consistent. Symptom reduction over time was associated with improved SQOL. In multivariable analyses adjusted for the influence of all three clusters, gender and time since war exposure, only changes in hyperarousal symptoms were significantly associated with changes in SQOL. The two-wave cross-lagged panel analysis suggested that the link between hyperarousal symptoms and SQOL is bidirectional. Low SQOL of patients with war-related PTSD is particularly associated with hyperarousal symptoms. The findings suggest a bidirectional influence: a reduction in hyperarousal symptoms may result in improved SQOL, and improvements in SQOL may lead to reduced hyperarousal symptoms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graben, E.K.
1992-01-01
During the Cold War, the United States and the Soviet Union competed in building weapons -- now it seems like America and Russia are competing to get rid of them the fastest. The lengthy process of formal arms control has been replaced by exchanges of unilateral force reductions and proposals for reciprocal reductions not necessarily codified by treaty. Should superpower nuclear strategies change along with force postures? President Bush has yet to make a formal pronouncement on post-Cold War American nuclear strategy, and it is uncertain if the Soviet/Russian doctrine of reasonable sufficiency formulated in the Gorbachev era actually heralds a change in strategy. Some of the provisions in the most recent round of unilateral proposals put forth by Presidents Bush and Yeltsin in January 1992 are compatible with a change in strategy. Whether such a change has actually occurred remains to be seen. With the end of the Cold War and the breakup of the Soviet Union, the strategic environment has fundamentally changed, so it would seem logical to reexamine strategy as well. There are two main schools of nuclear strategic thought: a maximalist school, which emphasizes counterforce superiority and nuclear war-fighting capability, and a mutual-assured-destruction-plus (MAD-plus) school, which emphasizes survivability of an assured destruction capability along with the ability to deliver small, limited nuclear attacks in the event that conflict occurs. The MAD-plus strategy is based on an attempt to conventionalize nuclear weapons, which is unrealistic.
Identifying new diseases and their causes: the dilemma of illnesses in Gulf War veterans.
Gardner, John W; Gibbons, Robert V; Hooper, Tomoko I; Cunnion, Stephen O; Kroenke, Kurt; Gackstetter, Gary D
2003-03-01
Since the Gulf War, investigation continues of symptoms and illnesses among its veterans. Yet, identifying a specific "Gulf War Syndrome" remains elusive. With new disease entities, causal associations are relatively easily established when the condition is serious, verifiable, and has excess disease rates in specific groups. In common conditions, many excess cases are required to establish association with a specific exposure. Establishing causality in syndromes with variable symptoms is difficult because specific diagnostic algorithms must be established before causal factors can be properly investigated. Searching for an environmental cause is futile in the absence of an operational disease case definition. Common subjective symptoms (without objective physical or laboratory findings) account for over one-half of all medical outpatient visits, yet these symptoms lack an identified physical cause at least one-third of the time. Our medical care system has difficulty dealing with disorders where there is no identified anatomic abnormality or documented metabolic/physiological dysfunction.
26 CFR 1.621-1 - Payments to encourage exploration, development, and mining for defense purposes.
Code of Federal Regulations, 2011 CFR
2011-04-01
... allowed to the taxpayer and which deduction resulted in a reduction for any taxable year of the taxpayer's... prior income, war-profits, or excess-profits tax laws. (2) Where amounts described in section 621 and... allowed to the taxpayer and which deduction resulted in a reduction for any taxable year of the taxpayer's...
The rise of active-element phased-array radar
NASA Astrophysics Data System (ADS)
Chang, Ike
The War in the Persian Gulf has recently underscored the vast leverage of advanced electronics to U.S. military power. Advanced electronics will likely play an even greater role in the U.S. military in the future. Under declining budgets, the U.S. forces are experiencing drastic reductions in manpower and resources. To offset these reductions, the military has turned to high technology in general as a force multiplier. In terms of projecting air power, a key force multiplier involves the use of electronic sensors for reconnaissance, surveillance, and tracking. One type of sensor for tactical aircraft, fire control radar, has proven to be a crucial element in establishing air superiority over potential adversaries in war. The advantages, history of development, and enabling technologies of a superior and emerging technology for fire control radars are discussed.
SU-C-207B-02: Maximal Noise Reduction Filter with Anatomical Structures Preservation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maitree, R; Guzman, G; Chundury, A
Purpose: All medical images contain noise, which can result in an undesirable appearance and can reduce the visibility of anatomical details. There are a variety of techniques used to reduce noise, such as increasing the image acquisition time and using post-processing noise reduction algorithms. However, these techniques either increase the imaging time and cost or reduce tissue contrast and effective spatial resolution, which carry useful diagnostic information. The three main focuses of this study are: 1) to develop a novel approach that can adaptively and maximally reduce noise while preserving valuable details of anatomical structures, 2) to evaluate the effectiveness of available noise reduction algorithms in comparison to the proposed algorithm, and 3) to demonstrate that the proposed noise reduction approach can be used clinically. Methods: To achieve maximal noise reduction without destroying the anatomical details, the proposed approach automatically estimated the local image noise strength levels and detected the anatomical structures, i.e. tissue boundaries. Such information was used to adaptively adjust the strength of the noise reduction filter. The proposed algorithm was tested on 34 repeated swine head datasets and 54 patients' MRI and CT images. The performance was quantitatively evaluated by image quality metrics and manually validated for clinical usage by two radiation oncologists and one radiologist. Results: Qualitative measurements on the repeated swine head images demonstrated that the proposed algorithm efficiently removed noise while preserving structures and tissue boundaries. In comparisons, the proposed algorithm obtained competitive noise reduction performance and outperformed other filters in preserving anatomical structures. Assessments from the manual validation indicate that the proposed noise reduction algorithm is adequate for some clinical usages.
Conclusion: According to both clinical evaluation (human expert ranking) and qualitative assessment, the proposed approach has superior noise reduction and anatomical structure preservation capabilities over existing noise removal methods. Senior author Dr. Deshan Yang received research funding from ViewRay and Varian.
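The core idea of this record, estimating local noise strength and easing the filter off near tissue boundaries, can be illustrated with a toy edge-aware smoother. This is not the authors' algorithm; the `edge_thresh` value and the noise-scaling constant are invented tuning parameters.

```python
import numpy as np

def adaptive_smooth(img, noise_sigma, edge_thresh=30.0):
    """Toy edge-aware smoother: blend each pixel toward its 3x3
    neighbourhood mean, with the blend weight reduced near strong
    gradients (likely boundaries) and scaled up with noise strength."""
    img = img.astype(float)
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")
    # 3x3 box mean via shifted sums (avoids a SciPy dependency)
    mean = sum(pad[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0
    gy, gx = np.gradient(img)
    grad = np.hypot(gx, gy)
    # blend weight: ~1 in flat regions, falls toward 0 at edges,
    # scaled by estimated noise level (25.0 is an arbitrary reference)
    blend = np.exp(-(grad / edge_thresh) ** 2) * min(1.0, noise_sigma / 25.0)
    return blend * mean + (1.0 - blend) * img
```

In flat regions the filter smooths at full strength; where the gradient magnitude approaches `edge_thresh`, the output reverts to the original pixel value, sparing boundaries.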
Rancic, Nemanja; Erceg, Milena; Radojevic, Nemanja; Savic, Slobodan
2013-11-01
A comparative analysis of firearm homicides committed in Belgrade was performed including four representative years: 1987 (before the civil war in the Former Yugoslavia), 1991 (beginning of the war), 1997 (end of the war), and 2007 (period of social stabilization). The increase in the number of homicides was established in 1991 and 1997 compared with 1987, with the decrease in 2007, but with the continuous increase in the percentage of firearm homicides in the total number of homicides, from 12% in 1987 up to 56% in 2007. The significant increase in firearm homicides during the last decade of the 20th century can be explained by the social disturbances and the high availability of firearms, while their reduction in 2007 could be linked to the gradual stabilization of social circumstances. The results showed that the actual social, political, and economical changes strongly influenced medicolegal characteristics of homicides and particularly firearm homicides. © 2013 American Academy of Forensic Sciences.
Impacts on Chinese Agriculture of Geoengineering and Smoke from Fires Ignited by Nuclear War
NASA Astrophysics Data System (ADS)
Xia, L.; Robock, A.
2013-12-01
Climate is one of the most important factors determining crop yields and world food supplies. To be well prepared for possible futures, it is necessary to study yield changes of major crops under different climate scenarios. Here we consider two situations: stratospheric sulfate geoengineering and nuclear war. Although we certainly do not advocate either scenario, we cannot exclude the possibilities: if global warming is getting worse, society might consider deliberately manipulating global temperature; if nuclear weapons still exist, we might face a nuclear war catastrophe. Since in both scenarios there would be reductions of temperature, precipitation, and insolation, which are three controlling factors on crop growth, it is important to study food supply changes under the two cases. We conducted our simulations for China, because it has the highest population and crop production in the world and it is under the strong influence of the summer monsoon, which would be altered in geoengineering and nuclear war scenarios. To examine the effects of climate changes induced by geoengineering and nuclear war on Chinese agriculture, we use the Decision Support System for Agrotechnology Transfer (DSSAT) crop model. We first evaluated the model by forcing it with daily weather data and management practices for the period 1978-2008 for 24 provinces in China, and compared the results to observations of the yields of major crops in China (middle season rice, winter wheat, and maize). Then we perturbed observed weather data using climate anomalies for geoengineering and nuclear war simulations. For geoengineering, we consider the G2 scenario of the Geoengineering Model Intercomparison Project (GeoMIP), which prescribes an insolation reduction to balance a 1% per year increase in CO2 concentration (1pctCO2). We used results from ten climate models participating in G2. 
For the nuclear war scenario, we consider the effects of 5 Tg of soot that could be injected into the upper troposphere by a war between India and Pakistan using only 100 Hiroshima-size atomic bombs dropped on cities. We used results from three climate models that did the same simulation. For the geoengineering scenario, without changes in agricultural technology, the combined effect of climate changes due to geoengineering and CO2 fertilization would change rice production in China by -4.6±6.0 Mt (4.5±5.9%) as compared with 1pctCO2 and would increase Chinese maize production by 20.9±6.9 Mt (14.8±4.9%) in the period 46-50 years after the CO2 increase and compensating insolation reduction began. The CO2 fertilization effect compensates for the deleterious impacts of climate changes due to geoengineering on rice production, increasing rice production by 8.2 Mt, and the elevated CO2 concentration enhances maize production in G2, contributing 35.5% of the total increase. While agricultural impacts may not be a serious problem for geoengineering, there are many other potential risks that need to be evaluated before geoengineering should be considered. Climate changes due to nuclear war would decrease Chinese rice production by 20±4.7%, maize production by 15±6.2%, and winter wheat production by 35±19.3% for a five-year period after the soot injection, producing a major world food security crisis.
Robsahm, T E; Tretli, S
2002-01-01
It has been suggested that World War II influenced breast cancer risk among Norwegian women by affecting adolescent growth. Diet changed substantially during the war, and the reduction in energy intake was assumed to be larger in non-food-producing than in food-producing municipalities. In the present study, we have looked at the influence of residential history in areas with and without food production on the incidence of breast cancer in a population-based cohort study consisting of 597 906 women aged between 30 and 64 years. The study included 7311 cases of breast cancer, diagnosed between 1964 and 1992. The risk estimates were calculated using a Poisson regression model. The results suggest that residential history may influence the risk of breast cancer, with the suggested advantageous effect of World War II appearing larger in non-food-producing than in food-producing areas. Breast cancer incidence was observed to decline for the post-war cohorts, which is discussed in relation to diet. British Journal of Cancer (2002) 86, 362–366. DOI: 10.1038/sj/bjc/6600084 www.bjcancer.com © 2002 The Cancer Research Campaign PMID:11875700
Dynamics of Conflicts in Wikipedia
Yasseri, Taha; Sumi, Robert; Rung, András; Kornai, András; Kertész, János
2012-01-01
In this work we study the dynamical features of editorial wars in Wikipedia (WP). Based on our previously established algorithm, we build up samples of controversial and peaceful articles and analyze the temporal characteristics of the activity in these samples. On short time scales, we show that there is a clear correspondence between conflict and burstiness of activity patterns, and that memory effects play an important role in controversies. On long time scales, we identify three distinct developmental patterns for the overall behavior of the articles. We are able to distinguish cases eventually leading to consensus from those cases where a compromise is far from achievable. Finally, we analyze discussion networks and conclude that edit wars are mainly fought by few editors only. PMID:22745683
Chung, King
2004-01-01
This review discusses the challenges in hearing aid design and fitting and the recent developments in advanced signal processing technologies to meet these challenges. The first part of the review discusses the basic concepts and the building blocks of digital signal processing algorithms, namely, the signal detection and analysis unit, the decision rules, and the time constants involved in the execution of the decision. In addition, mechanisms and the differences in the implementation of various strategies used to reduce the negative effects of noise are discussed. These technologies include the microphone technologies that take advantage of the spatial differences between speech and noise and the noise reduction algorithms that take advantage of the spectral difference and temporal separation between speech and noise. The specific technologies discussed in this paper include first-order directional microphones, adaptive directional microphones, second-order directional microphones, microphone matching algorithms, array microphones, multichannel adaptive noise reduction algorithms, and synchrony detection noise reduction algorithms. Verification data for these technologies, if available, are also summarized. PMID:15678225
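As an illustration of the first-order directional microphones discussed in this review, the delay-and-subtract sketch below forms a rear-facing null from two closely spaced omnidirectional microphones. The 12 mm spacing and the linear-interpolation fractional delay are illustrative choices, not any particular product's design.

```python
import numpy as np

def first_order_directional(front, back, fs, mic_dist=0.012, c=343.0):
    """Delay-and-subtract first-order directional microphone sketch.

    Subtracting a delayed rear-mic signal from the front mic forms a
    cardioid-like pattern whose null points backward: sound arriving
    from the rear reaches the back mic first, so delaying the back
    signal by the inter-mic travel time aligns it with the front
    signal and cancels it.
    """
    delay = mic_dist / c                 # travel time between mics (s)
    n = np.arange(len(back))
    t = n - delay * fs                   # fractional sample positions
    i0 = np.clip(np.floor(t).astype(int), 0, len(back) - 1)
    i1 = np.clip(i0 + 1, 0, len(back) - 1)
    frac = t - np.floor(t)
    # linear-interpolation fractional delay of the rear signal
    back_delayed = (1 - frac) * back[i0] + frac * back[i1]
    return front - back_delayed
```

For a rear-arriving tone the output is near zero (the null), while a front-arriving tone passes through attenuated but clearly non-zero, which is the directional behaviour the review describes.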
Comparing Binaural Pre-processing Strategies I: Instrumental Evaluation.
Baumgärtel, Regina M; Krawczyk-Becker, Martin; Marquardt, Daniel; Völker, Christoph; Hu, Hongmei; Herzke, Tobias; Coleman, Graham; Adiloğlu, Kamil; Ernst, Stephan M A; Gerkmann, Timo; Doclo, Simon; Kollmeier, Birger; Hohmann, Volker; Dietz, Mathias
2015-12-30
In a collaborative research project, several monaural and binaural noise reduction algorithms have been comprehensively evaluated. In this article, eight selected noise reduction algorithms were assessed using instrumental measures, with a focus on the instrumental evaluation of speech intelligibility. Four distinct, reverberant scenarios were created to reflect everyday listening situations: a stationary speech-shaped noise, a multitalker babble noise, a single interfering talker, and a realistic cafeteria noise. Three instrumental measures were employed to assess predicted speech intelligibility and predicted sound quality: the intelligibility-weighted signal-to-noise ratio, the short-time objective intelligibility measure, and the perceptual evaluation of speech quality. The results show substantial improvements in predicted speech intelligibility as well as sound quality for the proposed algorithms. The evaluated coherence-based noise reduction algorithm was able to provide improvements in predicted audio signal quality. For the tested single-channel noise reduction algorithm, improvements in intelligibility-weighted signal-to-noise ratio were observed in all but the nonstationary cafeteria ambient noise scenario. Binaural minimum variance distortionless response beamforming algorithms performed particularly well in all noise scenarios. © The Author(s) 2015.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukherjee, S; Farr, J; Merchant, T
Purpose: To study the effect of total-variation based noise reduction algorithms on the image registration of low-dose CBCT for patient positioning in radiation therapy. Methods: In low-dose CBCT, the reconstructed image is degraded by excessive quantum noise. In this study, we developed a total-variation based noise reduction algorithm and studied its effect on noise reduction and image registration accuracy. To study the effect of noise reduction, we calculated the peak signal-to-noise ratio (PSNR). To study the improvement of image registration, we performed image registration between volumetric CT and MV-CBCT images of different head-and-neck patients and calculated the mutual information (MI) and Pearson correlation coefficient (PCC) as similarity metrics. The PSNR, MI, and PCC were calculated for both the noisy and noise-reduced CBCT images. Results: The algorithm was shown to be effective in reducing the noise level and improving the MI and PCC for the low-dose CBCT images tested. Across the head-and-neck patients, a maximum improvement in PSNR of 10 dB with respect to the noisy image was calculated. The improvements in MI and PCC were 9% and 2%, respectively. Conclusion: A total-variation based noise reduction algorithm was studied to improve the image registration between CT and low-dose CBCT. The algorithm showed promising results in reducing noise in low-dose CBCT images and improving the similarity metrics in terms of MI and PCC.
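A textbook-style sketch of total-variation denoising and the PSNR metric referenced in this record; the explicit gradient-descent scheme, the smoothing constant `eps`, and the default parameters are generic choices, not the authors' exact implementation.

```python
import numpy as np

def psnr(ref, img, data_range=None):
    """Peak signal-to-noise ratio in dB."""
    ref, img = ref.astype(float), img.astype(float)
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def tv_denoise(img, weight=0.1, n_iter=50, step=0.2):
    """Gradient descent on ||u - img||^2/2 + weight * TV(u), using a
    smoothed TV term so the gradient is defined in flat regions."""
    u = img.astype(float).copy()
    eps = 1e-6
    for _ in range(n_iter):
        gy, gx = np.gradient(u)
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        # divergence of the normalised gradient field (curvature term)
        div = np.gradient(gy / mag, axis=0) + np.gradient(gx / mag, axis=1)
        u += step * (img - u + weight * div)
    return u
```

The fidelity term keeps the solution near the input while the curvature term shrinks isolated noise spikes, which is why PSNR improves after denoising.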
Aono, Masashi; Kim, Song-Ju; Hara, Masahiko; Munakata, Toshinori
2014-03-01
The true slime mold Physarum polycephalum, a single-celled amoeboid organism, is capable of efficiently allocating a constant amount of intracellular resource to its pseudopod-like branches that best fit the environment where dynamic light stimuli are applied. Inspired by the resource allocation process, the authors formulated a concurrent search algorithm, called the Tug-of-War (TOW) model, for maximizing the profit in the multi-armed Bandit Problem (BP). A player (gambler) of the BP should decide as quickly and accurately as possible which slot machine to invest in out of the N machines and faces an "exploration-exploitation dilemma." The dilemma is a trade-off between the speed and accuracy of the decision making that are conflicted objectives. The TOW model maintains a constant intracellular resource volume while collecting environmental information by concurrently expanding and shrinking its branches. The conservation law entails a nonlocal correlation among the branches, i.e., volume increment in one branch is immediately compensated by volume decrement(s) in the other branch(es). Owing to this nonlocal correlation, the TOW model can efficiently manage the dilemma. In this study, we extend the TOW model to apply it to a stretched variant of BP, the Extended Bandit Problem (EBP), which is a problem of selecting the best M-tuple of the N machines. We demonstrate that the extended TOW model exhibits better performances for 2-tuple-3-machine and 2-tuple-4-machine instances of EBP compared with the extended versions of well-known algorithms for BP, the ϵ-Greedy and SoftMax algorithms, particularly in terms of its short-term decision-making capability that is essential for the survival of the amoeba in a hostile environment. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
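A loose sketch of the Tug-of-War principle for a two-armed Bandit Problem: branch "displacements" share a conserved total, so reinforcing one arm withdraws resource from the others. The oscillation term and all parameter values are illustrative assumptions, not the authors' exact formulation.

```python
import random

def tow_bandit(probs, n_steps=10000, delta=1.0, seed=0):
    """Tug-of-War-style concurrent bandit player (illustrative sketch).

    x[k] is the displacement of branch k; the sum of all x[k] is held
    constant (the conservation law), so an increment in the chosen
    branch is immediately compensated by decrements in the others.
    Returns the fraction of pulls that were rewarded.
    """
    rng = random.Random(seed)
    n = len(probs)
    x = [0.0] * n
    wins = 0
    for _ in range(n_steps):
        # small oscillation supplies the exploration
        k = max(range(n), key=lambda i: x[i] + rng.gauss(0, 0.5))
        reward = rng.random() < probs[k]
        wins += reward
        inc = delta if reward else -delta
        x[k] += inc
        # conservation: compensate in the other branches
        for j in range(n):
            if j != k:
                x[j] -= inc / (n - 1)
    return wins / n_steps
```

Because a losing pull lowers the chosen branch and raises the others, the dynamics self-correct away from the poor arm, which is the nonlocal-correlation benefit the abstract describes.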
Code of Federal Regulations, 2011 CFR
2011-04-01
... 6038(c)(1)(B) reduction 34,000 (e) Dividend paid by N to M 45,000 (f) Accumulated profits of N as... profits described in section 902(a) (determined without regard to the reduction provided under section... foreign income, war profits, and excess profits taxes are determined on the basis of an accounting period...
Dean, J; Forsberg, R C; Mendlovitz, S
2000-01-01
At the end of history's bloodiest century and the outset of a new millennium, we have an opportunity to fulfil one of humanity's oldest dreams: making the world largely free of war. Global changes make this goal achievable. Nuclear weapons have shown the folly of war. For the first time, there is no war and no immediate prospect of war among the main military powers. For the first time, many proven measures to prevent armed conflict, distilled in the crucible of this century's wars, are available. If systematically applied, these measures can sharply decrease the frequency and violence of war, genocide, and other forms of deadly conflict. To seize the opportunity, nations should adopt a comprehensive programme to reduce conventional armaments and armed conflict. This programme will complement and strengthen efforts to eliminate nuclear arms. To assure its ongoing worldwide implementation, the conventional reduction programme should be placed in a treaty framework. We propose a four-phased process, with three treaties, each lasting five to ten years, to lay the groundwork for the fourth treaty, which will establish a permanent international security system. The main objectives of the treaties are to achieve: 1. A verified commitment to provide full transparency on conventional armed forces and military spending, not to increase forces during negotiations on arms reductions, and to increase the resources allocated to multilateral conflict prevention and peacekeeping. 2. Substantial worldwide cuts in national armed forces and military spending and further strengthening of United Nations and regional peacekeeping and peace-enforcement capabilities. 3. A trial of a watershed commitment by participating nations, including the major powers, not to deploy their armed forces beyond national borders except in a multilateral action under UN or regional auspices. 4. 
A permanent transfer to the UN and regional security organizations of the authority and capability for armed intervention to prevent or end war, accompanied by further substantial cuts in national armed forces and increases in UN and regional forces. This programme offers many valuable features: a global framework for conventional forces that parallels the nuclear Non-Proliferation Treaty; a verified no-increase commitment for national armed forces based on full data exchange; a commitment to undertake prescribed confidence-building measures, including limits on force activities and deployments; a commitment to a specified plan for increased funding of UN and regional peacekeeping capabilities; a commitment to strengthen international legal institutions; and after a trial period, a lasting commitment by each participant not to unilaterally deploy its armed forces beyond its borders, but instead to give the responsibility for peacekeeping and peace enforcement to international institutions. This programme of phased steps to reduce armed forces and strengthen peacekeeping institutions will make war rare. It will foster the spread of zones of peace like those in North America and Western Europe where, after centuries of violence, international and civil war have given way to the peaceful settlement of disputes.
Impaired central respiratory chemoreflex in an experimental genetic model of epilepsy
Totola, Leonardo T.; Takakura, Ana C.; Oliveira, José Antonio C.
2016-01-01
Key points: It is recognized that seizures commonly cause apnoea and oxygen desaturation, but there is still a lack in the literature about the respiratory impairments observed ictally and in the post-ictal period. Respiratory disorders may involve changes in serotonergic transmission at the level of the retrotrapezoid nucleus (RTN). In this study, we evaluated breathing activity and the role of serotonergic transmission in the RTN with a rat model of tonic–clonic seizures, the Wistar audiogenic rat (WAR). We conclude that the respiratory impairment in the WAR could be correlated to an overall decrease in the number of neurons located in the respiratory column. Abstract: Respiratory disorders may involve changes in serotonergic neurotransmission at the level of the chemosensitive neurons located in the retrotrapezoid nucleus (RTN). Here, we investigated the central respiratory chemoreflex and the role of serotonergic neurotransmission in the RTN with a rat model of tonic–clonic seizures, the Wistar audiogenic rat (WAR). We found that naive or kindled WARs have reduced resting ventilation and ventilatory response to hypercapnia (7% CO2). The number of chemically coded (Phox2b+/TH−) RTN neurons, as well as the serotonergic innervation to the RTN, was reduced in WARs. We detected that the ventilatory response to serotonin (1 mm, 50 nl) within the RTN region was significantly reduced in WARs. Our results uniquely demonstrated a respiratory impairment in a genetic model of tonic–clonic seizures, the WAR strain. More importantly, we demonstrated an overall decrease in the number of neurons located in the ventral respiratory column (VRC), as well as a reduction in serotonergic neurons in the midline medulla. This is an important step forward to demonstrate marked changes in neuronal activity and breathing impairment in the WAR strain, a genetic model of epilepsy. PMID:27633663
NASA Technical Reports Server (NTRS)
Samba, A. S.
1985-01-01
The problem of solving banded linear systems by direct (non-iterative) techniques on the Vector Processor System (VPS) 32 supercomputer is considered. Two efficient direct methods for solving banded linear systems on the VPS 32 are described. The vector cyclic reduction (VCR) algorithm is discussed in detail. The performance of the VCR on a three-parameter model problem is also illustrated. The VCR is an adaptation of the conventional point cyclic reduction algorithm. The second direct method is the Customized Reduction of Augmented Triangles (CRAT) algorithm. CRAT has the dominant characteristics of an efficient VPS 32 algorithm. CRAT is tailored to the pipeline architecture of the VPS 32 and, as a consequence, the algorithm is implicitly vectorizable.
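The point cyclic reduction scheme that VCR adapts can be sketched for a single tridiagonal system (a minimal serial sketch in Python; the vectorized VPS 32 implementation is not reproduced here, and the system size must be 2^m - 1):

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system by recursive cyclic reduction.

    a: sub-diagonal (a[0] = 0), b: diagonal, c: super-diagonal
    (c[-1] = 0), d: right-hand side.  n must equal 2**m - 1.
    """
    n = len(b)
    if n == 1:
        return d / b
    i = np.arange(1, n, 2)                 # odd-indexed equations survive
    alpha = -a[i] / b[i - 1]               # eliminate x[i-1] via equation i-1
    gamma = -c[i] / b[i + 1]               # eliminate x[i+1] via equation i+1
    ra = alpha * a[i - 1]
    rb = b[i] + alpha * c[i - 1] + gamma * a[i + 1]
    rc = gamma * c[i + 1]
    rd = d[i] + alpha * d[i - 1] + gamma * d[i + 1]
    x = np.zeros(n)
    x[1::2] = cyclic_reduction(ra, rb, rc, rd)   # solve the half-size system
    xp = np.concatenate(([0.0], x, [0.0]))       # zero padding for boundaries
    j = np.arange(0, n, 2)                       # back-substitute even unknowns
    x[j] = (d[j] - a[j] * xp[j] - c[j] * xp[j + 2]) / b[j]
    return x
```

Each reduction step halves the number of unknowns, which is what makes the method attractive on vector hardware: every level's eliminations are independent and can be processed as one vector operation.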
Peak reduction for commercial buildings using energy storage
NASA Astrophysics Data System (ADS)
Chua, K. H.; Lim, Y. S.; Morris, S.
2017-11-01
Battery-based energy storage has emerged as a cost-effective solution for peak reduction owing to falling battery prices. In this study, a battery-based energy storage system is developed and implemented to achieve optimal peak reduction for commercial customers within the limited energy capacity of the storage. The system comprises three bi-directional power converters, each rated at 5 kVA, and a battery bank with a capacity of 64 kWh. Three control algorithms, namely fixed-threshold, adaptive-threshold, and fuzzy-based control algorithms, have been developed and implemented in the energy storage system in a campus building. The control algorithms are evaluated and compared under different load conditions. The overall experimental results show that the fuzzy-based controller is the most effective of the three in peak reduction. The fuzzy-based control algorithm is capable of incorporating a priori qualitative knowledge and expertise about the load characteristics of the buildings as well as the usable energy without over-discharging the batteries.
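The simplest of the three controllers, the fixed-threshold algorithm, can be sketched as follows (an illustrative sketch; the function name, the half-hour interval, and the omission of converter ratings and recharging are assumptions):

```python
def peak_shave(load_kw, threshold_kw, capacity_kwh, dt_h=0.5):
    """Discharge the battery whenever load exceeds a fixed threshold,
    limited by the energy remaining in storage."""
    soc = capacity_kwh          # state of charge, starts full
    shaved = []
    for p in load_kw:
        excess = max(p - threshold_kw, 0.0)
        discharge = min(excess, soc / dt_h)   # power limited by stored energy
        soc -= discharge * dt_h
        shaved.append(p - discharge)
    return shaved
```

The fixed threshold fails exactly as the abstract implies: if the storage empties before the peak ends, the remaining peak passes through unshaved, which is what the adaptive-threshold and fuzzy controllers try to avoid.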
Optimization of Selected Remote Sensing Algorithms for Embedded NVIDIA Kepler GPU Architecture
NASA Technical Reports Server (NTRS)
Riha, Lubomir; Le Moigne, Jacqueline; El-Ghazawi, Tarek
2015-01-01
This paper evaluates the potential of the embedded graphics processing unit in Nvidia's Tegra K1 for onboard processing. The performance is compared to a general-purpose multi-core CPU and a full-fledged GPU accelerator. This study uses two algorithms: Wavelet Spectral Dimension Reduction of Hyperspectral Imagery and the Automated Cloud-Cover Assessment (ACCA) algorithm. The Tegra K1 achieved 51 for the ACCA algorithm and 20 for the dimension reduction algorithm, compared to the performance of a high-end 8-core Intel Xeon server CPU with 13.5 times higher power consumption.
NASA Astrophysics Data System (ADS)
Schweber, Silvan S.
2014-06-01
Some facets of the life of Hans Bethe after World War II are presented to illustrate how Paul Forman's works, and in particular his various theses—on mathematics and physics in Wilhelmine and Weimar Germany, on physics in the immediate post-World War II period, and on postmodernity—have influenced my biography of Bethe. Some aspects of the history of post-World War II quantum field theory, of solid state/condensed matter physics, and of the development of neoliberalism—the commitment to the belief that the market knows best, to free trade, to enhanced privatization, and to a drastic reduction of the government's role in regulating the economy—are reviewed in order to make some observations regarding certain "top-down" views in solid state physics in postmodernity, the economic and cultural condition of many Western societies since the 1980s, the decade in which many historians assume modernity to have ended.
Continuous Adaptive Population Reduction (CAPR) for Differential Evolution Optimization.
Wong, Ieong; Liu, Wenjia; Ho, Chih-Ming; Ding, Xianting
2017-06-01
Differential evolution (DE) has been applied extensively in drug combination optimization studies in the past decade. It allows for identification of desired drug combinations with minimal experimental effort. This article proposes an adaptive population-sizing method for the DE algorithm. Our new method presents improvements in terms of efficiency and convergence over the original DE algorithm and the constant stepwise population reduction-based DE algorithm, which would lead to a reduced number of cells and animals required to identify an optimal drug combination. The method continuously adjusts the reduction of the population size in accordance with the stage of the optimization process. Our adaptive scheme limits the population reduction to occur only at the exploitation stage. We believe that continuously adjusting for a more effective population size during the evolutionary process is the major reason for the significant improvement in the convergence speed of the DE algorithm. The performance of the method is evaluated through a set of unimodal and multimodal benchmark functions. In combination with self-adaptive schemes for mutation and crossover constants, this adaptive population reduction method can help shed light on the future direction of a completely parameter tune-free self-adaptive DE algorithm.
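Population reduction in DE can be sketched with a linearly shrinking population (an illustration of the general idea only, not the authors' CAPR schedule, which confines reduction to the exploitation stage; all parameter values are assumptions):

```python
import random

def de_optimize(f, bounds, np0=30, np_min=6, gens=200, F=0.7, CR=0.9, seed=1):
    """DE/rand/1/bin with the population shrinking linearly toward np_min."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np0)]
    fit = [f(x) for x in pop]
    for g in range(gens):
        for i in range(len(pop)):
            a, b, c = rng.sample([j for j in range(len(pop)) if j != i], 3)
            jr = rng.randrange(dim)           # guaranteed crossover position
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == jr) else pop[i][k]
                     for k in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft <= fit[i]:                  # greedy selection
                pop[i], fit[i] = trial, ft
        # linear population reduction: drop the worst members each generation
        target = round(np0 - (np0 - np_min) * (g + 1) / gens)
        while len(pop) > max(target, np_min):
            worst = max(range(len(pop)), key=fit.__getitem__)
            pop.pop(worst)
            fit.pop(worst)
    best = min(range(len(pop)), key=fit.__getitem__)
    return pop[best], fit[best]
```

Discarding the worst members as the run progresses concentrates the remaining function evaluations on refinement, which is the efficiency argument the abstract makes for adaptive sizing.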
Gulf War Illness Inflammation Reduction Trial
2017-10-01
United States military personnel were deployed to the Kuwaiti Theater of Operations during Operation Desert Shield and Operation Desert Storm (Gulf...differential, plasma proteomics, platelet function studies, and the measurement of multiple coagulation parameters. The pilot study results provide strong
What Should This Fight Be Called? Metaphors of Counterterrorism and Their Implications.
Kruglanski, Arie W; Crenshaw, Martha; Post, Jerrold M; Victoroff, Jeff
2007-12-01
This monograph examines from a psychological perspective the use of metaphors in framing counterterrorism. Four major counterterrorism metaphors are considered, namely those of war, law enforcement, containment of a social epidemic, and a process of prejudice reduction. The war metaphor is as follows: wars are fought by states; the enemy is thus an identifiable entity whose interests fundamentally oppose your own. The conflict is zero-sum, the outcome will be victory for one side or the other, and there is no compromise. The war metaphor is totalistic and extreme. Arguably, it was adopted in light of the immensity of damage and national hurt produced by the 9/11 attack. It has insinuated itself into the public discourse about counterterrorism, and it has guided policy, but it has also met challenges because of lack of fit and the availability of counteranalogies with different lessons of history. Some of the drawbacks of the war metaphor are addressable in the law enforcement metaphor of counterterrorism. Unlike war's special status and circumscribed duration, law enforcement is an ongoing concern that must compete for resources with other societal needs. A major advantage of law enforcement over warfare is its focused nature: it targets the actual terrorists, with less likelihood of injuring innocent parties. Yet despite its advantages, the law enforcement metaphor exhibits a partial mismatch with the realities of terrorism. Its complete and uncritical adoption may temporarily hamper terrorists' ability to launch attacks without substantially altering their motivation to do so. The public health epidemiological model was usefully applied to the epidemic of terror that followed the 9/11 attacks. It utilizes a partition between (a) an external agent, (b) a susceptible host, (c) an environment that brings them together, and (d) the vector that enables transmission of the disease.
In the specific application to jihadist terrorism, the agent refers to the militant Islamist ideology, the susceptible host refers to radicalizable Muslim populations, the environment refers to conditions that promote the readiness to embrace such ideology, and the vectors are conduits whereby the ideology is propagated. The epidemiological metaphor has its own advantages over the war and law enforcement metaphors, but also limitations. Whereas the latter metaphors neglect the long-range process of ideological conversion and radicalization that creates terrorists, the epidemiological metaphor neglects the "here and now" of counterterrorism and the value of resolute strikes and intelligence-gathering activities needed to counter terrorists' concrete schemes and capabilities. Framing counterterrorism as the process of prejudice reduction addresses the interaction between two communities whose conflict may breed terrorism. This framing shifts the focus from a unilateral to a bilateral concern and acknowledges the contribution to intergroup tensions that the party targeted by terrorists may make. A major tool of prejudice reduction is the creation of positive contact between members of the conflicted groups. Efforts at prejudice reduction via positive contact need to take place in the context of a larger set of policies, such as those concerning immigration laws, educational programs, and foreign policy initiatives designed to augment the good-will-generating efforts of optimal-contact programs. For all its benefits, the prejudice-reduction framework is also not without its drawbacks. Specifically, the positive-contact notion highlights the benefits of mere human interaction; it disregards differences in ideological beliefs between the interacting parties, thereby neglecting an element that appears essential to producing their estrangement and reciprocal animosity. 
Like the epidemiological metaphor, the prejudice-reduction framing takes the long view, thereby neglecting the "here and now" of terrorism and the need to counter specific terrorist threats. Thus, each of the foregoing frameworks captures some aspects of counterterrorism's effects while neglecting others. Accordingly, an integrated approach to counterterrorism is called for, one that exploits the insights of each metaphor and avoids its pitfalls. Such an approach would maximize the likelihood of enlightened decision making concerning contemplated counterterrorist moves, given the complex tradeoffs that each such move typically entails. © 2008 Association for Psychological Science.
A review on the multivariate statistical methods for dimensional reduction studies
NASA Astrophysics Data System (ADS)
Aik, Lim Eng; Kiang, Lam Chee; Mohamed, Zulkifley Bin; Hong, Tan Wei
2017-05-01
In this research study we review multivariate statistical methods for dimensionality reduction developed by various researchers. Reducing dimensionality is valuable for accelerating algorithm training, and it may also improve the final classification or clustering accuracy. Noisy or erroneous input data often leads to less than desirable algorithm performance. Removing uninformative or misleading data components can indeed help an algorithm find more general classification regions and rules and, overall, achieve better performance on new data sets.
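A representative method from this family, principal component analysis via the singular value decomposition, can be sketched briefly (a standard technique, not tied to any particular study discussed in the review):

```python
import numpy as np

def pca(X, k):
    """Project X (n_samples x n_features) onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                       # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                          # scores in the reduced space
```

For data whose true structure is low-dimensional, the projection retains essentially all of the variance while discarding the uninformative directions, which is exactly the benefit the review attributes to dimensionality reduction.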
Integrand-level reduction of loop amplitudes by computational algebraic geometry methods
NASA Astrophysics Data System (ADS)
Zhang, Yang
2012-09-01
We present an algorithm for the integrand-level reduction of multi-loop amplitudes of renormalizable field theories, based on computational algebraic geometry. This algorithm uses (1) the Gröbner basis method to determine the basis for integrand-level reduction, and (2) the primary decomposition of an ideal to classify all inequivalent solutions of unitarity cuts. The resulting basis and cut solutions can be used to reconstruct the integrand from unitarity cuts via polynomial fitting techniques. The basis determination part of the algorithm has been implemented in the Mathematica package BasisDet. The primary decomposition part can be readily carried out by algebraic geometry software, given the output of the package BasisDet. The algorithm works in both D = 4 and D = 4 - 2ɛ dimensions, and we present two- and three-loop examples of applications of this algorithm.
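The final step, reconstructing the integrand by polynomial fitting, can be illustrated with a one-variable toy version (a sketch only; the multi-loop case fits coefficients of a multivariate basis evaluated on the cut solutions):

```python
import numpy as np

def fit_integrand(samples_x, samples_y, degree):
    """Least-squares fit of polynomial coefficients from sampled values,
    analogous to reconstructing integrand coefficients from cut samples."""
    V = np.vander(samples_x, degree + 1, increasing=True)  # basis monomials
    coeffs, *_ = np.linalg.lstsq(V, samples_y, rcond=None)
    return coeffs
```

Once the basis is fixed (by the Gröbner basis computation in the paper), the reconstruction is linear algebra: evaluate the basis at sample points and solve for the coefficients.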
Noise Reduction with Microphone Arrays for Speaker Identification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, Z
Reducing acoustic noise in audio recordings is an ongoing problem that plagues many applications. This noise is hard to reduce because of interfering sources and the non-stationary behavior of the overall background noise. Many single-channel noise reduction algorithms exist but are limited: the more the noise is reduced, the more the signal of interest is distorted, because the signal and noise overlap in frequency. Acoustic background noise causes particular problems in the area of speaker identification. Recording a speaker in the presence of acoustic noise ultimately limits the performance and confidence of speaker identification algorithms. In situations where it is impossible to control the environment where the speech sample is taken, noise reduction filtering algorithms need to be developed to clean the recorded speech of background noise. Because single-channel noise reduction algorithms would distort the speech signal, the overall challenge of this project was to see if spatial information provided by microphone arrays could be exploited to aid in speaker identification. The goals are: (1) test the feasibility of using microphone arrays to reduce background noise in speech recordings; (2) characterize and compare different multichannel noise reduction algorithms; (3) provide recommendations for using these multichannel algorithms; and (4) ultimately answer the question: can the use of microphone arrays aid in speaker identification?
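The core multichannel idea can be sketched with a delay-and-sum beamformer (a minimal sketch assuming known integer-sample delays; real arrays require delay estimation and fractional-delay interpolation):

```python
import numpy as np

def delay_and_sum(signals, delays):
    """Time-align each microphone channel and average.

    signals: array of shape (n_mics, n_samples)
    delays:  integer sample delay of the source at each microphone
    """
    aligned = [np.roll(s, -d) for s, d in zip(signals, delays)]
    return np.mean(aligned, axis=0)   # coherent signal adds, noise averages out
```

Because the source adds coherently while uncorrelated noise averages incoherently, an M-microphone array reduces the noise power by roughly a factor of M without distorting the signal spectrum, which is exactly the advantage over single-channel filtering described above.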
A comparative intelligibility study of single-microphone noise reduction algorithms.
Hu, Yi; Loizou, Philipos C
2007-09-01
The evaluation of the intelligibility of noise reduction algorithms is reported. IEEE sentences and consonants were corrupted by four types of noise, including babble, car, street and train, at two signal-to-noise ratio levels (0 and 5 dB), and then processed by eight speech enhancement methods encompassing four classes of algorithms: spectral subtractive, subspace, statistical model based and Wiener-type algorithms. The enhanced speech was presented to normal-hearing listeners for identification. With the exception of a single noise condition, no algorithm produced significant improvements in speech intelligibility. Information transmission analysis of the consonant confusion matrices indicated that no algorithm significantly improved the place feature score, which is critically important for speech recognition. The algorithms that were found in previous studies to perform best in terms of overall quality were not the same algorithms that performed best in terms of speech intelligibility. The subspace algorithm, for instance, was previously found to perform the worst in terms of overall quality, but performed well in the present study in terms of preserving speech intelligibility. Overall, the analysis of consonant confusion matrices suggests that in order for noise reduction algorithms to improve speech intelligibility, they need to improve the place and manner feature scores.
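The first of the four algorithm classes, spectral subtraction, can be sketched in its single-frame form (a toy sketch; practical enhancers process overlapping frames and smooth the noise estimate over time):

```python
import numpy as np

def spectral_subtract(noisy, noise_mag, floor=0.01):
    """Single-frame magnitude spectral subtraction with a spectral floor."""
    Y = np.fft.rfft(noisy)
    # subtract the estimated noise magnitude, keeping a small spectral floor
    mag = np.maximum(np.abs(Y) - noise_mag, floor * np.abs(Y))
    return np.fft.irfft(mag * np.exp(1j * np.angle(Y)), n=len(noisy))
```

The noisy phase is reused unchanged; only magnitudes are modified. The residual "musical noise" of this class comes from bins that randomly survive the subtraction, one reason quality and intelligibility rankings can diverge as the study found.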
Medical costs of war in 2035: long-term care challenges for veterans of Iraq and Afghanistan.
Geiling, James; Rosen, Joseph M; Edwards, Ryan D
2012-11-01
War-related medical costs for U.S. veterans of Iraq and Afghanistan may be enormous because of differences between these wars and previous conflicts: (1) Many veterans survive injuries that would have killed them in past wars, and (2) improvised explosive device attacks have caused "polytraumatic" injuries (multiple amputations; brain injury; severe facial trauma or blindness) that require decades of costly rehabilitation. In 2035, today's veterans will be middle-aged, with health issues like those seen in aging Vietnam veterans, complicated by comorbidities of posttraumatic stress disorder, traumatic brain injury, and polytrauma. This article cites emerging knowledge about best practices that have demonstrated cost-effectiveness in mitigating the medical costs of war. We propose that clinicians employ early interventions (trauma care, physical therapy, early post-traumatic stress disorder diagnosis) and preventive health programs (smoking cessation, alcohol-abuse counseling, weight control, stress reduction) to treat primary medical conditions now so that we can avoid treating costly secondary and tertiary complications in 2035. (We should help an amputee reduce his cholesterol and maintain his weight at age 30, rather than treating his heart disease or diabetes at age 50.) Appropriate early interventions for primary illness should preserve veterans' functional status, ensure quality clinical care, and reduce the potentially enormous cost burden of their future health care.
Haley, R W; Fleckenstein, J L; Marshall, W W; McDonald, G G; Kramer, G L; Petty, F
2000-09-01
Many complaints of Gulf War veterans are compatible with a neurologic illness involving the basal ganglia. In 12 veterans with Haley Gulf War syndrome 2 and in 15 healthy control veterans of similar age, sex, and educational level, we assessed functioning neuronal mass in both basal ganglia by measuring the ratio of N-acetyl-aspartate to creatine with proton magnetic resonance spectroscopy. Central dopamine activity was assessed by measuring the ratio of plasma homovanillic acid (HVA) to 3-methoxy-4-hydroxyphenylglycol (MHPG). The logarithm of the age-standardized HVA/MHPG ratio was inversely associated with functioning neuronal mass in the left basal ganglia (R(2) = 0.56; F(1,27) = 33.82; P < .001) but not with that in the right (R(2) = 0.04; F(1,26) = 1.09; P = .30). Controlling for age, renal clearances of creatinine and weak organic anions, handedness, and smoking did not substantially alter the associations. The reduction in functioning neuronal mass in the left basal ganglia of these veterans with Gulf War syndrome seems to have altered central dopamine production in a lateralized pattern. This finding supports the theory that Gulf War syndrome is a neurologic illness, in part related to injury to dopaminergic neurons in the basal ganglia.
Palgi, Yuval; Ben-Ezra, Menachem; Possick, Chaya
2012-02-01
The current study presents a pilot demonstration of a new therapeutic procedure to mitigate symptoms of post-traumatic stress disorder (PTSD). The pilot took place during the Second Lebanon War. Vulnerability and resilience statements, as well as post-traumatic symptoms, were measured among special army administrative staff (SAAS) who worked in a hospital setting during extreme and prolonged war stress. All 13 soldiers in the unit studied participated in seven group therapy intervention sessions. It was hypothesized that shifting the focus of therapeutic intervention from the scenes of the events to the personal and professional narratives of preparing for the event would change the content of the soldiers' narratives. It was believed that subtracting the number of positive statements from the number of negative statements would yield increasingly higher "resilience scores" during and after the war. It also was believed that such a change would be reflected in reduction of post-traumatic symptoms. As expected, the participants showed a decrease in vulnerability and an increase in resilience contents, as well as a decrease in traumatic symptoms during and after the war. These findings may reflect the effects of the ceasefire, the mutually supportive attitude of the participants, and the therapeutic interventions.
DECISION SUPPORT SYSTEM TO ENHANCE AND ENCOURAGE SUSTAINABLE CHEMICAL PROCESS DESIGN
There is an opportunity to minimize the potential environmental impacts (PEIs) of industrial chemical processes by providing process designers with timely data and models elucidating environmentally favorable design options. The second generation of the Waste Reduction (WAR) algo...
Algorithm for AEEG data selection leading to wireless and long term epilepsy monitoring.
Casson, Alexander J; Yates, David C; Patel, Shyam; Rodriguez-Villegas, Esther
2007-01-01
High quality, wireless ambulatory EEG (AEEG) systems that can operate over extended periods of time are not currently feasible due to the high power consumption of wireless transmitters. Previous work has thus proposed data reduction by transmitting only sections of data that contain candidate epileptic activity. This paper investigates algorithms by which this data selection can be carried out. It is essential that the algorithm is low power and that all possible features are identified, even at the expense of more false detections. Given this, a brief review of spike detection algorithms is carried out with a view to using these algorithms to drive the data reduction process. A CWT based algorithm is deemed most suitable for use; the algorithm is described in detail and its performance tested. It is found that over 90% of expert-marked spikes are identified whilst giving a 40% reduction in the amount of data to be transmitted and analysed. The performance varies with the recording duration in response to each detection, and this effect is also investigated. The proposed algorithm will form the basis of a new AEEG system that allows wireless and longer term epilepsy monitoring.
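The flavor of a wavelet-based spike detector can be sketched with a Ricker (Mexican hat) wavelet and a robust threshold (an illustrative sketch, not the paper's algorithm; the wavelet widths and the 4-sigma threshold are assumptions):

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican hat) wavelet of width parameter a."""
    t = np.arange(points) - (points - 1) / 2
    return (1 - (t / a) ** 2) * np.exp(-t ** 2 / (2 * a ** 2))

def detect_spikes(x, widths=(2, 4, 8), thresh=4.0):
    """Flag samples whose wavelet response at any scale exceeds
    thresh standard deviations, estimated robustly via the MAD."""
    hits = np.zeros(len(x), dtype=bool)
    for a in widths:
        r = np.convolve(x, ricker(10 * a + 1, a), mode='same')
        dev = np.abs(r - np.median(r))
        mad = np.median(dev) + 1e-12
        hits |= dev > thresh * 1.4826 * mad   # 1.4826 * MAD ~ Gaussian sigma
    return np.where(hits)[0]
```

Flagging liberally at several scales matches the paper's design goal: catch every candidate spike, accept some false detections, and let the transmitter send only the flagged sections.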
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Pryputniewicz, Ryszard J.
2002-06-01
Effective suppression of speckle noise content in interferometric data images can help in improving accuracy and resolution of the results obtained with interferometric optical metrology techniques. In this paper, novel speckle noise reduction algorithms based on the discrete wavelet transformation are presented. The algorithms proceed by: (a) estimating the noise level contained in the interferograms of interest, (b) selecting wavelet families, (c) applying the wavelet transformation using the selected families, (d) wavelet thresholding, and (e) applying the inverse wavelet transformation, producing denoised interferograms. The algorithms are applied to the different stages of the processing procedures utilized for generation of quantitative speckle correlation interferometry data of fiber-optic based opto-electronic holography (FOBOEH) techniques, allowing identification of optimal processing conditions. It is shown that wavelet algorithms are effective for speckle noise reduction while preserving image features otherwise faded with other algorithms.
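Steps (b)–(e) of the procedure above can be sketched with the simplest wavelet family, a one-level Haar transform with soft thresholding (a toy sketch; the threshold value is an assumption, and the paper's algorithms first estimate the noise level and select among richer wavelet families):

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar transform, soft-threshold the details, invert.
    len(x) must be even."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)          # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)          # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)   # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)                # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y
```

Speckle-like noise spreads its energy across the detail coefficients while smooth fringe structure concentrates in the approximation, so thresholding the details removes noise with little damage to the underlying features, which is the property the paper exploits.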
Bromuri, Stefano; Zufferey, Damien; Hennebert, Jean; Schumacher, Michael
2014-10-01
This research is motivated by the issue of classifying illnesses of chronically ill patients for decision support in clinical settings. Our main objective is to propose multi-label classification of multivariate time series contained in medical records of chronically ill patients, by means of quantization methods, such as bag of words (BoW), and multi-label classification algorithms. Our second objective is to compare supervised dimensionality reduction techniques to state-of-the-art multi-label classification algorithms. The hypothesis is that kernel methods and locality preserving projections make such algorithms good candidates to study multi-label medical time series. We combine BoW and supervised dimensionality reduction algorithms to perform multi-label classification on health records of chronically ill patients. The considered algorithms are compared with state-of-the-art multi-label classifiers in two real world datasets. Portavita dataset contains 525 diabetes type 2 (DT2) patients, with co-morbidities of DT2 such as hypertension, dyslipidemia, and microvascular or macrovascular issues. MIMIC II dataset contains 2635 patients affected by thyroid disease, diabetes mellitus, lipoid metabolism disease, fluid electrolyte disease, hypertensive disease, thrombosis, hypotension, chronic obstructive pulmonary disease (COPD), liver disease and kidney disease. The algorithms are evaluated using multi-label evaluation metrics such as hamming loss, one error, coverage, ranking loss, and average precision. Non-linear dimensionality reduction approaches behave well on medical time series quantized using the BoW algorithm, with results comparable to state-of-the-art multi-label classification algorithms. Chaining the projected features has a positive impact on the performance of the algorithm with respect to pure binary relevance approaches. 
The evaluation highlights the feasibility of representing medical health records using the BoW for multi-label classification tasks. The study also highlights that dimensionality reduction algorithms based on kernel methods, locality preserving projections or both are good candidates to deal with multi-label classification tasks in medical time series with many missing values and high label density. Copyright © 2014 Elsevier Inc. All rights reserved.
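The BoW quantization step described above can be sketched as window extraction plus nearest-codeword histogramming (a minimal sketch with a fixed, hand-built codebook; in the study the codebook would be learned from the time series, e.g. by clustering):

```python
import numpy as np

def bow_histogram(series, codebook, win=4):
    """Slide a window over the series, assign each window to its nearest
    codeword, and return the normalized histogram of codeword counts."""
    windows = np.array([series[i:i + win]
                        for i in range(len(series) - win + 1)])
    # squared Euclidean distance from every window to every codeword
    dists = ((windows[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    counts = np.bincount(dists.argmin(1), minlength=len(codebook))
    return counts / counts.sum()
```

The histogram is a fixed-length vector regardless of the series' length or missing-value gaps, which is what makes BoW a convenient front end for the multi-label classifiers compared in the study.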
Annual Report of the Secretary of Defense on Reserve Forces
1970-01-01
Aviation. The Southeast Asia incremental phase-down resulted in the infusion of relatively new aircraft into the Reserve Components' inventory in FY 70. ... Lessons learned from the recall, employment and subsequent inactivation of two reserve mobile construction battalions in fiscal years ... ...tional War Reserve Stock) and recoverable assets from reduction of Southeast Asia operations are expected to credit stock now on hand. Current Navy
A fuzzy reinforcement learning approach to power control in wireless transmitters.
Vengerov, David; Bambos, Nicholas; Berenji, Hamid R
2005-08-01
We address the issue of power-controlled shared channel access in wireless networks supporting packetized data traffic. We formulate this problem using the dynamic programming framework and present a new distributed fuzzy reinforcement learning algorithm (ACFRL-2) capable of adequately solving a class of problems to which the power control problem belongs. Our experimental results show that the algorithm converges almost deterministically to a neighborhood of optimal parameter values, as opposed to a very noisy stochastic convergence of earlier algorithms. The main tradeoff facing a transmitter is to balance its current power level with future backlog in the presence of stochastically changing interference. Simulation experiments demonstrate that the ACFRL-2 algorithm achieves significant performance gains over the standard power control approach used in CDMA2000. Such a large improvement is explained by the fact that ACFRL-2 allows transmitters to learn implicit coordination policies, which back off under stressful channel conditions as opposed to engaging in escalating "power wars."
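As a point of reference for the power control problem described above, the classical distributed iteration can be sketched (a Foschini-Miljanic style baseline, not the ACFRL-2 algorithm; gain and rate values are illustrative):

```python
def power_control(gains, noise, target_sinr, iters=100, p_max=100.0):
    """Each transmitter scales its power by target SINR over achieved SINR.
    gains[i][j] is the channel gain from transmitter j to receiver i."""
    n = len(gains)
    p = [1.0] * n
    for _ in range(iters):
        sinr = [gains[i][i] * p[i] /
                (noise + sum(gains[i][j] * p[j] for j in range(n) if j != i))
                for i in range(n)]
        # fully distributed update: needs only the link's own measured SINR
        p = [min(target_sinr / sinr[i] * p[i], p_max) for i in range(n)]
    return p
```

This baseline always drives toward a fixed SINR target; the reinforcement-learning approach in the paper instead learns when to back off, trading current power against future backlog under changing interference.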
Finding minimum spanning trees more efficiently for tile-based phase unwrapping
NASA Astrophysics Data System (ADS)
Sawaf, Firas; Tatam, Ralph P.
2006-06-01
The tile-based phase unwrapping method employs an algorithm for finding the minimum spanning tree (MST) in each tile. We first examine the properties of a tile's representation from a graph theory viewpoint, observing that it is possible to make use of a more efficient class of MST algorithms. We then describe a novel linear time algorithm which reduces the size of the MST problem by half at the least, and solves it completely at best. We also show how this algorithm can be applied to a tile using a sliding window technique. Finally, we show how the reduction algorithm can be combined with any other standard MST algorithm to achieve a more efficient hybrid, using Prim's algorithm for empirical comparison and noting that the reduction algorithm takes only 0.1% of the time taken by the overall hybrid.
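Prim's algorithm, used above for empirical comparison, can be sketched with a binary heap (a generic sketch, independent of the tile reduction step):

```python
import heapq

def prim_mst(n, edges):
    """Total weight of the minimum spanning tree of a connected graph.
    edges: list of (weight, u, v) with vertices 0..n-1."""
    adj = [[] for _ in range(n)]
    for w, u, v in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))
    seen = [False] * n
    heap = [(0, 0)]           # (edge weight into the tree, vertex)
    total = 0
    while heap:
        w, u = heapq.heappop(heap)
        if seen[u]:
            continue          # stale entry: u already joined the tree
        seen[u] = True
        total += w
        for wv, v in adj[u]:
            if not seen[v]:
                heapq.heappush(heap, (wv, v))
    return total
```

With a binary heap this runs in O(E log V); the paper's point is that its reduction pass shrinks E and V first, so whichever MST algorithm follows has less work to do.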
Shcherbakov, V V
2000-01-01
The paper discusses problems in the organization of identification studies under conditions of mass deaths, as exemplified by forensic medical records of medico-criminological identification studies of subjects killed during the war conflict in Chechnya. The evolution of the organizational model of identification studies is shown: transformation of the organizational philosophy, formation of expert algorithms, and the formalization and technological realization of expert solutions.
Orders of Magnitude: A History of NACA and NASA, 1915 - 1980
NASA Technical Reports Server (NTRS)
Anderson, F. W., Jr.
1981-01-01
The history of NACA and NASA from 1915 to 1980 is narrated. The impact of two world wars on aeronautics is reviewed. Research activity before and during World War II is presented. Postwar exploitation of new technologies is summarized. The creation of NASA and a comprehensive space program is discussed. Long range planning for a lunar mission is described. The Gemini project is reviewed. The Apollo project and side effects, including NASA's university and technology transfer programs, are presented. Numerous scientific and application satellite projects are reviewed. The impact of budget reductions is explained. The value of space exploration is emphasized. Development of the Space Shuttle is reported.
Islamic Revival in the Balkans
2006-03-01
Report date: March 2006. ...different from the US Global War on Terror strategic framework. 125 pages. Subject terms: Balkans, Bosnia, Bulgaria, Global Salafi...
Random intermittent search and the tug-of-war model of motor-driven transport
NASA Astrophysics Data System (ADS)
Newby, Jay; Bressloff, Paul C.
2010-04-01
We formulate the 'tug-of-war' model of microtubule cargo transport by multiple molecular motors as an intermittent random search for a hidden target. A motor complex consisting of multiple molecular motors with opposing directional preference is modeled using a discrete Markov process. The motors randomly pull each other off of the microtubule so that the state of the motor complex is determined by the number of bound motors. The tug-of-war model prescribes the state transition rates and corresponding cargo velocities in terms of experimentally measured physical parameters. We add space to the resulting Chapman-Kolmogorov (CK) equation so that we can consider delivery of the cargo to a hidden target at an unknown location along the microtubule track. The target represents some subcellular compartment such as a synapse in a neuron's dendrites, and target delivery is modeled as a simple absorption process. Using a quasi-steady-state (QSS) reduction technique we calculate analytical approximations of the mean first passage time (MFPT) to find the target. We show that there exists an optimal adenosine triphosphate (ATP) concentration that minimizes the MFPT for two different cases: (i) the motor complex is composed of equal numbers of kinesin motors bound to two different microtubules (symmetric tug-of-war model) and (ii) the motor complex is composed of different numbers of kinesin and dynein motors bound to a single microtubule (asymmetric tug-of-war model).
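The discrete Markov process for the number of bound motors can be sketched as a birth-death chain simulated with the Gillespie algorithm (a toy sketch that ignores the two opposing motor species, the cargo position and the target; the rates are illustrative, not the paper's measured parameters):

```python
import random

def gillespie_bound_motors(n_max, k_on, k_off, t_end, seed=0):
    """Time-averaged occupancy of each bound-motor count n = 0..n_max.
    Binding occurs at rate k_on*(n_max - n), unbinding at rate k_off*n."""
    rng = random.Random(seed)
    t, n = 0.0, n_max
    occupancy = [0.0] * (n_max + 1)
    while t < t_end:
        rate_on = k_on * (n_max - n)
        rate_off = k_off * n
        total = rate_on + rate_off
        dt = rng.expovariate(total)            # exponential waiting time
        occupancy[n] += min(dt, t_end - t)     # accumulate time spent in state n
        t += dt
        if rng.random() * total < rate_on:
            n += 1
        else:
            n -= 1
    return [o / t_end for o in occupancy]
```

For this single-species chain the stationary occupancy is binomial with success probability k_on/(k_on + k_off); the paper's model attaches a cargo velocity to each state and adds space, yielding the Chapman-Kolmogorov equation analyzed by QSS reduction.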
IMM tracking of a theater ballistic missile during boost phase
NASA Astrophysics Data System (ADS)
Hutchins, Robert G.; San Jose, Anthony
1998-09-01
Since the SCUD launches in the Gulf War, theater ballistic missile (TBM) systems have become a growing concern for the US military. Detection, tracking and engagement during boost phase or shortly after booster cutoff are goals that grow in importance with the proliferation of weapons of mass destruction. This paper addresses the performance of tracking algorithms for TBMs during boost phase and across the transition to ballistic flight. Three families of tracking algorithms are examined: alpha-beta-gamma trackers, Kalman-based trackers, and the interactive multiple model (IMM) tracker. In addition, a variation on the IMM to include prior knowledge of a booster cutoff parameter is examined. Simulated data is used to compare algorithms. Also, the IMM tracker is run on an actual ballistic missile trajectory. Results indicate that IMM trackers show significant advantage in tracking through the model transition represented by booster cutoff.
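The simplest of the three tracker families compared above can be sketched as an alpha-beta filter (a generic constant-velocity sketch; the gain values are assumptions, and the paper's alpha-beta-gamma variant also estimates acceleration):

```python
def alpha_beta_track(measurements, dt=1.0, alpha=0.5, beta=0.1):
    """Alpha-beta filter: predict with constant velocity, then correct
    position and velocity with gains alpha and beta."""
    x, v = measurements[0], 0.0
    estimates = []
    for z in measurements[1:]:
        xp = x + v * dt            # predict
        r = z - xp                 # innovation (residual)
        x = xp + alpha * r         # correct position
        v = v + beta * r / dt      # correct velocity
        estimates.append(x)
    return estimates
```

A fixed-gain filter like this tracks a single motion model well but lags badly through booster cutoff, which is why the IMM, mixing boost and ballistic models, wins in the paper's comparison.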
NASA Astrophysics Data System (ADS)
Nishimaru, Eiji; Ichikawa, Katsuhiro; Okita, Izumi; Ninomiya, Yuuji; Tomoshige, Yukihiro; Kurokawa, Takehiro; Ono, Yutaka; Nakamura, Yuko; Suzuki, Masayuki
2008-03-01
Recently, several kinds of post-processing image filters that reduce the noise of computed tomography (CT) images have been proposed. However, most of these filters are designed for adults and are not very effective for small (<20 cm) display fields of view (FOV), so they cannot be used for pediatric body images (e.g., premature babies and infant children). We have developed a new noise reduction filter algorithm for pediatric body CT images. This algorithm is based on 3D post-processing in which the output pixel values are calculated by nonlinear interpolation in the z-direction on the original volumetric data sets. This algorithm does not need in-plane (axial) processing, so the spatial resolution does not change. In phantom studies, our algorithm reduced the standard deviation (SD) of noise by up to 40% without affecting the spatial resolution of the x-y plane and z-axis, and improved the contrast-to-noise ratio (CNR) by up to 30%. This newly developed filter algorithm will be useful for diagnosis and radiation dose reduction in pediatric body CT imaging.
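As a rough illustration of the idea (not the authors' filter), the sketch below blends each voxel with the average of its two z-neighbours, easing the blend off where the z-difference is large so real structure is preserved; no x-y processing is performed, so in-plane resolution is untouched. The exponential weighting function is an assumption made for illustration.

```python
import numpy as np

def z_smooth(volume, strength=0.5):
    """Toy z-direction noise-reduction sketch for a volume indexed
    (z, y, x): blend each voxel with the mean of its z-neighbours,
    weighting the blend down where the z-difference is large (likely a
    real structure boundary) and up where it is small (likely noise)."""
    v = np.asarray(volume, dtype=float)
    out = v.copy()
    neigh = 0.5 * (v[:-2] + v[2:])              # mean of z-1 and z+1 slices
    diff = np.abs(v[1:-1] - neigh)              # edge indicator along z
    w = strength * np.exp(-diff / (diff.mean() + 1e-9))
    out[1:-1] = (1.0 - w) * v[1:-1] + w * neigh
    return out
```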
NASA Astrophysics Data System (ADS)
Huang, Ding-jiang; Ivanova, Nataliya M.
2016-02-01
In this paper, we explain in more detail the modern treatment of the problem of group classification of (systems of) partial differential equations (PDEs) from the algorithmic point of view. More precisely, we revise the classical Lie algorithm for constructing symmetries of differential equations, describe the group classification algorithm, and discuss the process of reduction of (systems of) PDEs to (systems of) equations with a smaller number of independent variables in order to construct invariant solutions. The group classification algorithm and reduction process are illustrated by the example of the generalized Zakharov-Kuznetsov (GZK) equations of the form u_t + (F(u))_xxx + (G(u))_xyy + (H(u))_x = 0. As a result, a complete group classification of the GZK equations is performed and a number of new interesting nonlinear invariant models which have non-trivial invariance algebras are obtained. Lie symmetry reductions and exact solutions for two important invariant models, i.e., the classical and modified Zakharov-Kuznetsov equations, are constructed. The algorithmic framework for group analysis of differential equations presented in this paper can also be applied to other nonlinear PDEs.
Hoge, Charles W; Ivany, Christopher G; Brusher, Edward A; Brown, Millard D; Shero, John C; Adler, Amy B; Warner, Christopher H; Orman, David T
2016-04-01
The cumulative strain of 14 years of war on service members, veterans, and their families, together with continuing global threats and the unique stresses of military service, is likely to be felt for years to come. Scientific as well as political factors have influenced how the military has addressed the mental health needs resulting from these wars. Two important differences between mental health care delivered during the Iraq and Afghanistan wars and previous wars are the degree to which research has directly informed care and the consolidated management of services. The U.S. Army Medical Command implemented programmatic changes to ensure delivery of high-quality standardized mental health services, including centralized workload management; consolidation of psychiatry, psychology, psychiatric nursing, and social work services under integrated behavioral health departments; creation of satellite mental health clinics embedded within brigade work areas; incorporation of mental health providers into primary care; routine mental health screening throughout soldiers' careers; standardization of clinical outcome measures; and improved services for family members. This transformation has been accompanied by reduction in psychiatric hospitalizations and improved continuity of care. Challenges remain, however, including continued underutilization of services by those most in need, problems with treatment of substance use disorders, overuse of opioid medications, concerns with the structure of care for chronic postdeployment (including postconcussion) symptoms, and ongoing questions concerning the causes of historically high suicide rates, efficacy of resilience training initiatives, and research priorities. It is critical to ensure that remaining gaps are addressed and that knowledge gained during these wars is retained and further evolved.
Gelkopf, Marc; Lapid Pickman, Liron; Grinapol, Shulamit; Werbeloff, Nomi; Carlson, Eve B; Greene, Talya
2017-01-01
We assessed in vivo symptom courses of early psychological responses during war and investigated the influence of exposure, gender, and a prior diagnosis of severe mental illness (SMI). Participants were 181 highly exposed individuals from the general population and community psychiatric rehabilitation centers. A 30-day twice-daily Internet-smartphone-based intensive assessment two weeks into the 2014 Israel-Gaza war estimated peritraumatic symptom clusters, sense of threat, negative emotions and cognitions, and siren exposure during two periods that varied in exposure level. Piecewise growth curve modeling procedures were performed. We found different courses for most variables, gender, and SMI status. Women were more reactive two weeks into the war but reduced their reactivity level at a faster pace than men, reaching lower symptom levels one month later. Women's courses were characterized by arousal, negative emotionality, sense of threat, and reactivity to siren exposure. No-SMI men had a stable course followed by a significant reduction in arousal, negative emotions, avoidance, and perceived threat during a "return to routine" lower-level intensity period of the war. Individuals with SMI had higher reactivity levels at study onset; but while women with SMI improved over time, men with SMI worsened. SMI reactivity was characterized by negative cognitions, intrusions, and avoidance. Early reactions during prolonged exposure to war are variable, dynamic, and affected by exposure context. Symptoms, emotions, and cognitions develop differentially over time and are affected by gender and mental health status. The identification of various early stress courses should inform primary intervention strategies.
Beyond Pathologizing Harm: Understanding PTSD in the Context of War Experience.
Benner, Patricia; Halpern, Jodi; Gordon, Deborah R; Popell, Catherine Long; Kelley, Patricia W
2018-03-01
An alternative to objectifying approaches to understanding Post-traumatic Stress Disorder (PTSD) grounded in hermeneutic phenomenology is presented. Nurses who provided care for soldiers injured in the Iraq and Afghanistan wars, and sixty-seven wounded male servicemen in the rehabilitation phase of their recovery were interviewed. PTSD is the one major psychiatric diagnosis where social causation is established, yet PTSD is predominantly viewed in terms of the usual neuro-physiological causal models with traumatic social events viewed as pathogens with dose related effects. Biologic models of causation are applied reductively to both predisposing personal vulnerabilities and strengths that prevent PTSD, such as resiliency. However, framing PTSD as an objective disease state separates it from narrative historical details of the trauma. Personal stories and cultural meanings of the traumatic events are seen as epiphenomenal, unrelated to the understanding of, and ultimately, the therapeutic treatment of PTSD. Most wounded service members described classic symptoms of PTSD: flashbacks, insomnia, anxiety etc. All experienced disturbance in their sense of time and place. Rather than see the occurrence of these symptoms as decontextualized mechanistic reverberations of war, we consider how these symptoms meaningfully reflect actual war experiences and sense of displacement experienced by service members.
Structure-activity relationship study in some trifluoromethyl phenyl carboxamides
USDA-ARS?s Scientific Manuscript database
The development of new, safe public health insecticides is critical to offset the reduction of insecticide products due to concerns over the negative impacts to the ecosystem. Through funding made possible by the Deployed War-Fighter Protection program, we are developing new insecticide candidates b...
Graph embedding and extensions: a general framework for dimensionality reduction.
Yan, Shuicheng; Xu, Dong; Zhang, Benyu; Zhang, Hong-Jiang; Yang, Qiang; Lin, Stephen
2007-01-01
Over the past few decades, a large family of algorithms - supervised or unsupervised; stemming from statistics or geometry theory - has been designed to provide different solutions to the problem of dimensionality reduction. Despite the different motivations of these algorithms, we present in this paper a general formulation known as graph embedding to unify them within a common framework. In graph embedding, each algorithm can be considered as the direct graph embedding or its linear/kernel/tensor extension of a specific intrinsic graph that describes certain desired statistical or geometric properties of a data set, with constraints from scale normalization or a penalty graph that characterizes a statistical or geometric property that should be avoided. Furthermore, the graph embedding framework can be used as a general platform for developing new dimensionality reduction algorithms. By utilizing this framework as a tool, we propose a new supervised dimensionality reduction algorithm called Marginal Fisher Analysis (MFA), in which the intrinsic graph characterizes the intraclass compactness and connects each data point with its neighboring points of the same class, while the penalty graph connects the marginal points and characterizes the interclass separability. We show that MFA effectively overcomes the limitations of the traditional Linear Discriminant Analysis (LDA) algorithm due to data distribution assumptions and available projection directions. Real face recognition experiments show the superiority of our proposed MFA in comparison to LDA, also for corresponding kernel and tensor extensions.
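As a concrete member of the framework (Laplacian eigenmaps, i.e., the direct graph embedding of a neighbourhood graph, rather than MFA itself), the sketch below computes coordinates y minimising sum_ij W_ij (y_i - y_j)^2 under the scale constraint y^T D y = 1, via the generalized eigenproblem L y = lambda D y:

```python
import numpy as np

def graph_embedding(W, dim=1):
    """Direct graph embedding of a symmetric affinity matrix W: return the
    coordinates given by the generalized eigenvectors of (L, D) with the
    smallest non-trivial eigenvalues, where L = D - W is the graph
    Laplacian and D the degree matrix (Laplacian eigenmaps)."""
    deg = W.sum(axis=1)
    d = np.sqrt(deg)
    L = np.diag(deg) - W
    # symmetric normalized form D^{-1/2} L D^{-1/2} for a stable eigensolve
    Ln = L / d[:, None] / d[None, :]
    vals, vecs = np.linalg.eigh(Ln)
    # skip the trivial eigenvector (eigenvalue ~ 0), undo the normalization
    return (vecs[:, 1:1 + dim].T / d).T
```

For MFA, W would be the intraclass k-nearest-neighbour graph and the constraint matrix would come from a penalty graph over marginal interclass pairs instead of D.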
Controller reduction by preserving impulse response energy
NASA Technical Reports Server (NTRS)
Craig, Roy R., Jr.; Su, Tzu-Jeng
1989-01-01
A model order reduction algorithm based on a Krylov recurrence formulation is developed to reduce the order of controllers. The reduced-order controller is obtained by projecting the full-order LQG controller onto a Krylov subspace in which either the controllability or the observability grammian is equal to the identity matrix. The reduced-order controller preserves the impulse response energy of the full-order controller and has a parameter-matching property. Two numerical examples drawn from other controller reduction literature are used to illustrate the efficacy of the proposed reduction algorithm.
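The paper's algorithm builds the Krylov basis with a recurrence and grammian normalisation; the sketch below shows only the generic core of the idea, a Galerkin projection onto span{B, AB, ..., A^(r-1)B}, which already has a parameter-matching property (the first r Markov parameters C A^k B are preserved):

```python
import numpy as np

def krylov_reduce(A, B, C, r):
    """Project a single-input state-space model (A, B, C) onto an
    orthonormal basis of the Krylov subspace span{B, AB, ..., A^{r-1}B}.
    The reduced model (Ar, Br, Cr) matches the Markov parameters
    C A^k B for k = 0, ..., r-1."""
    n = A.shape[0]
    K = np.empty((n, r))
    v = B.ravel().astype(float)
    for j in range(r):                  # build the Krylov sequence
        K[:, j] = v
        v = A @ v
    Q, _ = np.linalg.qr(K)              # orthonormal basis of the subspace
    return Q.T @ A @ Q, Q.T @ B, C @ Q
```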
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao Daliang; Earl, Matthew A.; Luan, Shuang
2006-04-15
A new leaf-sequencing approach has been developed that is designed to reduce the number of required beam segments for step-and-shoot intensity modulated radiation therapy (IMRT). This approach to leaf sequencing is called continuous-intensity-map-optimization (CIMO). Using a simulated annealing algorithm, CIMO seeks to minimize differences between the optimized and sequenced intensity maps. Two distinguishing features of the CIMO algorithm are (1) CIMO does not require that each optimized intensity map be clustered into discrete levels and (2) CIMO is not rule-based but rather simultaneously optimizes both the aperture shapes and weights. To test the CIMO algorithm, ten IMRT patient cases were selected (four head-and-neck, two pancreas, two prostate, one brain, and one pelvis). For each case, the optimized intensity maps were extracted from the Pinnacle³ treatment planning system. The CIMO algorithm was applied, and the optimized aperture shapes and weights were loaded back into Pinnacle. A final dose calculation was performed using Pinnacle's convolution/superposition based dose calculation. On average, the CIMO algorithm provided a 54% reduction in the number of beam segments as compared with Pinnacle's leaf sequencer. The plans sequenced using the CIMO algorithm also provided improved target dose uniformity and a reduced discrepancy between the optimized and sequenced intensity maps. For ten clinical intensity maps, comparisons were performed between the CIMO algorithm and the power-of-two reduction algorithm of Xia and Verhey [Med. Phys. 25(8), 1424-1434 (1998)]. When the constraints of a Varian Millennium multileaf collimator were applied, the CIMO algorithm resulted in a 26% reduction in the number of segments. For an Elekta multileaf collimator, the CIMO algorithm resulted in a 67% reduction in the number of segments. An average leaf sequencing time of less than one minute per beam was observed.
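The CIMO objective — minimise the difference between the optimized and sequenced intensity maps over aperture shapes and weights — can be miniaturised to a weights-only toy. The sketch below anneals the weights of two fixed 1-D apertures toward a target profile; the cooling schedule, step size, and all values are illustrative choices, not the paper's implementation.

```python
import numpy as np

def anneal_weights(apertures, target, n_iter=5000, t0=1.0, seed=1):
    """Simulated-annealing sketch: find non-negative weights whose
    superposition of fixed binary aperture profiles best matches a
    target intensity profile (sum-of-squares cost)."""
    rng = np.random.default_rng(seed)

    def cost(u):
        return float(np.sum((apertures.T @ u - target) ** 2))

    w = np.ones(len(apertures))
    c = cost(w)
    for i in range(n_iter):
        temp = t0 * (1.0 - i / n_iter) + 1e-9        # linear cooling
        cand = np.maximum(0.0, w + rng.normal(scale=0.1, size=w.shape))
        cc = cost(cand)
        # accept improvements always, uphill moves with Boltzmann probability
        if cc < c or rng.random() < np.exp((c - cc) / temp):
            w, c = cand, cc
    return w, c
```

The full algorithm additionally perturbs the aperture shapes themselves, subject to multileaf-collimator constraints.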
2017 INFORMS PRIZE. The Nomination of The United States Air Force
2017-02-08
practice of market design. Shapley, a World War II veteran of the Army Air Corps who received the Bronze Star for his work in breaking a Soviet weather...theorem, the Gale-Shapley algorithm, the potential game concept, market games, authority distribution, multi-person utility, and non-atomic games...new field offices where they did not yet exist. The field OA offices were organized according to these same general principles . Some analysts were
Plastic Surgery Challenges in War Wounded I: Flap-Based Extremity Reconstruction
Sabino, Jennifer M.; Slater, Julia; Valerio, Ian L.
2016-01-01
Scope and Significance: Reconstruction of traumatic injuries requiring tissue transfer begins with aggressive resuscitation and stabilization. Systematic advances in acute casualty care at the point of injury have improved survival and allowed for increasingly complex treatment before definitive reconstruction at tertiary medical facilities outside the combat zone. As a result, the complexity of the limb salvage algorithm has increased over 14 years of combat activities in Iraq and Afghanistan. Problem: Severe poly-extremity trauma in combat casualties has led to a large number of extremity salvage cases. Advanced reconstructive techniques coupled with regenerative medicine applications have played a critical role in the restoration, recovery, and rehabilitation of functional limb salvage. Translational Relevance: The past 14 years of war trauma have increased our understanding of tissue transfer for extremity reconstruction in the treatment of combat casualties. Injury patterns, flap choice, and reconstruction timing are critical variables to consider for optimal outcomes. Clinical Relevance: Subacute reconstruction with specifically chosen flap tissue and donor site location based on individual injuries result in successful tissue transfer, even in critically injured patients. These considerations can be combined with regenerative therapies to optimize massive wound coverage and limb salvage form and function in previously active patients. Summary: Traditional soft tissue reconstruction is integral in the treatment of war extremity trauma. Pedicle and free flaps are a critically important part of the reconstructive ladder for salvaging extreme extremity injuries that are seen as a result of the current practice of war. PMID:27679751
77 FR 36031 - Agency Information Collection Activities: Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-15
... monthly Social Security benefits to qualified World War II veterans residing outside the United States. An... Reduction Act of 1995, effective October 1, 1995. This notice includes revisions of OMB-approved information..., 2012. Individuals can obtain copies of the OMB clearance packages by writing to [email protected] . 1...
Military-Induced Family Separation: A Stress Reduction Intervention.
ERIC Educational Resources Information Center
Black, William G., Jr.
1993-01-01
Notes that Persian Gulf War focused public attention on the problems military families face in coping with military-induced family separation. Highlights some of the unique stressors faced by active-duty, national guard, and reserve military families. Presents practical guidelines to assist social workers in designing interventions to help these…
Toward a Dependable Peace: A Proposal for an Appropriate Security System.
ERIC Educational Resources Information Center
Johansen, Robert C.
This booklet proposes that citizens and governments think imaginatively about national and international security and take action for comprehensive arms reductions. The document is presented in eight chapters. Chapter I reports that global insecurity exists despite continuous arms control negotiations since World War II. Chapter II discusses…
20 CFR 61.102 - Disposition of reimbursement requests.
Code of Federal Regulations, 2010 CFR
2010-04-01
... STATES CLAIMS FOR COMPENSATION UNDER THE WAR HAZARDS COMPENSATION ACT, AS AMENDED Reimbursement of...' Compensation to the disallowance or reduction of a claim within 60 days of the Office's decision. A carrier outside the United States has six months within which to file objections with the Associate Director. The...
Symbolic integration of a class of algebraic functions. [by an algorithmic approach
NASA Technical Reports Server (NTRS)
Ng, E. W.
1974-01-01
An algorithm is presented for the symbolic integration of a class of algebraic functions. This class consists of functions made up of rational expressions of an integration variable x and square roots of polynomials, trigonometric and hyperbolic functions of x. The algorithm is shown to consist of the following components: (1) the reduction of input integrands to canonical form; (2) intermediate internal representations of integrals; (3) classification of outputs; and (4) reduction and simplification of outputs to well-known functions.
Saltzman, Leia Y; Solomyak, Levi; Pat-Horenczyk, Ruth
2017-06-01
This paper reviews recent literature on the mental health needs of youth in the context of war and terrorism. A human rights lens is used to explore issues of accessibility and sustainability in service utilization during times of crisis. The authors present the evolution of services over the last several decades, progressing through individual, school-based, and community-wide interventions by exploring models that focus on symptom reduction and building resilience. This paper highlights the benefits and limitations of traditional intervention methods and proposes a new frontier of intervention development and research. The authors focus on the emerging field of e-mental health services and specifically highlight the utility of virtual reality games in treating trauma-exposed youth. The rapid and easily accessible nature of e-mental health models is presented as one potential solution to barriers in accessibility that can help promote the human rights of youth exposed to war and terrorism.
Positive psychology and war: an oxymoron.
Phipps, Sean
2011-10-01
The author was deeply disturbed by the January 2011 issue of the American Psychologist, which engendered a series of emotions in the author: first dismay, then anger, and finally a sense of shame about the current state of the profession. This was ostensibly an exposition of "positive psychology" principles and how they are to be applied in a colossal experiment designed to support our military in their fight against the ideology of jihadist Islam. The author found it hard to see what was positive in the presentation. Not one of the authors in this special issue discussed applying positive psychology principles to the reduction of conflict between nations, to the prevention of war, or to the promotion of peace. How about a positive psychology that questions the wisdom of leaders who tell us that the use of force is unavoidable, and seeks instead to help them find alternative, peaceful solutions? A true positive psychology should be primarily addressed to eradicating the disease of war, not to supporting those who fight it.
Reducing the Volume of NASA Earth-Science Data
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Braverman, Amy J.; Guillaume, Alexandre
2010-01-01
A computer program reduces data generated by NASA Earth-science missions into representative clusters characterized by centroids and membership information, thereby reducing the large volume of data to a level more amenable to analysis. The program effects an autonomous data-reduction/clustering process to produce a representative distribution and joint relationships of the data, without assuming a specific type of distribution and relationship and without resorting to domain-specific knowledge about the data. The program implements a combination of a data-reduction algorithm known as the entropy-constrained vector quantization (ECVQ) and an optimization algorithm known as the differential evolution (DE). The combination of algorithms generates the Pareto front of clustering solutions that presents the compromise between the quality of the reduced data and the degree of reduction. Similar prior data-reduction computer programs utilize only a clustering algorithm, the parameters of which are tuned manually by users. In the present program, autonomous optimization of the parameters by means of the DE supplants the manual tuning of the parameters. Thus, the program determines the best set of clustering solutions without human intervention.
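A minimal sketch of the ECVQ half of that combination (the DE optimization of the quantizer parameters is omitted): Lloyd-style iterations in which the assignment cost is squared distance plus lambda times the codeword length implied by each cluster's empirical probability. The initialisation and parameter choices are illustrative assumptions.

```python
import numpy as np

def ecvq(data, k=2, lam=0.1, n_iter=20):
    """Entropy-constrained vector quantization sketch: each point joins
    the centroid minimising ||x - c_j||^2 + lam * (-log2 p_j), where p_j
    is cluster j's empirical probability (its ideal codeword length)."""
    # deterministic spread-out initialisation for reproducibility
    cent = data[np.linspace(0, len(data) - 1, k).astype(int)].astype(float)
    p = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        d2 = ((data[:, None, :] - cent[None, :, :]) ** 2).sum(-1)
        cost = d2 + lam * (-np.log2(np.maximum(p, 1e-12)))
        labels = cost.argmin(axis=1)
        for j in range(k):
            mask = labels == j
            p[j] = mask.mean()
            if mask.any():
                cent[j] = data[mask].mean(axis=0)
    return cent, labels, p
```

In the program described above, a differential-evolution loop around a quantizer like this would trade off distortion against the degree of reduction along the Pareto front.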
Adaptive intercolor error prediction coder for lossless color (RGB) picture compression
NASA Astrophysics Data System (ADS)
Mann, Y.; Peretz, Y.; Mitchell, Harvey B.
2001-09-01
Most of the current lossless compression algorithms, including the new international baseline JPEG-LS algorithm, do not exploit the interspectral correlations that exist between the color planes of an input color picture. To improve the compression performance (i.e., lower the bit rate) it is necessary to exploit these correlations. A major concern is to find efficient methods for exploiting the correlations that, at the same time, are compatible with and can be incorporated into the JPEG-LS algorithm. One such algorithm is the method of intercolor error prediction (IEP), which, when used with the JPEG-LS algorithm, results on average in a reduction of 8% in the overall bit rate. We show how the IEP algorithm can be simply modified so that it nearly doubles the reduction in bit rate, to 15%.
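A hedged sketch of the inter-colour idea, with a simple left-neighbour predictor standing in for the JPEG-LS predictor: instead of coding the green plane's own prediction residual, code the difference between the green and red residuals, which is smaller whenever the planes are correlated.

```python
import numpy as np

def pred_residual(plane):
    """Prediction residual of one colour plane using a left-neighbour
    predictor (a stand-in for the JPEG-LS median-edge predictor)."""
    res = plane.astype(float).copy()
    res[:, 1:] -= plane[:, :-1]
    return res

def iep_residual(ref_plane, plane):
    """Inter-colour error prediction: the residual of `plane` is itself
    predicted by the residual of the already-coded reference plane."""
    return pred_residual(plane) - pred_residual(ref_plane)
```

A narrower residual distribution means a lower entropy and hence a lower bit rate under the entropy coder.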
The Dostoevsky Machine in Georgetown: scientific translation in the Cold War.
Gordin, Michael D
2016-04-01
Machine Translation (MT) is now ubiquitous in discussions of translation. The roots of this phenomenon - first publicly unveiled in the so-called 'Georgetown-IBM Experiment' on 9 January 1954 - not only displayed the technological utopianism still associated with dreams of a universal computer translator, but were also deeply enmeshed in the political pressures of the Cold War and a dominating conception of scientific writing as both the goal of machine translation and its method. Machine translation was created, in part, as a solution to a perceived crisis sparked by the massive expansion of Soviet science. Scientific prose was also perceived as linguistically simpler, and so served as the model for how to turn a language into a series of algorithms. This paper follows the rise of the Georgetown program - the largest single program in the world - from 1954 to the (as it turns out, temporary) collapse of MT in 1964.
Tug-of-war lacunarity—A novel approach for estimating lacunarity
NASA Astrophysics Data System (ADS)
Reiss, Martin A.; Lemmerer, Birgit; Hanslmeier, Arnold; Ahammer, Helmut
2016-11-01
Modern instrumentation provides us with massive repositories of digital images that will likely only grow in the future. It has therefore become increasingly important to automate the analysis of digital images, e.g., with methods from pattern recognition that quantify the visual appearance of captured textures. Lacunarity is a useful multi-scale measure of textural heterogeneity, but it demands high computational effort. Here we investigate a novel approach based on the tug-of-war algorithm, which estimates lacunarity in a single pass over the image. We computed lacunarity for theoretical and real-world sample images, and found that the investigated approach is able to estimate lacunarity with low uncertainties. We conclude that the proposed method combines low computational effort with high accuracy, and that its application may have utility in the analysis of high-resolution images.
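The underlying single-pass trick is the classic AMS "tug-of-war" second-moment estimator: random ±1 signs pull each counter up or down, and the squared counter is an unbiased estimate of the sum of squares. Below is a small median-of-means sketch of that estimator (our simplification; the paper adapts the idea to box masses, from which lacunarity follows as a second-to-first-moment ratio):

```python
import numpy as np

def tug_of_war_f2(values, groups=6, per_group=16, seed=0):
    """Estimate the second moment sum(v_i**2) in a single pass over the
    values: each counter accumulates s_i * v_i with independent random
    signs s_i = +/-1, so E[counter**2] = sum(v_i**2).  Averaging within
    groups and taking the median across groups controls the variance."""
    rng = np.random.default_rng(seed)
    v = np.asarray(values, dtype=float)
    signs = rng.choice([-1.0, 1.0], size=(groups * per_group, len(v)))
    counters = signs @ v                     # one pass: counter += s_i * v_i
    group_means = (counters ** 2).reshape(groups, per_group).mean(axis=1)
    return float(np.median(group_means))
```

With the box masses M_b at scale r as `values`, lacunarity can be estimated as Lambda(r) = N * F2 / S1**2, where S1 is the (also single-pass) sum of the masses and N the number of boxes.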
Evacuation and deprivation: the wartime experience of the Devon and Exeter City Mental Hospitals.
Pearce, David
2011-09-01
In Exeter, the need for space to treat casualties in World War II led to a significant reduction in capacity at one psychiatric hospital and the closure of another. In spite of this, inpatient stays were longer than in peacetime, partly due to relatives who had to weigh up the advantages and disadvantages of having their unwell kin returned to them. In the latter years of the war, admissions from the Devon catchment area were higher than in peacetime. Having more patients who stayed longer was largely compensated for by utilizing free space as opposed to reducing admissions, leading to overcrowding and a restricted inpatient regime.
An Eigensystem Realization Algorithm (ERA) for modal parameter identification and model reduction
NASA Technical Reports Server (NTRS)
Juang, J. N.; Pappa, R. S.
1985-01-01
A method, called the Eigensystem Realization Algorithm (ERA), is developed for modal parameter identification and model reduction of dynamic systems from test data. A new approach is introduced in conjunction with the singular value decomposition technique to derive the basic formulation of minimum order realization which is an extended version of the Ho-Kalman algorithm. The basic formulation is then transformed into modal space for modal parameter identification. Two accuracy indicators are developed to quantitatively identify the system modes and noise modes. For illustration of the algorithm, examples are shown using simulation data and experimental data for a rectangular grid structure.
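A minimal single-input/single-output sketch of the ERA pipeline (Hankel matrices from impulse-response samples, SVD, order-r realization); the full algorithm adds the modal transformation and the accuracy indicators described above. This generic form is a sketch, not the paper's exact formulation.

```python
import numpy as np

def era(markov, r):
    """Eigensystem Realization sketch: from impulse-response samples
    (Markov parameters) y_1, y_2, ..., build the Hankel matrices H0 and
    its one-step shift H1, take the SVD of H0, and form an order-r
    discrete-time realization (A, B, C) with y_k = C A^{k-1} B."""
    m = len(markov) // 2
    H0 = np.array([[markov[i + j] for j in range(m)] for i in range(m)])
    H1 = np.array([[markov[i + j + 1] for j in range(m)] for i in range(m)])
    U, s, Vt = np.linalg.svd(H0)
    Ur, Vr = U[:, :r], Vt[:r, :]
    Sr = np.diag(np.sqrt(s[:r]))           # balanced square-root split
    A = np.linalg.inv(Sr) @ Ur.T @ H1 @ Vr.T @ np.linalg.inv(Sr)
    B = (Sr @ Vr)[:, :1]
    C = (Ur @ Sr)[:1, :]
    return A, B, C
```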
Distributed Function Mining for Gene Expression Programming Based on Fast Reduction.
Deng, Song; Yue, Dong; Yang, Le-chan; Fu, Xiong; Feng, Ya-zhou
2016-01-01
For high-dimensional and massive data sets, traditional centralized gene expression programming (GEP) and its improved algorithms lead to increased run-time and decreased prediction accuracy. To solve this problem, this paper proposes a new improved algorithm called distributed function mining for gene expression programming based on fast reduction (DFMGEP-FR). In DFMGEP-FR, fast attribution reduction in binary search algorithms (FAR-BSA) is proposed to quickly find the optimal attribution set, and a function consistency replacement algorithm is given to solve integration of the local function models. Thorough comparative experiments for DFMGEP-FR, centralized GEP and the parallel gene expression programming algorithm based on simulated annealing (parallel GEPSA) are included in this paper. For the waveform, mushroom, connect-4 and musk datasets, the comparative results show that the average time-consumption of DFMGEP-FR drops by 89.09%, 88.85%, 85.79% and 93.06%, respectively, in contrast to centralized GEP, and by 12.5%, 8.42%, 9.62% and 13.75%, respectively, compared with parallel GEPSA. Six well-studied UCI test data sets demonstrate the efficiency and capability of our proposed DFMGEP-FR algorithm for distributed function mining.
NASA Technical Reports Server (NTRS)
Pan, Jianqiang
1992-01-01
Several important problems in the fields of signal processing and model identification are addressed, such as system structure identification, frequency response determination, high-order model reduction, high-resolution frequency analysis, and deconvolution filtering. Each of these topics involves a wide range of applications and has received considerable attention. Using the Fourier-based sinusoidal modulating signals, it is shown that a discrete autoregressive model can be constructed for the least squares identification of continuous systems. Some identification algorithms are presented for frequency response determination of both SISO and MIMO systems using only transient data. Also, several new schemes for model reduction were developed. Based upon the complex sinusoidal modulating signals, a parametric least squares algorithm for high-resolution frequency estimation is proposed. Numerical examples show that the proposed algorithm gives better performance than the usual approaches. Also, the problem of deconvolution and parameter identification of a general noncausal nonminimum-phase ARMA system driven by non-Gaussian stationary random processes was studied. Algorithms are introduced for inverse cumulant estimation, both in the frequency domain via FFT algorithms and in the time domain via the least squares algorithm.
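One building block of parametric frequency estimation can be shown compactly: the least-squares fit of amplitude and phase at a candidate frequency (a generic sketch, not the paper's modulating-signal algorithm).

```python
import numpy as np

def ls_sinusoid_fit(y, t, w):
    """Least-squares fit of y ~ a*cos(w*t) + b*sin(w*t) at a known
    candidate frequency w; returns (a, b).  Scanning w and maximising
    the fitted energy a**2 + b**2 yields a simple parametric frequency
    estimate whose resolution is not limited to the FFT grid."""
    X = np.column_stack([np.cos(w * t), np.sin(w * t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef
```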
Haley, Robert W; Charuvastra, Elizabeth; Shell, William E; Buhner, David M; Marshall, W Wesley; Biggs, Melanie M; Hopkins, Steve C; Wolfe, Gil I; Vernino, Steven
2013-02-01
The authors of prior small studies raised the hypothesis that symptoms in veterans of the 1991 Gulf War, such as chronic diarrhea, dizziness, fatigue, and sexual dysfunction, are due to cholinergic autonomic dysfunction. To perform a confirmatory test of this prestated hypothesis in a larger, representative sample of Gulf War veterans. Nested case-control study. Clinical and Translational Research Center, University of Texas Southwestern Medical Center, Dallas. Representative samples of Gulf War veterans meeting a validated case definition of Gulf War illness with 3 variants (called syndromes 1-3) and a control group, all selected randomly from the US Military Health Survey. Validated domain scales from the Autonomic Symptom Profile questionnaire, the Composite Autonomic Severity Score, and high-frequency heart rate variability from a 24-hour electrocardiogram. The Autonomic Symptom Profile scales were significantly elevated in all 3 syndrome groups (P< .001), primarily due to elevation of the orthostatic intolerance, secretomotor, upper gastrointestinal dysmotility, sleep dysfunction, urinary, and autonomic diarrhea symptom domains. The Composite Autonomic Severity Score was also higher in the 3 syndrome groups (P= .045), especially in syndrome 2, primarily due to a significant reduction in sudomotor function as measured by the Quantitative Sudomotor Axon Reflex Test, most significantly in the foot; the score was intermediate in the ankle and upper leg and was nonsignificant in the arm, indicating a peripheral nerve length-related deficit. The normal increase in high-frequency heart rate variability at night was absent or blunted in all 3 syndrome groups (P< .001). Autonomic symptoms are associated with objective, predominantly cholinergic autonomic deficits in the population of Gulf War veterans.
Ambavane, Apoorva; Lindahl, Bertil; Giannitsis, Evangelos; Roiz, Julie; Mendivil, Joan; Frankenstein, Lutz; Body, Richard; Christ, Michael; Bingisser, Roland; Alquezar, Aitor; Mueller, Christian
2017-01-01
The 1-hour (h) algorithm triages patients presenting with suspected acute myocardial infarction (AMI) to the emergency department (ED) towards "rule-out," "rule-in," or "observation," depending on baseline and 1-h levels of high-sensitivity cardiac troponin (hs-cTn). The economic consequences of applying the accelerated 1-h algorithm are unknown. We performed a post-hoc economic analysis in a large, diagnostic, multicenter study of hs-cTnT using central adjudication of the final diagnosis by two independent cardiologists. Length of stay (LoS), resource utilization (RU), and predicted diagnostic accuracy of the 1-h algorithm were estimated and compared to those achieved by the standard of care (SoC) at ED discharge. Expert opinion was sought to characterize clinical implementation of the 1-h algorithm, which required blood draws at ED presentation and 1 h, after which "rule-in" patients were transferred for coronary angiography, "rule-out" patients underwent outpatient stress testing, and "observation" patients received SoC. Unit costs were for the United Kingdom, Switzerland, and Germany. The sensitivity and specificity of the 1-h algorithm were 87% and 96%, respectively, compared to 69% and 98% for SoC. The mean ED LoS for the 1-h algorithm was 4.3 h versus 6.5 h for SoC, a reduction of 33%. The 1-h algorithm was associated with reductions in RU, driven largely by the shorter LoS in the ED for patients with a diagnosis other than AMI. The estimated total costs per patient were £2,480 for the 1-h algorithm compared to £4,561 for SoC, a reduction of up to 46%. The analysis shows that use of the 1-h algorithm is associated with a reduction in overall AMI diagnostic costs, provided it is carefully implemented in clinical practice. These results need to be prospectively validated in the future.
Yeterian, Julie D; Berke, Danielle S; Litz, Brett T
2017-10-01
Posttraumatic stress disorder (PTSD) from warzone exposure is associated with chronic and disabling social and occupational problems. However, functional impairment is rarely assessed or targeted directly in PTSD treatments, which instead focus on symptom reduction. Trauma-related contributors to diminished functioning, including guilt, shame, and anger resulting from morally compromising or loss-based war experiences, are also underemphasized. The goal of this clinical trial is to fill a substantial gap in the treatment of military-related PTSD by testing a modified Adaptive Disclosure (AD) therapy, focused on improving psychosocial functioning, for war-related PTSD stemming from moral injury and traumatic loss. This paper describes the rationale and design of a multi-site randomized controlled trial comparing AD to Present-Centered Therapy (PCT). We will recruit 186 veterans with PTSD, who will be assessed at baseline, post-treatment, and 3- and 6-months post-treatment. Primary outcomes are functional changes (i.e., functioning/disability and quality of life). Secondary outcomes are mental health variables (i.e., PTSD, depression, guilt, shame). We hypothesize that veterans treated with AD will experience greater improvements in all outcomes compared to those treated with PCT. This trial will advance knowledge in rehabilitation research by testing the first therapy specifically designed to address psychosocial functioning among veterans with war-related PTSD. The results may improve the quality of mental health care for veterans by offering an ecologically sound treatment for experiences that are uniquely impactful for war veterans. Published by Elsevier Inc.
Mandić-Gajić, Gordana
2016-07-01
War veterans with chronic post-traumatic stress disorder (PTSD) have poorer family and parenting functioning, but little research has focused on these impairments. This paper presents how a series of drawings and the group art therapy process helped a 33-year-old male PTSD war veteran bridge psychological barriers to engagement with his child. After two years of deployment he returned home and suffered mostly from PTSD numbness and avoidance symptoms. The veteran had family readjustment difficulties and felt guilty for being detached from his 3-year-old son. He underwent integrative treatment in the Day Unit Program. The drawing series was produced by free association. Clinical observations and group discussions were recorded in the group art therapy protocols. The patient received gratification and support from the group members for his illustration of popular cartoon heroes, and decided to draw Mickey Mouse at home. At the next session he shared his satisfaction at bridging the gap between him and his son, having done the same drawings with his son at home. Beck's Depression Inventory (BDI) was used for self-rating of depression, and a reduction of the BDI score from 18 to 6 during the treatment course was recorded. The series of drawings illustrated a shift from the war-related past toward the current family life of the veteran. Group art therapy gave him gratification and support with hope and a sense of belonging, thus facilitating his parenting readjustment.
Yook, Sunhyun; Nam, Kyoung Won; Kim, Heepyung; Hong, Sung Hwa; Jang, Dong Pyo; Kim, In Young
2015-04-01
In order to provide more consistent sound intelligibility for the hearing-impaired person, regardless of environment, it is necessary to adjust the setting of the hearing-support (HS) device to accommodate various environmental circumstances. In this study, a fully automatic HS device management algorithm that can adapt to various environmental situations is proposed; it is composed of a listening-situation classifier, a noise-type classifier, an adaptive noise-reduction algorithm, and a management algorithm that can selectively turn on/off one or more of the three basic algorithms (beamforming, noise reduction, and feedback cancellation) and can also adjust internal gains and parameters of the wide-dynamic-range compression (WDRC) and noise-reduction (NR) algorithms in accordance with variations in environmental situations. Experimental results demonstrated that the implemented algorithms can classify both the listening situation and the ambient noise type with high accuracies (92.8-96.4% and 90.9-99.4%, respectively), and the gains and parameters of the WDRC and NR algorithms were successfully adjusted according to variations in the environmental situation. The average values of signal-to-noise ratio (SNR), frequency-weighted segmental SNR, Perceptual Evaluation of Speech Quality, and mean opinion test scores of 10 normal-hearing volunteers for the adaptive multiband spectral subtraction (MBSS) algorithm were improved by 1.74 dB, 2.11 dB, 0.49, and 0.68, respectively, compared to the conventional fixed-parameter MBSS algorithm. These results indicate that the proposed environment-adaptive management algorithm can be applied to HS devices to improve sound intelligibility for hearing-impaired individuals in various acoustic environments. Copyright © 2014 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
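As a rough illustration of such a management layer, the sketch below maps a classified listening situation to on/off states of the basic algorithms plus an NR setting. The situation names, policy table, and gain values are invented for the example, not taken from the study.

```python
# Illustrative dispatcher for the environment-adaptive management idea above.
# Situation names, the on/off policy, and gain values are assumptions.

POLICY = {
    # situation: (beamforming, noise_reduction, feedback_cancellation)
    "quiet_conversation": (False, False, True),
    "noisy_street":       (True,  True,  True),
    "music":              (False, False, False),
}

def manage(situation, noise_type):
    """Return module switches and an NR gain for the classified context."""
    beam, nr, fbc = POLICY[situation]
    # NR aggressiveness adapted to the classified noise type (assumed values)
    nr_gain_db = {"babble": 8, "traffic": 12, "none": 0}[noise_type]
    return {"beamforming": beam, "noise_reduction": nr,
            "feedback_cancellation": fbc, "nr_gain_db": nr_gain_db}

settings = manage("noisy_street", "traffic")  # all modules on, 12 dB NR
```

In the actual system the classifier outputs would drive this dispatch continuously, and WDRC parameters would be adjusted alongside the NR gain.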
Brain-Inspired Constructive Learning Algorithms with Evolutionally Additive Nonlinear Neurons
NASA Astrophysics Data System (ADS)
Fang, Le-Heng; Lin, Wei; Luo, Qiang
In this article, inspired in part by physiological evidence of the brain's growth and development, we develop a new type of constructive learning algorithm with evolutionally additive nonlinear neurons. The new algorithm has a remarkable ability to perform effective regression and accurate classification. In particular, it is able to sustain a reduction of the loss function even when the dynamics of the trained network are bogged down near local minima. The algorithm augments the neural network by adding only a few connections as well as neurons whose activation functions are nonlinear, nonmonotonic, and self-adapted to the dynamics of the loss function. We analytically demonstrate the reduction dynamics of the algorithm for different problems, and further modify the algorithm to obtain an improved generalization capability for the augmented neural networks. Finally, by comparing with classical algorithms and architectures for neural network construction, we show that our constructive learning algorithm and its modified versions have better performance, such as faster training speed and smaller network size, on several representative benchmark datasets, including the MNIST dataset of handwritten digits.
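A minimal sketch of the constructive idea, assuming a plain one-hidden-layer tanh regressor grown by one neuron whenever training stalls. The paper's additive neurons are nonlinear, nonmonotonic, and self-adapted; plain tanh units are used here only to keep the example short.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression target
X = np.linspace(-1, 1, 64).reshape(-1, 1)
y = np.sin(3 * X)

def forward(X, W1, b1, W2):
    H = np.tanh(X @ W1 + b1)            # hidden activations
    return H @ W2, H

def train(W1, b1, W2, steps=400, lr=0.05):
    """Gradient descent on MSE; mutates the weights in place."""
    for _ in range(steps):
        pred, H = forward(X, W1, b1, W2)
        err = pred - y
        W2 -= lr * H.T @ err / len(X)
        dH = (err @ W2.T) * (1 - H**2)  # backprop through tanh
        W1 -= lr * X.T @ dH / len(X)
        b1 -= lr * dH.mean(axis=0)
    return np.mean((forward(X, W1, b1, W2)[0] - y) ** 2)

# Start with 2 hidden neurons, then constructively add neurons whenever
# training stalls (a crude stand-in for the paper's plateau criterion).
W1 = rng.normal(0, 0.5, (1, 2)); b1 = np.zeros(2); W2 = rng.normal(0, 0.5, (2, 1))
loss = train(W1, b1, W2)
for _ in range(4):                      # grow by at most 4 extra neurons
    W1 = np.hstack([W1, rng.normal(0, 0.5, (1, 1))])
    b1 = np.append(b1, 0.0)
    W2 = np.vstack([W2, rng.normal(0, 0.5, (1, 1))])
    new_loss = train(W1, b1, W2)
    if loss - new_loss < 1e-4:          # no useful reduction: stop growing
        break
    loss = new_loss
```

The growth step adds only one neuron and its few connections at a time, mirroring the paper's emphasis on small network augmentations.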
2013-09-01
sequence dataset. All procedures were performed by personnel in the IIMT UT Southwestern Genomics and Microarray Core using standard protocols. ... sequencing run, samples were demultiplexed using standard algorithms in the Genomics and Microarray Core and processed into individual sample Illumina single ... Sequencing (RNA-Seq), using Illumina's multiplexing mRNA-Seq to generate full sequence libraries from the poly-A tailed RNA to a read depth of 30
2015-08-01
Congress concerning requirements for the National Defense Stockpile (NDS) of strategic and critical non-fuel materials. RAMF-SM, which was ... critical non-fuel materials. The NDS was established in the World War II era and has been managed by the Department of Defense (DOD) since 1988. By ... Department of the Interior. An alternative algorithm is used for materials with intensive defense demands.
Research on numerical method for multiple pollution source discharge and optimal reduction program
NASA Astrophysics Data System (ADS)
Li, Mingchang; Dai, Mingxin; Zhou, Bin; Zou, Bin
2018-03-01
In this paper, an optimal method for a pollutant discharge reduction program is proposed based on a nonlinear optimization technique, the genetic algorithm. The four main rivers in Jiangsu Province, China, are selected for reducing environmental pollution in the nearshore district. Dissolved inorganic nitrogen (DIN) is studied as the only pollutant. The environmental status and water quality standard of the nearshore district are used to determine the reductions in the discharge of multiple river pollutants. The results of the reduction program form a basis for marine environmental management.
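A toy version of such a genetic-algorithm search might look as follows; the river loads, response coefficients, and DIN standard are invented for illustration, and a simple penalty term enforces the nearshore standard while preferring the smallest total reduction.

```python
import random

random.seed(42)

# Hypothetical GA searching for DIN discharge reductions r_i (fraction cut
# at each of 4 rivers) that meet a nearshore concentration standard at
# minimal total reduction.  All numbers below are invented for illustration.

LOADS = [120.0, 80.0, 60.0, 40.0]          # current DIN loads (t/yr), assumed
RESPONSE = [0.004, 0.003, 0.002, 0.002]    # mg/L per t/yr at the check point
STANDARD = 0.5                             # DIN standard (mg/L), assumed

def concentration(r):
    return sum(c * L * (1 - ri) for c, L, ri in zip(RESPONSE, LOADS, r))

def fitness(r):
    # penalize violating the standard, otherwise prefer smaller total cuts
    penalty = max(0.0, concentration(r) - STANDARD) * 1e3
    return sum(r) + penalty

def evolve(pop_size=40, gens=200):
    pop = [[random.random() for _ in range(4)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]     # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 4)   # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(4)        # point mutation, clipped to [0, 1]
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.05)))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
```

The penalty weight makes any violation of the standard far more expensive than an extra unit of reduction, so the best individual is effectively the cheapest feasible program.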
Zakirova, Zuchra; Tweed, Miles; Crynen, Gogce; Reed, Jon; Abdullah, Laila; Nissanka, Nadee; Mullan, Myles; Mullan, Michael J; Mathura, Venkatarajan; Crawford, Fiona; Ait-Ghezala, Ghania
2015-01-01
Gulf War Illness (GWI) is a chronic multisymptom illness with a central nervous system component such as memory deficits, neurological, and musculoskeletal problems. There are ample data that demonstrate that exposure to Gulf War (GW) agents, such as pyridostigmine bromide (PB) and pesticides such as permethrin (PER), were key contributors to the etiology of GWI post deployment to the Persian GW. In the current study, we examined the consequences of acute (10 days) exposure to PB and PER in C57BL6 mice. Learning and memory tests were performed at 18 days and at 5 months post-exposure. We investigated the relationship between the cognitive phenotype and neuropathological changes at short and long-term time points post-exposure. No cognitive deficits were observed at the short-term time point, and only minor neuropathological changes were detected. However, cognitive deficits emerged at the later time point and were associated with increased astrogliosis and reduction of synaptophysin staining in the hippocampi and cerebral cortices of exposed mice, 5 months post exposure. In summary, our findings in this mouse model of GW agent exposure are consistent with some GWI symptom manifestations, including delayed onset of symptoms and CNS disturbances observed in GWI veterans.
Angell-Andersen, E; Tretli, S; Bjerknes, R; Forsén, T; Sørensen, T I A; Eriksson, J G; Räsänen, L; Grotmol, T
2004-01-01
The purpose of the study was to examine the height and weight of Nordic children during the years around World War II (WWII), and compare them with the nutritional situation during the same period. Information on food consumption and energy intake was obtained from the literature. Anthropometric data were collected from the Nordic capitals and cover the period from 1930 to 1960 for ages 7-13 years. The greatest energy restriction took place in Norway (20%), followed by Finland (17%), while Sweden and Denmark had a restriction of 4-7% compared to pre-war levels. The most pronounced effect of WWII on height and weight is seen in Norwegian children, while some effect is observed for the youngest children in Finland. Little or no effect is seen in Sweden and Denmark. The Nordic children were affected by WWII in terms of a transient reduction in temporal trends in height and weight, and the magnitude of this decrease was associated with the severity of the energy restriction prevailing in the respective country during the war. These findings warrant further studies of the chronic diseases associated with height and weight for cohorts that were in their growth periods during WWII. Copyright 2004 Taylor and Francis Ltd.
Impact of the Gulf war on congenital heart diseases in Kuwait.
Abushaban, L; Al-Hay, A; Uthaman, B; Salama, A; Selvan, J
2004-02-01
There has been concern over the increase in the number of babies born with congenital heart diseases (CHD) in Kuwait after the Gulf War. We retrospectively evaluated the number of Kuwaiti infants who were diagnosed with CHD within the first year of life. The comparison was made between those presenting from January 1986 to December 1989 (preinvasion) and those presenting after the liberation of Kuwait (from January 1992 to December 2000). Case numbers were expressed per 10,000 live births in each year. There were 2704 cases (326 before the invasion and 2378 after liberation). The mean annual incidence of CHD was 39.5 and 103.4 (per 10,000 live births) before and after the Gulf War, respectively (P<0.001). There was an increase in the number of babies with CHD during the immediate 3 years postliberation, with a relative reduction in the trend from 1995 to 2000 in some types of CHD. In our series, there was an increased incidence of CHD almost immediately following the end of the Gulf War period. The cause of this increase remains relatively obscure. Environmental pollution may be a contributing factor; others, such as possible psychological trauma, remain subject to speculation.
Global Famine after a Regional Nuclear War
NASA Astrophysics Data System (ADS)
Robock, A.; Xia, L.; Mills, M. J.; Stenke, A.; Helfand, I.
2014-12-01
A regional nuclear war between India and Pakistan, using 100 15-kt atomic bombs, could inject 5 Tg of soot into the upper troposphere from fires started in urban and industrial areas. Simulations by three different general circulation models, GISS ModelE, WACCM, and SOCOL, all agree that global surface temperature would decrease by 1 to 2°C for 5 to 10 years, and have major impacts on precipitation and solar radiation reaching Earth's surface. Local summer climate changes over land would be larger. Using the DSSAT crop simulation model forced by these three global climate model simulations, we investigate the impacts on agricultural production in China, the largest grain producer in the world. In the first year after the regional nuclear war, a cooler, drier, and darker environment would reduce annual rice production by 23 Mt (24%), maize production by 41 Mt (23%), and wheat production by 23 Mt (50%). This reduction of food availability would continue, with gradually decreasing amplitude, for more than a decade. Results from simulations in other major grain producing regions produce similar results. Thus a nuclear war using much less than 1% of the current global arsenal could produce a global food crisis and put a billion people at risk of famine.
Social Class and Elite University Education: A Bourdieusian Analysis
ERIC Educational Resources Information Center
Martin, Nathan Douglas
2010-01-01
The United States experienced a tremendous expansion of higher education after the Second World War. However, this expansion has not led to a substantial reduction to class inequalities at elite universities, where the admissions process is growing even more selective. In his classic studies of French education and society, Pierre Bourdieu…
The Efficacy of Foreign Assistance in Counter Narcotics
2013-03-01
Crop Reduction Components ... Table 6. Colombian Coffee Prices in U.S. ... Colombia was initiated by the Clinton administration to assist the Colombian government with counter-narcotics, governing capacity, and economic ... Assistance, Sustainable Development, and the War on Terrorism (Washington, D.C.: Environmental Law Institute, 2002) 8; Jean-Paul Azam and Veronique
Noël, Peter B; Engels, Stephan; Köhler, Thomas; Muenzel, Daniela; Franz, Daniela; Rasper, Michael; Rummeny, Ernst J; Dobritz, Martin; Fingerle, Alexander A
2018-01-01
Background The explosive growth of computed tomography (CT) has led to a growing public health concern about patient and population radiation dose. A recently introduced technique for dose reduction, which can be combined with tube-current modulation, over-beam reduction, and organ-specific dose reduction, is iterative reconstruction (IR). Purpose To evaluate the quality, at different radiation dose levels, of three reconstruction algorithms for diagnostics of patients with proven liver metastases under tumor follow-up. Material and Methods A total of 40 thorax-abdomen-pelvis CT examinations acquired from 20 patients in a tumor follow-up were included. All patients were imaged using the standard-dose and a specific low-dose CT protocol. Reconstructed slices were generated by using three different reconstruction algorithms: a classical filtered back projection (FBP); a first-generation iterative noise-reduction algorithm (iDose4); and a next-generation model-based IR algorithm (IMR). Results The overall detection of liver lesions tended to be higher with the IMR algorithm than with FBP or iDose4. The IMR dataset at standard dose yielded the highest overall detectability, while the low-dose FBP dataset showed the lowest detectability. For the low-dose protocols, significantly improved detectability of liver lesions can be reported for IMR compared to FBP or iDose4 (P = 0.01). The radiation dose decreased by an approximate factor of 5 between the standard-dose and the low-dose protocol. Conclusion The latest generation of IR algorithms significantly improved the diagnostic image quality and provided virtually noise-free images for ultra-low-dose CT imaging.
System identification and model reduction using modulating function techniques
NASA Technical Reports Server (NTRS)
Shen, Yan
1993-01-01
Weighted least squares (WLS) and adaptive weighted least squares (AWLS) algorithms are introduced for continuous-time system identification using Fourier-type modulating function techniques. Two stochastic signal models are examined using the mean square properties of the stochastic calculus: an equation error signal model with white noise residuals, and a more realistic white measurement noise signal model. The covariance matrices in each model are shown to be banded and sparse, and a joint likelihood cost function is developed which links the real and imaginary parts of the modulated quantities. The superior performance of the above algorithms is demonstrated by comparing them with the LS/MFT and the popular prediction error method (PEM) through 200 Monte Carlo simulations. A model reduction problem is formulated with the AWLS/MFT algorithm, and comparisons are made via six examples with a variety of model reduction techniques, including the well-known balanced realization method. Here the AWLS/MFT algorithm manifests higher accuracy in almost all cases, and exhibits its unique flexibility and versatility. Armed with this model reduction, the AWLS/MFT algorithm is extended to MIMO transfer function system identification problems. The impact of discrepancies in bandwidths and gains among subsystems is explored through five examples. Finally, as a comprehensive application, the stability derivatives of the longitudinal and lateral dynamics of an F-18 aircraft are identified using physical flight data provided by NASA. A pole-constrained SIMO and MIMO AWLS/MFT algorithm is devised and analyzed. Monte Carlo simulations illustrate its high noise-rejecting properties. Utilizing the flight data, comparisons among different MFT algorithms are tabulated, and the AWLS is found to be strongly favored in almost all facets.
A hybrid algorithm for speckle noise reduction of ultrasound images.
Singh, Karamjeet; Ranade, Sukhjeet Kaur; Singh, Chandan
2017-09-01
Medical images are contaminated by multiplicative speckle noise, which significantly reduces the contrast of ultrasound images and creates a negative effect on various image interpretation tasks. In this paper, we propose a hybrid denoising approach which combines both local and nonlocal information in an efficient manner. The proposed hybrid algorithm consists of three stages: in the first stage, local statistics in the form of a guided filter are used to reduce the effect of speckle noise initially. Then, an improved speckle reducing bilateral filter (SRBF) is developed to further reduce the speckle noise in the medical images. Finally, to reconstruct the diffused edges we use an efficient post-processing technique which jointly considers the advantages of both the bilateral and the nonlocal means (NLM) filter for the attenuation of speckle noise. The performance of the proposed hybrid algorithm is evaluated on synthetic, simulated, and real ultrasound images. The experiments conducted on various test images demonstrate that our proposed hybrid approach outperforms various traditional speckle reduction approaches, including the recently proposed NLM and optimized Bayesian-based NLM. The results of various quantitative and qualitative measures, and visual inspection of denoised synthetic and real ultrasound images, demonstrate that the proposed hybrid algorithm has strong denoising capability and is able to preserve fine image details, such as the edge of a lesion, better than previously developed methods for speckle noise reduction. The denoising and edge-preserving capability of the hybrid algorithm is far better than existing traditional and recently proposed speckle reduction (SR) filters. The success of the proposed algorithm would help lay the foundation for hybrid algorithms for the denoising of ultrasound images. Copyright © 2017 Elsevier B.V. All rights reserved.
Biased normalized cuts for target detection in hyperspectral imagery
NASA Astrophysics Data System (ADS)
Zhang, Xuewen; Dorado-Munoz, Leidy P.; Messinger, David W.; Cahill, Nathan D.
2016-05-01
The Biased Normalized Cuts (BNC) algorithm is a useful technique for detecting targets or objects in RGB imagery. In this paper, we propose modifying BNC for the purpose of target detection in hyperspectral imagery. As opposed to other target detection algorithms that typically encode target information prior to dimensionality reduction, our proposed algorithm encodes target information after dimensionality reduction, enabling a user to detect different targets in interactive mode. To assess the proposed BNC algorithm, we utilize hyperspectral imagery (HSI) from the SHARE 2012 data campaign, and we explore the relationship between the number and the position of expert-provided target labels and the precision/recall of the remaining targets in the scene.
Multitarget mixture reduction algorithm with incorporated target existence recursions
NASA Astrophysics Data System (ADS)
Ristic, Branko; Arulampalam, Sanjeev
2000-07-01
The paper derives a deferred logic data association algorithm based on the mixture reduction approach originally due to Salmond [SPIE vol. 1305, 1990]. The novelty of the proposed algorithm is that it provides recursive formulae for both data association and target existence (confidence) estimation, thus allowing automatic track initiation and termination. The track initiation performance of the proposed filter is investigated by computer simulations. It is observed that at moderately high levels of clutter density the proposed filter initiates tracks more reliably than the corresponding PDA filter. An extension of the proposed filter to the multi-target case is also presented. In addition, the paper compares the track maintenance performance of the MR algorithm with an MHT implementation.
Denoising and 4D visualization of OCT images
Gargesha, Madhusudhana; Jenkins, Michael W.; Rollins, Andrew M.; Wilson, David L.
2009-01-01
We are using Optical Coherence Tomography (OCT) to image structure and function of the developing embryonic heart in avian models. Fast OCT imaging produces very large 3D (2D + time) and 4D (3D volumes + time) data sets, which greatly challenge one's ability to visualize results. Noise in OCT images poses additional challenges. We created an algorithm with a quick, data set specific optimization for reduction of both shot and speckle noise and applied it to 3D visualization and image segmentation in OCT. When compared to baseline algorithms (median, Wiener, orthogonal wavelet, basic non-orthogonal wavelet), a panel of experts judged the new algorithm to give much improved volume renderings concerning both noise and 3D visualization. Specifically, the algorithm provided a better visualization of the myocardial and endocardial surfaces, and the interaction of the embryonic heart tube with surrounding tissue. Quantitative evaluation using an image quality figure of merit also indicated superiority of the new algorithm. Noise reduction aided semi-automatic 2D image segmentation, as quantitatively evaluated using a contour distance measure with respect to an expert segmented contour. In conclusion, the noise reduction algorithm should be quite useful for visualization and quantitative measurements (e.g., heart volume, stroke volume, contraction velocity, etc.) in OCT embryo images. With its semi-automatic, data set specific optimization, we believe that the algorithm can be applied to OCT images from other applications. PMID:18679509
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dou, Xin; Kim, Yusung, E-mail: yusung-kim@uiowa.edu; Bayouth, John E.
2013-04-01
To develop an optimal field-splitting algorithm of minimal complexity and verify the algorithm using head-and-neck (H and N) and female pelvic intensity-modulated radiotherapy (IMRT) cases. An optimal field-splitting algorithm was developed in which a large intensity map (IM) was split into multiple sub-IMs (≥2). The algorithm reduced the total complexity by minimizing the monitor units (MU) delivered and the segment number of each sub-IM. The algorithm was verified through comparison studies with the algorithm as used in a commercial treatment planning system. Seven IMRT, H and N, and female pelvic cancer cases (54 IMs) were analyzed by MU, segment numbers, and dose distributions. The optimal field-splitting algorithm was found to reduce both total MU and the total number of segments. We found on average a 7.9 ± 11.8% and 9.6 ± 18.2% reduction in MU and segment numbers for H and N IMRT cases, with an 11.9 ± 17.4% and 11.1 ± 13.7% reduction for female pelvic cases. The overall percent (absolute) reductions in the numbers of MU and segments were found to be on average −9.7 ± 14.6% (−15 ± 25 MU) and −10.3 ± 16.3% (−3 ± 5), respectively. In addition, all dose distributions from the optimal field-splitting method showed improved dose distributions. The optimal field-splitting algorithm shows considerable improvements in both total MU and total segment number. The algorithm is expected to be beneficial for the radiotherapy treatment of large-field IMRT.
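As a toy illustration of field splitting, the sketch below cuts a wide intensity map at the column whose summed intensity is smallest, a crude stand-in for the MU/segment complexity actually minimized by the algorithm described above.

```python
import numpy as np

# Toy sketch of field splitting: cut a wide intensity map (IM) into two
# sub-IMs at the column with the smallest summed intensity, a crude proxy
# for the MU/segment cost minimized by the actual algorithm.

def split_im(im, max_width):
    """Split an IM into sub-IMs no wider than max_width columns."""
    ncols = im.shape[1]
    if ncols <= max_width:
        return [im]
    col_cost = im.sum(axis=0)
    # split positions that leave both halves deliverable
    candidates = range(ncols - max_width, max_width + 1)
    split = min(candidates, key=lambda s: col_cost[s])
    return [im[:, :split], im[:, split:]]

im = np.array([[3, 1, 0, 2, 4],
               [2, 1, 0, 3, 5]])
subs = split_im(im, max_width=4)   # cuts at the zero-intensity column
```

A real field splitter would also recurse for more than two sub-IMs and account for segment sequencing, which this sketch omits.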
Sensitivity Analysis for Probabilistic Neural Network Structure Reduction.
Kowalski, Piotr A; Kusy, Maciej
2018-05-01
In this paper, we propose the use of local sensitivity analysis (LSA) for the structure simplification of the probabilistic neural network (PNN). Three algorithms are introduced. The first algorithm applies LSA to the PNN input layer reduction by selecting significant features of input patterns. The second algorithm utilizes LSA to remove redundant pattern neurons of the network. The third algorithm combines the first two and shows how they can work together. A PNN with a product kernel estimator is used, where each multiplicand computes a one-dimensional Cauchy function. Therefore, the smoothing parameter is calculated separately for each dimension by means of the plug-in method. The classification qualities of the reduced and full structure PNN are compared. Furthermore, we evaluate the performance of PNN for which global sensitivity analysis (GSA) and the common reduction methods are applied, both in the input layer and the pattern layer. The models are tested on the classification problems of eight repository data sets. A 10-fold cross validation procedure is used to determine the prediction ability of the networks. Based on the obtained results, it is shown that LSA can be used as an alternative PNN reduction approach.
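The product Cauchy kernel PNN described above can be sketched as follows; the per-dimension smoothing parameters are fixed constants here rather than plug-in estimates, and the tiny data set is invented for illustration.

```python
import numpy as np

# Minimal sketch of a PNN with a product Cauchy kernel.  The smoothing
# parameters h are fixed constants, not plug-in estimates as in the paper.

def cauchy_product_kernel(x, t, h):
    # product over dimensions of one-dimensional Cauchy functions
    return np.prod(1.0 / (1.0 + ((x - t) / h) ** 2), axis=-1)

def pnn_predict(x, patterns, labels, h):
    # summation layer: average kernel response per class, then pick the max
    scores = {c: cauchy_product_kernel(x, patterns[labels == c], h).mean()
              for c in np.unique(labels)}
    return max(scores, key=scores.get)

# Tiny two-class example (pattern layer holds the training points directly)
patterns = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
labels = np.array([0, 0, 1, 1])
h = np.array([0.3, 0.3])
pred = pnn_predict(np.array([0.1, 0.05]), patterns, labels, h)  # → 0
```

Pattern-neuron reduction in the paper's sense would amount to deleting rows of `patterns` whose removal barely changes the summation-layer scores.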
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gongzhang, R.; Xiao, B.; Lardner, T.
2014-02-18
This paper presents a robust frequency diversity based algorithm for clutter reduction in ultrasonic A-scan waveforms. The performance of conventional spectral-temporal techniques like Split Spectrum Processing (SSP) is highly dependent on the parameter selection, especially when the signal to noise ratio (SNR) is low. Although spatial beamforming offers noise reduction with less sensitivity to parameter variation, phased array techniques are not always available. The proposed algorithm first selects an ascending series of frequency bands. A signal is reconstructed for each selected band, in which a defect is present when all frequency components are of uniform sign. Combining all reconstructed signals through averaging gives a probability profile of potential defect positions. To facilitate data collection and validate the proposed algorithm, Full Matrix Capture is applied on the austenitic steel and high nickel alloy (HNA) samples with 5 MHz transducer arrays. When processing A-scan signals with unrefined parameters, the proposed algorithm enhances SNR by 20 dB for both samples and consequently, defects are more visible in B-scan images created from the large number of A-scan traces. Importantly, the proposed algorithm is considered robust, while SSP is shown to fail on the austenitic steel data and achieves less SNR enhancement on the HNA data.
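The band-reconstruction and sign-uniformity idea can be sketched as below; the band edges, sampling rate, and synthetic echo are assumptions for the example, not the parameters used in the paper.

```python
import numpy as np

# Rough sketch of the frequency-diversity idea: reconstruct the A-scan from
# several ascending frequency bands and keep samples where every band-limited
# reconstruction agrees in sign.  Band edges and signals are illustrative.

def band_reconstructions(signal, fs, bands):
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    recons = []
    for lo, hi in bands:
        masked = np.where((freqs >= lo) & (freqs < hi), spectrum, 0)
        recons.append(np.fft.irfft(masked, n=len(signal)))
    return np.array(recons)

def defect_profile(signal, fs, bands):
    r = band_reconstructions(signal, fs, bands)
    # uniform sign across all bands marks a plausible defect sample
    uniform = np.all(r > 0, axis=0) | np.all(r < 0, axis=0)
    return np.where(uniform, np.abs(r).mean(axis=0), 0.0)

# Synthetic A-scan: a broadband echo near sample 100 buried in noise
fs = 50e6
t = np.arange(400) / fs
rng = np.random.default_rng(1)
echo = np.exp(-((t - 100 / fs) * fs / 10) ** 2) * np.sin(2 * np.pi * 5e6 * t)
scan = echo + 0.2 * rng.standard_normal(len(t))
profile = defect_profile(scan, fs, [(2e6, 4e6), (4e6, 6e6), (6e6, 8e6)])
```

Clutter rarely keeps a uniform sign across independently filtered bands, which is why the averaged profile suppresses it while preserving coherent echoes.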
A new algorithm for five-hole probe calibration, data reduction, and uncertainty analysis
NASA Technical Reports Server (NTRS)
Reichert, Bruce A.; Wendt, Bruce J.
1994-01-01
A new algorithm for five-hole probe calibration and data reduction using a non-nulling method is developed. The significant features of the algorithm are: (1) two components of the unit vector in the flow direction replace pitch and yaw angles as flow direction variables; and (2) symmetry rules are developed that greatly simplify Taylor's series representations of the calibration data. In data reduction, four pressure coefficients allow total pressure, static pressure, and flow direction to be calculated directly. The new algorithm's simplicity permits an analytical treatment of the propagation of uncertainty in five-hole probe measurement. The objectives of the uncertainty analysis are to quantify uncertainty of five-hole results (e.g., total pressure, static pressure, and flow direction) and determine the dependence of the result uncertainty on the uncertainty of all underlying experimental and calibration measurands. This study outlines a general procedure that other researchers may use to determine five-hole probe result uncertainty and provides guidance to improve measurement technique. The new algorithm is applied to calibrate and reduce data from a rake of five-hole probes. Here, ten individual probes are mounted on a single probe shaft and used simultaneously. Use of this probe is made practical by the simplicity afforded by this algorithm.
Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan
2009-02-01
The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithms. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited for problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of subjective tests were processed using analysis of variance to assess statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.
Cleary, Robert K; Pomerantz, Richard A; Lampman, Richard M
2006-08-01
This study was designed to develop treatment algorithms for colon, rectal, and anal injuries based on a review of the relevant literature. Information was obtained through a MEDLINE (www.ncbi.nlm.nih.gov/entrez/query.fcgi) search, and additional references were obtained through cross-referencing key articles cited in these papers. A total of 203 articles were considered relevant. The management of penetrating and blunt colon, rectal, and anal injuries has evolved during the past 150 years. Since the World War II mandate to divert penetrating colon injuries, primary repair or resection and anastomosis have found an increasing role in patients with nondestructive injuries. A critical review of recent literature better defines the role of primary repair and fecal diversion for these injuries and allows for better algorithms for the management of these injuries.
Dose reduction potential of iterative reconstruction algorithms in neck CTA-a simulation study.
Ellmann, Stephan; Kammerer, Ferdinand; Allmendinger, Thomas; Brand, Michael; Janka, Rolf; Hammon, Matthias; Lell, Michael M; Uder, Michael; Kramer, Manuel
2016-10-01
This study aimed to determine the degree of radiation dose reduction in neck CT angiography (CTA) achievable with Sinogram-affirmed iterative reconstruction (SAFIRE) algorithms. 10 consecutive patients scheduled for neck CTA were included in this study. CTA images of the external carotid arteries either were reconstructed with filtered back projection (FBP) at full radiation dose level or underwent simulated dose reduction by proprietary reconstruction software. The dose-reduced images were reconstructed using either SAFIRE 3 or SAFIRE 5 and compared with full-dose FBP images in terms of vessel definition. 5 observers performed a total of 3000 pairwise comparisons. SAFIRE allowed substantial radiation dose reductions in neck CTA while maintaining vessel definition. The possible levels of radiation dose reduction ranged from approximately 34 to approximately 90% and depended on the SAFIRE algorithm strength and the size of the vessel of interest. In general, larger vessels permitted higher degrees of radiation dose reduction, especially with higher SAFIRE strength levels. With small vessels, the superiority of SAFIRE 5 over SAFIRE 3 was lost. Neck CTA can be performed with substantially less radiation dose when SAFIRE is applied. The exact degree of radiation dose reduction should be adapted to the clinical question, in particular to the smallest vessel needing excellent definition.
Ogawa, Takahiro; Haseyama, Miki
2013-03-01
A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme of Fourier transform magnitudes is presented in this brief. In our method, Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. In the second approach, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
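The error-reduction iteration, alternating a Fourier-magnitude constraint with a spatial-domain constraint, can be sketched as follows. This is a minimal sketch: the spatial constraint here simply re-imposes the known pixels, and the paper's patch selection and magnitude-estimation scheme are not reproduced.

```python
import numpy as np

def error_reduction(magnitude, known, support, n_iter=200):
    """ER phase retrieval sketch for missing-area reconstruction.

    magnitude : assumed-known Fourier-transform magnitude of the patch
    known     : image with valid values on `support` (missing areas arbitrary)
    support   : boolean mask, True where pixels are known
    """
    g = known * support                 # start from known pixels, zeros in holes
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = magnitude * np.exp(1j * np.angle(G))   # keep phase, impose magnitude
        g = np.real(np.fft.ifft2(G))
        g = np.where(support, known, g)            # re-impose known pixels
    return g
```

Fienup's classic result is that the Fourier-domain residual of this iteration is non-increasing, which is what "monitoring errors converged in the ER algorithm" in the abstract relies on for patch selection.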
1994-06-15
that this meeting was noteworthy in that this "thaw" in the cold war was extended to our ocean optics community in a very productive way. It was indeed...light is attenuated significantly in productive waters over fairly short distances. The Doss-Wells instrument which used the Zaneveld-Wells algorithm...these global questions include agriculture, mariculture and fisheries production , deforestation, desertification, ozone depletion, air and water
Data Centric Sensor Stream Reduction for Real-Time Applications in Wireless Sensor Networks
Aquino, Andre Luiz Lins; Nakamura, Eduardo Freire
2009-01-01
This work presents a data-centric strategy to meet deadlines in soft real-time applications in wireless sensor networks. This strategy considers three main aspects: (i) the design of real-time applications to obtain the minimum deadlines; (ii) an analytic model to estimate the ideal sample size used by data-reduction algorithms; and (iii) two data-centric stream-based sampling algorithms to perform data reduction whenever necessary. Simulation results show that our data-centric strategies meet deadlines without losing data representativeness. PMID:22303145
Improved classification accuracy by feature extraction using genetic algorithms
NASA Astrophysics Data System (ADS)
Patriarche, Julia; Manduca, Armando; Erickson, Bradley J.
2003-05-01
A feature extraction algorithm has been developed for the purposes of improving classification accuracy. The algorithm uses a genetic algorithm / hill-climber hybrid to generate a set of linearly recombined features, which may be of reduced dimensionality compared with the original set. The genetic algorithm performs the global exploration, and a hill climber explores local neighborhoods. Hybridizing the genetic algorithm with a hill climber improves both the rate of convergence, and the final overall cost function value; it also reduces the sensitivity of the genetic algorithm to parameter selection. The genetic algorithm includes the operators: crossover, mutation, and deletion / reactivation - the last of these effects dimensionality reduction. The feature extractor is supervised, and is capable of deriving a separate feature space for each tissue (which are reintegrated during classification). A non-anatomical digital phantom was developed as a gold standard for testing purposes. In tests with the phantom, and with images of multiple sclerosis patients, classification with feature extractor derived features yielded lower error rates than using standard pulse sequences, and with features derived using principal components analysis. Using the multiple sclerosis patient data, the algorithm resulted in a mean 31% reduction in classification error of pure tissues.
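A GA/hill-climber hybrid of the kind described can be sketched as follows. The population size, mutation rate, and local-search budget below are illustrative assumptions, and the deletion/reactivation operator and per-tissue feature spaces of the paper are omitted.

```python
import random

def ga_hill_climb(fitness, dim, pop_size=20, gens=50, step=0.1, seed=0):
    """Minimize `fitness` over R^dim: the GA explores globally via crossover
    and mutation, while a hill climber locally refines each generation's best
    individual (the hybridization the abstract credits with faster, more
    robust convergence)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]

    def mutate(ind):
        return [g + rng.gauss(0, step) if rng.random() < 0.2 else g for g in ind]

    def crossover(a, b):
        return [ai if rng.random() < 0.5 else bi for ai, bi in zip(a, b)]

    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]
        best = elite[0]
        for _ in range(10):                  # hill climber: local refinement
            cand = mutate(best)
            if fitness(cand) < fitness(best):
                best = cand
        elite[0] = best
        pop = elite + [mutate(crossover(rng.choice(elite), rng.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=fitness)
```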
Gulf war illness--better, worse, or just the same? A cohort study.
Hotopf, Matthew; David, Anthony S; Hull, Lisa; Nikalaou, Vasilis; Unwin, Catherine; Wessely, Simon
2003-12-13
Firstly, to describe changes in the health of Gulf war veterans studied in a previous occupational cohort study and to compare outcome with comparable non-deployed military personnel. Secondly, to determine whether differences in prevalence between Gulf veterans and controls at follow up can be explained by greater persistence or greater incidence of disorders. Occupational cohort study in the form of a postal survey. Military personnel who served in the 1991 Persian Gulf war; personnel who served on peacekeeping duties to Bosnia; military personnel who were deployed elsewhere ("Era" controls). All participants had responded to a previous survey. United Kingdom. Self reported fatigue measured on the Chalder fatigue scale; psychological distress measured on the general health questionnaire, physical functioning and health perception on the SF-36; and a count of physical symptoms. Gulf war veterans experienced a modest reduction in prevalence of fatigue (48.8% at stage 1, 43.4% at stage 2) and psychological distress (40.0% stage 1, 37.1% stage 2) but a slight worsening of physical functioning on the SF-36 (90.3 stage 1, 88.7 stage 2). Compared with the other cohorts Gulf veterans continued to experience poorer health on all outcomes, although physical functioning also declined in Bosnia veterans. Era controls showed both lower incidence of fatigue than Gulf veterans, and both comparison groups showed less persistence of fatigue compared with Gulf veterans. Gulf war veterans remain a group with many symptoms of ill health. The excess of illness at follow up is explained by both higher incidence and greater persistence of symptoms.
Subcortical brain atrophy in Gulf War Illness.
Christova, Peka; James, Lisa M; Engdahl, Brian E; Lewis, Scott M; Carpenter, Adam F; Georgopoulos, Apostolos P
2017-09-01
Gulf War Illness (GWI) is a multisystem disorder that has affected a substantial number of veterans who served in the 1990-1991 Gulf War. The brain is prominently affected, as manifested by the presence of neurological, cognitive and mood symptoms. Although brain dysfunction in GWI has been well documented (EBioMedicine 12:127-32, 2016), abnormalities in brain structure have been debated. Here we report a substantial (~10%) subcortical brain atrophy in GWI comprising mainly the brainstem, cerebellum and thalamus, and, to a lesser extent, basal ganglia, amygdala and diencephalon. The highest atrophy was observed in the brainstem, followed by left cerebellum and right thalamus, then by right cerebellum and left thalamus. These findings indicate graded atrophy of regions anatomically connected through the brainstem via the crossed superior cerebellar peduncle (left cerebellum → right thalamus, right cerebellum → left thalamus). This distribution of atrophy, together with the observed systematic reduction in volume of other subcortical areas (basal ganglia, amygdala and diencephalon), resemble the distribution of atrophy seen in toxic encephalopathy (Am J Neuroradiol 13:747-760, 1992) caused by a variety of substances, including organic solvents. Given the potential exposure of Gulf War veterans to "a wide range of biological and chemical agents including sand, smoke from oil-well fires, paints, solvents, insecticides, petroleum fuels and their combustion products, organophosphate nerve agents, pyridostigmine bromide, …" (Institute of Medicine National Research Council. Gulf War and Health: Volume 1. Depleted uranium, pyridostigmine bromide, sarin, and vaccines. National Academies Press, Washington DC, 2000), it is reasonable to suppose that such exposures, alone or in combination, could underlie the subcortical atrophy observed.
TOL, WIETSE A.; KOMPROE, IVAN H.; JORDANS, MARK J.D.; VALLIPURAM, ANAVARATHAN; SIPSMA, HEATHER; SIVAYOKAN, SAMBASIVAMOORTHY; MACY, ROBERT D.; DE JONG, JOOP T.
2012-01-01
We aimed to examine outcomes, moderators and mediators of a preventive school-based mental health intervention implemented by paraprofessionals in a war-affected setting in northern Sri Lanka. A cluster randomized trial was employed. Subsequent to screening 1,370 children in randomly selected schools, 399 children were assigned to an intervention (n=199) or waitlist control condition (n=200). The intervention consisted of 15 manualized sessions over 5 weeks of cognitive behavioral techniques and creative expressive elements. Assessments took place before, 1 week after, and 3 months after the intervention. Primary outcomes included post-traumatic stress disorder (PTSD), depressive, and anxiety symptoms. No main effects on primary outcomes were identified. A main effect in favor of intervention for conduct problems was observed. This effect was stronger for younger children. Furthermore, we found intervention benefits for specific subgroups. Stronger effects were found for boys with regard to PTSD and anxiety symptoms, and for younger children on pro-social behavior. Moreover, we found stronger intervention effects on PTSD, anxiety, and function impairment for children experiencing lower levels of current war-related stressors. Girls in the intervention condition showed smaller reductions on PTSD symptoms than waitlisted girls. We conclude that preventive school-based psychosocial interventions in volatile areas characterized by ongoing war-related stressors may effectively improve indicators of psychological wellbeing and posttraumatic stress-related symptoms in some children. However, they may undermine natural recovery for others. Further research is necessary to examine how gender, age and current war-related experiences contribute to differential intervention effects. PMID:22654944
Operational Reserve: National Guard Readiness when Current Conflicts End
2010-03-01
toothpaste back in the tube”17 With probable post war reduction in DOD funding, it is not realistic to assume that the National Guard will obtain...necessitates that we don’t try to put the toothpaste back in the tube. We cannot undo the policies and procedures that have gotten us to the current state
The Soviet School System during Nazi Occupation (1941-1944)
ERIC Educational Resources Information Center
Krinko, Evgeny Fedorovich
2016-01-01
The article explores Soviet schooling in the occupied territory of the USSR during the Great Patriotic War. The author considers such issues as the reduction in the number of schools, changes in curricular content, and problems in the organization of schooling and the work of teachers. The article notes the effects of various factors on the…
Discovering Social Inequality: Dutch Educational Research in the Post-War Era
ERIC Educational Resources Information Center
Bakker, Nelleke; Amsing, Hilda T. A.
2012-01-01
Between the 1940s and 1960s across Western Europe a spirit of reform along comprehensive lines manifested itself in secondary education, aiming at a reduction of the existing social inequality of educational chances. These reforms are said to be rooted in new policies and in new approaches in educational studies. This article explores the…
Image-classification-based global dimming algorithm for LED backlights in LCDs
NASA Astrophysics Data System (ADS)
Qibin, Feng; Huijie, He; Dong, Han; Lei, Zhang; Guoqiang, Lv
2015-07-01
Backlight dimming can help LCDs reduce power consumption and improve CR. With fixed parameters, a dimming algorithm cannot achieve satisfactory results for all kinds of images. The paper introduces an image-classification-based global dimming algorithm. The proposed classification method, designed specifically for backlight dimming, is based on the luminance and CR of input images. The parameters for backlight dimming level and pixel compensation are adaptive to the image classifications. The simulation results show that the classification-based dimming algorithm achieves an 86.13% improvement in power reduction compared with dimming without classification, with almost the same display quality. A prototype was developed. There are no perceived distortions when playing videos. The practical average power reduction of the prototype TV is 18.72%, compared with a common TV without dimming.
Joshi, Anuja; Gislason-Lee, Amber J; Keeble, Claire; Sivananthan, Uduvil M
2017-01-01
Objective: The aim of this research was to quantify the reduction in radiation dose facilitated by image processing alone for percutaneous coronary intervention (PCI) patient angiograms, without reducing the perceived image quality required to confidently make a diagnosis. Methods: Incremental amounts of image noise were added to five PCI angiograms, simulating the angiogram as having been acquired at corresponding lower dose levels (10–89% dose reduction). 16 observers with relevant experience scored the image quality of these angiograms in 3 states—with no image processing and with 2 different modern image processing algorithms applied. These algorithms are used on state-of-the-art and previous generation cardiac interventional X-ray systems. Ordinal regression allowing for random effects and the delta method were used to quantify the dose reduction possible by the processing algorithms, for equivalent image quality scores. Results: Observers rated the quality of the images processed with the state-of-the-art and previous generation image processing with a 24.9% and 15.6% dose reduction, respectively, as equivalent in quality to the unenhanced images. The dose reduction facilitated by the state-of-the-art image processing relative to previous generation processing was 10.3%. Conclusion: Results demonstrate that statistically significant dose reduction can be facilitated with no loss in perceived image quality using modern image enhancement; the most recent processing algorithm was more effective in preserving image quality at lower doses. Advances in knowledge: Image enhancement was shown to maintain perceived image quality in coronary angiography at a reduced level of radiation dose using computer software to produce synthetic images from real angiograms simulating a reduction in dose. PMID:28124572
An algorithm for U-Pb isotope dilution data reduction and uncertainty propagation
NASA Astrophysics Data System (ADS)
McLean, N. M.; Bowring, J. F.; Bowring, S. A.
2011-06-01
High-precision U-Pb geochronology by isotope dilution-thermal ionization mass spectrometry is integral to a variety of Earth science disciplines, but its ultimate resolving power is quantified by the uncertainties of calculated U-Pb dates. As analytical techniques have advanced, formerly small sources of uncertainty are increasingly important, and thus previous simplifications for data reduction and uncertainty propagation are no longer valid. Although notable previous efforts have treated propagation of correlated uncertainties for the U-Pb system, the equations, uncertainties, and correlations have been limited in number and subject to simplification during propagation through intermediary calculations. We derive and present a transparent U-Pb data reduction algorithm that transforms raw isotopic data and measured or assumed laboratory parameters into the isotopic ratios and dates geochronologists interpret without making assumptions about the relative size of sample components. To propagate uncertainties and their correlations, we describe, in detail, a linear algebraic algorithm that incorporates all input uncertainties and correlations without limiting or simplifying covariance terms to propagate them through intermediate calculations. Finally, a weighted mean algorithm is presented that utilizes matrix elements from the uncertainty propagation algorithm to propagate random and systematic uncertainties for data comparison between other U-Pb labs and other geochronometers. The linear uncertainty propagation algorithms are verified with Monte Carlo simulations of several typical analyses. We propose that our algorithms be considered by the community for implementation to improve the collaborative science envisioned by the EARTHTIME initiative.
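To first order, the linear algebraic propagation the abstract describes reduces to the standard covariance transport rule Σ_out = J Σ Jᵀ, which carries all input correlations through without dropping covariance terms. A minimal sketch follows; the Jacobian and covariance used in the test are illustrative, not the paper's U-Pb equations.

```python
import numpy as np

def propagate_uncertainty(jacobian, covariance):
    """First-order propagation of a covariance matrix through a function
    with the given Jacobian: Sigma_out = J @ Sigma @ J.T. Off-diagonal
    terms of `covariance` (input correlations) are fully retained."""
    J = np.asarray(jacobian, dtype=float)
    S = np.asarray(covariance, dtype=float)
    return J @ S @ J.T
```

For example, for outputs f1 = x + y and f2 = x - y the Jacobian is [[1, 1], [1, -1]], and the familiar identities var(x+y) = var(x) + var(y) + 2 cov(x, y) and var(x-y) = var(x) + var(y) - 2 cov(x, y) fall out of the matrix product.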
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niemkiewicz, J; Palmiotti, A; Miner, M
2014-06-01
Purpose: Metal in patients creates streak artifacts in CT images. When used for radiation treatment planning, these artifacts make it difficult to identify internal structures and affect radiation dose calculations, which depend on HU numbers for inhomogeneity correction. This work quantitatively evaluates a new metal artifact reduction (MAR) CT image reconstruction algorithm (GE Healthcare CT-0521-04.13-EN-US DOC1381483) when metal is present. Methods: A Gammex Model 467 Tissue Characterization phantom was used. CT images were taken of this phantom on a GE Optima580RT CT scanner with and without steel and titanium plugs using both the standard and MAR reconstruction algorithms. HU values were compared pixel by pixel to determine whether the MAR algorithm altered the HUs of normal tissues when no metal is present, and to evaluate the effect of using the MAR algorithm when metal is present. Also, CT images of patients with internal metal objects using standard and MAR reconstruction algorithms were compared. Results: Comparing the standard and MAR reconstructed images of the phantom without metal, 95.0% of pixels were within ±35 HU and 98.0% of pixels were within ±85 HU. Also, the MAR reconstruction algorithm showed significant improvement in maintaining HUs of non-metallic regions in the images taken of the phantom with metal. HU gamma analysis (2%, 2 mm) of metal vs. non-metal phantom imaging using standard reconstruction resulted in an 84.8% pass rate, compared to 96.6% for the MAR reconstructed images. CT images of patients with metal show significant artifact reduction when reconstructed with the MAR algorithm. Conclusion: CT imaging using the MAR reconstruction algorithm provides improved visualization of internal anatomy and more accurate HUs when metal is present compared to the standard reconstruction algorithm. MAR reconstructed CT images provide qualitative and quantitative improvements over current reconstruction algorithms, thus improving radiation treatment planning accuracy.
An adaptive algorithm for the detection of microcalcifications in simulated low-dose mammography.
Treiber, O; Wanninger, F; Führ, H; Panzer, W; Regulla, D; Winkler, G
2003-02-21
This paper uses the task of microcalcification detection as a benchmark problem to assess the potential for dose reduction in x-ray mammography. We present the results of a newly developed algorithm for detection of microcalcifications as a case study for a typical commercial film-screen system (Kodak Min-R 2000/2190). The first part of the paper deals with the simulation of dose reduction for film-screen mammography based on a physical model of the imaging process. Use of a more sensitive film-screen system is expected to result in additional smoothing of the image. We introduce two different models of that behaviour, called moderate and strong smoothing. We then present an adaptive, model-based microcalcification detection algorithm. Comparing detection results with ground-truth images obtained under the supervision of an expert radiologist allows us to establish the soundness of the detection algorithm. We measure the performance on the dose-reduced images in order to assess the loss of information due to dose reduction. It turns out that the smoothing behaviour has a strong influence on detection rates. For moderate smoothing, a dose reduction by 25% has no serious influence on the detection results, whereas a dose reduction by 50% already entails a marked deterioration of the performance. Strong smoothing generally leads to an unacceptable loss of image quality. The test results emphasize the impact of the more sensitive film-screen system and its characteristics on the problem of assessing the potential for dose reduction in film-screen mammography. The general approach presented in the paper can be adapted to fully digital mammography.
An adaptive algorithm for the detection of microcalcifications in simulated low-dose mammography
NASA Astrophysics Data System (ADS)
Treiber, O.; Wanninger, F.; Führ, H.; Panzer, W.; Regulla, D.; Winkler, G.
2003-02-01
This paper uses the task of microcalcification detection as a benchmark problem to assess the potential for dose reduction in x-ray mammography. We present the results of a newly developed algorithm for detection of microcalcifications as a case study for a typical commercial film-screen system (Kodak Min-R 2000/2190). The first part of the paper deals with the simulation of dose reduction for film-screen mammography based on a physical model of the imaging process. Use of a more sensitive film-screen system is expected to result in additional smoothing of the image. We introduce two different models of that behaviour, called moderate and strong smoothing. We then present an adaptive, model-based microcalcification detection algorithm. Comparing detection results with ground-truth images obtained under the supervision of an expert radiologist allows us to establish the soundness of the detection algorithm. We measure the performance on the dose-reduced images in order to assess the loss of information due to dose reduction. It turns out that the smoothing behaviour has a strong influence on detection rates. For moderate smoothing, a dose reduction by 25% has no serious influence on the detection results, whereas a dose reduction by 50% already entails a marked deterioration of the performance. Strong smoothing generally leads to an unacceptable loss of image quality. The test results emphasize the impact of the more sensitive film-screen system and its characteristics on the problem of assessing the potential for dose reduction in film-screen mammography. The general approach presented in the paper can be adapted to fully digital mammography.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daun, G.; Lenke, H.; Knackmuss, H.J.
1998-07-01
The explosive 2,4,6-trinitrotoluene (TNT), found as a major contaminant at armament plants from the two world wars, is reduced by a variety of microorganisms when electron donors such as glucose are added. This study shows that the cometabolic reduction of TNT to 2,4,6-triaminotoluene by an undefined anaerobic consortium increased considerably with increasing TNT concentrations and decreased with decreasing concentrations and feeding rates of glucose. The interactions of TNT and its reduction products with montmorillonitic clay and humic acids were investigated in abiotic adsorption experiments and during the microbial reduction of TNT. The results indicate that reduction products of TNT, particularly hydroxylaminodinitrotoluenes and 2,4,6-triaminotoluene, bind irreversibly to soil components, which would prevent or prolong mineralization of the contaminants. Irreversible binding also hinders a further spread of the contaminants through soil or leaching into the groundwater.
Wavelet denoising of multiframe optical coherence tomography data
Mayer, Markus A.; Borsdorf, Anja; Wagner, Martin; Hornegger, Joachim; Mardin, Christian Y.; Tornow, Ralf P.
2012-01-01
We introduce a novel speckle noise reduction algorithm for OCT images. Contrary to present approaches, the algorithm does not rely on simple averaging of multiple image frames or denoising on the final averaged image. Instead it uses wavelet decompositions of the single frames for a local noise and structure estimation. Based on this analysis, the wavelet detail coefficients are weighted, averaged and reconstructed. At a signal-to-noise gain at about 100% we observe only a minor sharpness decrease, as measured by a full-width-half-maximum reduction of 10.5%. While a similar signal-to-noise gain would require averaging of 29 frames, we achieve this result using only 8 frames as input to the algorithm. A possible application of the proposed algorithm is preprocessing in retinal structure segmentation algorithms, to allow a better differentiation between real tissue information and unwanted speckle noise. PMID:22435103
Wavelet denoising of multiframe optical coherence tomography data.
Mayer, Markus A; Borsdorf, Anja; Wagner, Martin; Hornegger, Joachim; Mardin, Christian Y; Tornow, Ralf P
2012-03-01
We introduce a novel speckle noise reduction algorithm for OCT images. Contrary to present approaches, the algorithm does not rely on simple averaging of multiple image frames or denoising on the final averaged image. Instead it uses wavelet decompositions of the single frames for a local noise and structure estimation. Based on this analysis, the wavelet detail coefficients are weighted, averaged and reconstructed. At a signal-to-noise gain at about 100% we observe only a minor sharpness decrease, as measured by a full-width-half-maximum reduction of 10.5%. While a similar signal-to-noise gain would require averaging of 29 frames, we achieve this result using only 8 frames as input to the algorithm. A possible application of the proposed algorithm is preprocessing in retinal structure segmentation algorithms, to allow a better differentiation between real tissue information and unwanted speckle noise.
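The frame-wise decomposition, detail-coefficient weighting, averaging, and reconstruction pipeline can be sketched with a single-level Haar transform. This is a deliberate simplification: the paper's wavelet choice and its local noise/structure estimation are replaced here by an assumed variance-based shrinkage across frames.

```python
import numpy as np

def haar2d(x):
    """One-level orthonormal 2D Haar transform for even-sized arrays."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # row-wise lowpass
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # row-wise highpass
    def cols(m):
        return ((m[:, 0::2] + m[:, 1::2]) / np.sqrt(2),
                (m[:, 0::2] - m[:, 1::2]) / np.sqrt(2))
    aa, ah = cols(a)
    da, dd = cols(d)
    return aa, (ah, da, dd)

def ihaar2d(aa, details):
    """Inverse of haar2d."""
    ah, da, dd = details
    def icols(lo, hi):
        m = np.empty((lo.shape[0], lo.shape[1] * 2))
        m[:, 0::2] = (lo + hi) / np.sqrt(2)
        m[:, 1::2] = (lo - hi) / np.sqrt(2)
        return m
    a = icols(aa, ah)
    d = icols(da, dd)
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def multiframe_denoise(frames):
    """Decompose each frame, average approximation coefficients, shrink
    detail coefficients that are inconsistent across frames (high variance
    relative to their mean energy), then reconstruct."""
    coeffs = [haar2d(f) for f in frames]
    aa = np.mean([c[0] for c in coeffs], axis=0)
    fused = []
    for b in range(3):   # horizontal, vertical, diagonal details
        stack = np.stack([c[1][b] for c in coeffs])
        mean, var = stack.mean(axis=0), stack.var(axis=0) + 1e-12
        fused.append(mean**2 / (mean**2 + var) * mean)
    return ihaar2d(aa, tuple(fused))
```

The weighting ratio mean²/(mean² + var) keeps coefficients that agree across frames (structure) and attenuates those that fluctuate (speckle), which is the intuition behind weighting before averaging rather than averaging the final images.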
Uddin, Muhammad Shahin; Tahtali, Murat; Lambert, Andrew J; Pickering, Mark R; Marchese, Margaret; Stuart, Iain
2016-05-20
Compared with other medical-imaging modalities, ultrasound (US) imaging is a valuable way to examine the body's internal organs, and two-dimensional (2D) imaging is currently the most common technique used in clinical diagnoses. Conventional 2D US imaging systems are highly flexible cost-effective imaging tools that permit operators to observe and record images of a large variety of thin anatomical sections in real time. Recently, 3D US imaging has also been gaining popularity due to its considerable advantages over 2D US imaging. It reduces dependency on the operator and provides better qualitative and quantitative information for an effective diagnosis. Furthermore, it provides a 3D view, which allows the observation of volume information. The major shortcoming of any type of US imaging is the presence of speckle noise. Hence, speckle reduction is vital in providing a better clinical diagnosis. The key objective of any speckle-reduction algorithm is to attain a speckle-free image while preserving the important anatomical features. In this paper we introduce a nonlinear multi-scale complex wavelet-diffusion based algorithm for speckle reduction and sharp-edge preservation of 2D and 3D US images. In the proposed method we use a Rayleigh and Maxwell-mixture model for 2D and 3D US images, respectively, where a genetic algorithm is used in combination with an expectation maximization method to estimate mixture parameters. Experimental results using both 2D and 3D synthetic, physical phantom, and clinical data demonstrate that our proposed algorithm significantly reduces speckle noise while preserving sharp edges without discernible distortions. The proposed approach performs better than the state-of-the-art approaches in both qualitative and quantitative measures.
Li, Ping; Xu, Lei; Yang, Lin; Wang, Rui; Hsieh, Jiang; Sun, Zhonghua; Fan, Zhanming; Leipsic, Jonathon A
2018-05-02
The aim of this study was to investigate the use of de-blooming algorithm in coronary CT angiography (CCTA) for optimal evaluation of calcified plaques. Calcified plaques were simulated on a coronary vessel phantom and a cardiac motion phantom. Two convolution kernels, standard (STND) and high-definition standard (HD STND), were used for imaging reconstruction. A dedicated de-blooming algorithm was used for imaging processing. We found a smaller bias towards measurement of stenosis using the de-blooming algorithm (STND: bias 24.6% vs 15.0%, range 10.2% to 39.0% vs 4.0% to 25.9%; HD STND: bias 17.9% vs 11.0%, range 8.9% to 30.6% vs 0.5% to 21.5%). With use of de-blooming algorithm, specificity for diagnosing significant stenosis increased from 45.8% to 75.0% (STND), from 62.5% to 83.3% (HD STND); while positive predictive value (PPV) increased from 69.8% to 83.3% (STND), from 76.9% to 88.2% (HD STND). In the patient group, reduction in calcification volume was 48.1 ± 10.3%, reduction in coronary diameter stenosis over calcified plaque was 52.4 ± 24.2%. Our results suggest that the novel de-blooming algorithm could effectively decrease the blooming artifacts caused by coronary calcified plaques, and consequently improve diagnostic accuracy of CCTA in assessing coronary stenosis.
A rate-constrained fast full-search algorithm based on block sum pyramid.
Song, Byung Cheol; Chun, Kang-Wook; Ra, Jong Beom
2005-03-01
This paper presents a fast full-search algorithm (FSA) for rate-constrained motion estimation. The proposed algorithm, which is based on the block sum pyramid frame structure, successively eliminates unnecessary search positions according to rate-constrained criterion. This algorithm provides the identical estimation performance to a conventional FSA having rate constraint, while achieving considerable reduction in computation.
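The elimination test behind such fast full-search algorithms can be illustrated with a toy sketch (ours, not the paper's implementation): since |sum(A) - sum(B)| is a lower bound on the sum of absolute differences SAD(A, B), any search position whose bound already exceeds the best SAD found so far can be skipped without changing the full-search result. A real block sum pyramid tightens this bound level by level; the sketch below uses only the top pyramid level and omits the rate-constrained term.

```python
# A minimal sketch of successive elimination via block sums:
# |sum(A) - sum(B)| <= SAD(A, B), so a candidate whose block-sum bound
# already exceeds the best SAD so far is skipped without a full SAD.

def sad(a, b):
    """Full sum of absolute differences between two equal-size blocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def block_sum_search(current, candidates):
    """Return (best_index, best_sad, full_sads_computed).

    `current` is the block to match; `candidates` holds the blocks at the
    search positions. Each candidate is screened with the block-sum lower
    bound before the (expensive) full SAD is evaluated.
    """
    cur_sum = sum(current)
    best_idx, best_sad, evaluated = None, float("inf"), 0
    for i, cand in enumerate(candidates):
        # Lower bound from the top pyramid level (one block sum per block).
        if abs(cur_sum - sum(cand)) >= best_sad:
            continue  # eliminated without computing a full SAD
        evaluated += 1
        d = sad(current, cand)
        if d < best_sad:
            best_idx, best_sad = i, d
    return best_idx, best_sad, evaluated
```

Because the bound is exact, the winner is identical to exhaustive full search; only the number of full SAD evaluations shrinks.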
NASA Astrophysics Data System (ADS)
Abdulle, Abdinur; Tan, Adhwa Amir; Pradhan, Biswajeet; Abdullahi, Saleh
2016-06-01
The aim of this study is to analyse land use and land cover changes in the studied area during 1992-2015 and, in particular, to evaluate the effect of civil war on these changes. Three Landsat images were used: Landsat 4 (1992), Landsat 7 (2000) and Landsat 8 (2015). Changes were assessed using three supervised classification algorithms: support vector machine (SVM), minimum distance classifier, and Mahalanobis distance classifier. The results show that SVM provides the highest overall accuracy, 98.5%, for the years 2000 and 2015, with a kappa coefficient of 0.9803 in 2015. The change detection results show that the greatest changes occurred between 1992 and 2000, when vegetation land cover dropped to 11.1% and undeveloped area increased to 11.4%. For 2000-2015, the largest change was in built-up area, which grew by 3.30%, while undeveloped area and vegetation land cover kept decreasing, by 2.64% and 1.93% respectively.
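The two accuracy figures quoted in this abstract, overall accuracy and the kappa coefficient, are both derived from a classification confusion matrix. A minimal sketch with a made-up 2x2 matrix (not the study's data):

```python
# Overall accuracy and Cohen's kappa from a confusion matrix
# (rows: reference classes, columns: mapped classes). The matrix used in
# the example call is hypothetical, not from the Landsat study.

def accuracy_and_kappa(cm):
    n = sum(sum(row) for row in cm)
    observed = sum(cm[i][i] for i in range(len(cm))) / n
    # Chance agreement: sum over classes of row-marginal * column-marginal.
    expected = sum(
        sum(cm[i]) * sum(row[i] for row in cm) for i in range(len(cm))
    ) / (n * n)
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

acc, kappa = accuracy_and_kappa([[45, 5],
                                 [5, 45]])  # -> 0.9 accuracy, 0.8 kappa
```

Kappa discounts agreement expected by chance, which is why it sits below the raw overall accuracy for the same matrix.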
Scalable NIC-based reduction on large-scale clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, A.; Fernández, J. C.; Petrini, F.
2003-01-01
Many parallel algorithms require efficient support for reduction collectives. Over the years, researchers have developed optimal reduction algorithms by taking into account system size, data size, and the complexities of reduction operations. However, all of these algorithms have assumed that the reduction processing takes place on the host CPU. Modern Network Interface Cards (NICs) sport programmable processors with substantial memory, and thus introduce a fresh variable into the equation. This raises the following interesting challenge: can we take advantage of modern NICs to implement fast reduction operations? In this paper, we take on this challenge in the context of large-scale clusters. Through experiments on the 960-node, 1920-processor ASCI Linux Cluster (ALC) located at Lawrence Livermore National Laboratory, we show that NIC-based reductions indeed perform with reduced latency and improved consistency over host-based algorithms for the common case, and that these benefits scale as the system grows. In the largest configuration tested, 1812 processors, our NIC-based algorithm can sum a single-element vector in 73 μs with 32-bit integers and in 118 μs with 64-bit floating-point numbers. These results represent an improvement, respectively, of 121% and 39% with respect to the production-level MPI library.
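The reduction collectives discussed here, whether run on the host or offloaded to the NIC, typically follow a binomial-tree communication pattern. A toy simulation of that pattern (ours, for illustration only; it contains nothing NIC-specific):

```python
# Toy simulation of a binomial-tree sum reduction across p ranks.
# In round k (k = 1, 2, 4, ...), every rank whose lowest set bit is k
# sends its partial sum to the partner rank with that bit cleared, so the
# total reaches rank 0 in ceil(log2(p)) rounds.

def tree_reduce(values):
    vals = list(values)  # vals[r] is rank r's partial sum
    p = len(vals)
    k = 1
    rounds = 0
    while k < p:
        for r in range(p):
            if r & k and not r & (k - 1):  # rank r sends in this round
                partner = r & ~k           # partner has bit k cleared
                vals[partner] += vals[r]
        k <<= 1
        rounds += 1
    return vals[0], rounds
```

The logarithmic round count is what makes the latency of a well-implemented reduction grow so slowly with system size, on 1812 processors only 11 rounds are needed.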
Nuclear war in the Middle East: where is the voice of medicine and public health.
Dallas, Cham E; Burkle, Frederick M
2011-10-01
Once again, the politically volatile Middle East and its accompanying rhetoric have escalated the risk of a major nuclear exchange. Diplomatic efforts have failed to make the medical consequences of such an exchange a leading element in negotiations. The medical and academic communities share this denial. Without exaggeration, the harsh reality of the enormous consequences of an imminently conceivable nuclear war between Iran and Israel will encompass unprecedented millions of dead and an unavoidable decline in public health and environmental devastation that would impact major populations in the Middle East for decades to come. Nuclear deterrence and the uncomfortable but real medical and public health consequences must become an integral part of a broader global health diplomacy that emphasizes health security along with poverty reduction and good governance.
Conboy, Lisa; Gerke, Travis; Hsu, Kai-Yin; St John, Meredith; Goldstein, Marc; Schnyer, Rosa
2016-01-01
Gulf War Illness is a Complex Medical Illness characterized by multiple symptoms, including fatigue, sleep and mood disturbances, cognitive dysfunction, and musculoskeletal pain affecting veterans of the first Gulf War. No standard of care treatment exists. This pragmatic Randomized Clinical Trial tested the effects of individualized acupuncture treatments offered in extant acupuncture practices in the community; practitioners had at least 5 years of experience plus additional training provided by the study. Veterans with diagnosed symptoms of Gulf War Illness were randomized to either six months of biweekly acupuncture treatments (group 1, n = 52) or 2 months of waitlist followed by weekly acupuncture treatments (group 2, n = 52). Measurements were taken at baseline, 2, 4 and 6 months. The primary outcome is the SF-36 physical component scale score (SF-36P) and the secondary outcome is the McGill Pain scale. Of the 104 subjects who underwent randomization, 85 completed the protocol (82%). A clinically and statistically significant average improvement of 9.4 points (p = 0.03) in the SF-36P was observed for group 1 at month 6 compared to group 2, adjusting for baseline pain. The secondary outcome of McGill pain index produced similar results; at 6 months, group 1 was estimated to experience a reduction of approximately 3.6 points (p = 0.04) compared to group 2. Individualized acupuncture treatment of sufficient dose appears to offer significant relief of physical disability and pain for veterans with Gulf War Illness. This work was supported by the Office of the Assistant Secretary of Defense for Health Affairs through the Gulf War Illness Research Program under Award No. W81XWH-09-2-0064. Opinions, interpretations, conclusions and recommendations are those of the author and are not necessarily endorsed by the Department of Defense. ClinicalTrials.gov NCT01305811.
Saile, Regina; Neuner, Frank; Ertl, Verena; Catani, Claudia
2013-06-01
Violence against women that is perpetrated by an intimate partner prevails as one of the most widespread human rights violations in virtually all societies of the world. Women in resource-poor countries, in particular those affected by recent war, appear to be at high risk of experiencing partner violence. Although there has been a longstanding assumption that organised violence at a societal level is transmitted to an interpersonal level, little is known about the link between exposure to war and familial violence. We conducted an epidemiological survey in 2010 with 2nd-grade students and their male and female guardians from nine heavily war-affected communities in Northern Uganda employing structured interviews and standardized questionnaires. The present study analysed a subsample of 235 guardian couples from seven rural communities in order to determine the prevalence and predictors of current partner violence experienced by women in the context of the past war. Study results revealed a high prevalence of ongoing partner violence experienced by female partners. In the past year, 80% of women reported at least one type of verbal/psychological abuse, 71% were exposed to at least one type of physical abuse, 52% suffered isolation and 23% fell victim to sexual violence. Findings from linear regression analyses showed that women's prior exposure to war-related traumatic events, women's re-experiencing symptoms and men's level of alcohol-related problems were associated with higher levels of partner violence against women. Differential effects of the predictor variables emerged with respect to different subtypes of partner violence. The findings suggest that partner violence against women constitutes a major problem in rural Northern Uganda. Programmes for the prevention and reduction of partner violence against women need to address high levels of hazardous drinking in men as well as women's prior traumatisation. 
In addition, different patterns of partner violence should be taken into account.
Nuclear weapons modernizations
NASA Astrophysics Data System (ADS)
Kristensen, Hans M.
2014-05-01
This article reviews the nuclear weapons modernization programs underway in the world's nine nuclear weapons states. It concludes that despite significant reductions in overall weapons inventories since the end of the Cold War, the pace of reductions is slowing - four of the nuclear weapons states are even increasing their arsenals, and all the nuclear weapons states are busy modernizing their remaining arsenals in what appears to be a dynamic and counterproductive nuclear competition. The author questions whether perpetual modernization combined with no specific plan for the elimination of nuclear weapons is consistent with the nuclear Non-Proliferation Treaty and concludes that new limits on nuclear modernizations are needed.
Survey of abuses against injecting drug users in Indonesia
Davis, Sara LM; Triwahyuono, Agus; Alexander, Risa
2009-01-01
In Indonesia, an ongoing government "war on drugs" has resulted in numerous arrests and anecdotal reports of abuse in detention, but to date there has been little documentation or analysis of this issue. JANGKAR (also known in English as the Indonesian Harm Reduction Network), a nongovernmental organization (NGO) based in Jakarta, surveyed 1106 injecting drug users in 13 cities about their experiences of police abuse. Of those interviewed, 667 or 60% reported physical abuse by police. These findings indicate the importance of continuing efforts to promote police reform and harm reduction in Indonesia. PMID:19852845
Meyer, Michael G.; Hayenga, Jon; Neumann, Thomas; Katdare, Rahul; Presley, Chris; Steinhauer, David; Bell, Timothy; Lancaster, Christy; Nelson, Alan C.
2015-01-01
The war against cancer has yielded important advances in the early diagnosis and treatment of certain cancer types, but the poor detection rate and 5-year survival rate for lung cancer remains little changed over the past 40 years. Early detection through emerging lung cancer screening programs promises the most reliable means of improving mortality. Sputum cytology has been tried without success because sputum contains few malignant cells that are difficult for cytologists to detect. However, research has shown that sputum contains diagnostic malignant cells and could serve as a means of lung cancer detection if those cells could be detected and correctly characterized. Recently, the National Lung Cancer Screening Trial reported that screening by three consecutive low-dose X-ray CT scans provides a 20% reduction in lung cancer mortality compared to chest X-ray. This reduction in mortality, however, comes with an unacceptable false positive rate that increases patient risks and the overall cost of lung cancer screening. This article reviews the LuCED® test for detecting early lung cancer. LuCED is based on patient sputum that is enriched for bronchial epithelial cells. The enriched sample is then processed on the Cell-CT®, which images cells in three dimensions with sub-micron resolution. Algorithms are applied to the 3D cell images to extract morphometric features that drive a classifier to identify cells that have abnormal characteristics. The final status of these candidate abnormal cells is established by the pathologist's manual review. LuCED promotes accurate cell classification which could enable cost effective detection of lung cancer. PMID:26148817
Gold rush - A swarm dynamics in games
NASA Astrophysics Data System (ADS)
Zelinka, Ivan; Bukacek, Michal
2017-07-01
This paper is focused on swarm intelligence techniques and their practical use in computer games. The aim is to show how swarm dynamics can be generated by a multiplayer game, then recorded, analyzed and eventually controlled. In this paper we also discuss the possibility of using swarm intelligence instead of game players. Based on our previous experiments, two games using swarm algorithms are mentioned briefly here: the strategy game StarCraft: Brood War, and TicTacToe, in which the SOMA algorithm has also taken the role of a player against a human player. The open research reported here has shown the potential benefit of swarm computation in the field of strategy games and of player strategies based on swarm behavior recording and analysis. We propose a new game called Gold Rush as an experimental environment for human or artificial swarm behavior and its subsequent analysis.
Chung, King
2012-01-01
The objectives of this study were: (1) to examine the effect of wide dynamic range compression (WDRC) and modulation-based noise reduction (NR) algorithms on wind noise levels at the hearing aid output; and (2) to derive effective strategies for clinicians and engineers to reduce wind noise in hearing aids. Three digital hearing aids were fitted to KEMAR. The noise output was recorded at flow velocities of 0, 4.5, 9.0, and 13.5 m/s in a wind tunnel as the KEMAR head was turned from 0° to 360°. Flow noise levels were compared between the 1:1 linear and 3:1 WDRC conditions, and between NR-activated and NR-deactivated conditions when the hearing aid was programmed to the directional and omnidirectional modes. The results showed that: (1) WDRC increased low-level noise and reduced high-level noise; and (2) different noise reduction algorithms provided different amounts of wind noise reduction in different microphone modes, frequency regions, flow velocities, and head angles. Wind noise can be reduced by decreasing the gain for low-level inputs, increasing the compression ratio for high-level inputs, and activating modulation-based noise reduction algorithms.
Harlander, Niklas; Rosenkranz, Tobias; Hohmann, Volker
2012-08-01
Single-channel noise reduction has been well investigated and seems to have reached its limits in terms of speech intelligibility improvement; however, the quality of such schemes can still be advanced. This study tests to what extent novel model-based processing schemes might improve performance, in particular for non-stationary noise conditions. Two prototype model-based algorithms, a speech-model-based and an auditory-model-based algorithm, were compared to a state-of-the-art non-parametric minimum statistics algorithm. A speech intelligibility test, preference rating, and listening effort scaling were performed. Additionally, three objective quality measures for the signal, background, and overall distortions were applied. For a better comparison of all algorithms, particular attention was given to the usage of a similar Wiener-based gain rule. The perceptual investigation was performed with fourteen hearing-impaired subjects. The results revealed that the non-parametric algorithm and the auditory-model-based algorithm did not affect speech intelligibility, whereas the speech-model-based algorithm slightly decreased intelligibility. In terms of subjective quality, both model-based algorithms performed better than the unprocessed condition and the reference, in particular for highly non-stationary noise environments. The data support the hypothesis that model-based algorithms are promising for improving performance in non-stationary noise conditions.
Code of Federal Regulations, 2011 CFR
2011-04-01
... determined after reduction by any income, war profits, or excess profits taxes imposed on or with respect to... foreign corporation for foreign income taxes paid with respect to accumulated profits of taxable years of... of a foreign corporation for foreign income taxes paid with respect to accumulated profits of taxable...
Charting the Course of the Voyenno-Morskoy Flot: Soviet Naval Strategy towards the Year 2000
1991-05-13
December 1987, in Steven P. Adragna, "Doctrine and Strategy," Orbis, 33.2 (1989): 168. ...requirements are a reduction in the size of the armed forces...Patriotic War 1941-45. Trans. U.S. Naval Institute. Moscow: Voyenizdat, 1973. Adragna, Stephen P. "A New Soviet Military? Doctrine and Strategy." Orbis 33.2
A Search for New Directions in the War Against Poverty. Staff Paper.
ERIC Educational Resources Information Center
Sheppard, Harold L.
Demographic surveys and data could be used to assess programs and policies directly and indirectly concerned with the reduction of poverty, and, through the use of such survey data, to point to a number of population subgroupings which are or are not moving out of poverty. Annually collected Census Bureau facts, the basis of much of the analysis…
A novel computer algorithm for modeling and treating mandibular fractures: A pilot study.
Rizzi, Christopher J; Ortlip, Timothy; Greywoode, Jewel D; Vakharia, Kavita T; Vakharia, Kalpesh T
2017-02-01
To describe a novel computer algorithm that can model mandibular fracture repair. To evaluate the algorithm as a tool to model mandibular fracture reduction and hardware selection. Retrospective pilot study combined with cross-sectional survey. A computer algorithm utilizing Aquarius Net (TeraRecon, Inc, Foster City, CA) and Adobe Photoshop CS6 (Adobe Systems, Inc, San Jose, CA) was developed to model mandibular fracture repair. Ten different fracture patterns were selected from nine patients who had already undergone mandibular fracture repair. The preoperative computed tomography (CT) images were processed with the computer algorithm to create virtual images that matched the actual postoperative three-dimensional CT images. A survey comparing the true postoperative image with the virtual postoperative images was created and administered to otolaryngology resident and attending physicians. They were asked to rate on a scale from 0 to 10 (0 = completely different; 10 = identical) the similarity between the two images in terms of the fracture reduction and fixation hardware. Ten mandible fracture cases were analyzed and processed. There were 15 survey respondents. The mean score for overall similarity between the images was 8.41 ± 0.91; the mean score for similarity of fracture reduction was 8.61 ± 0.98; and the mean score for hardware appearance was 8.27 ± 0.97. There were no significant differences between attending and resident responses. There were no significant differences based on fracture location. This computer algorithm can accurately model mandibular fracture repair. Images created by the algorithm are highly similar to true postoperative images. The algorithm can potentially assist a surgeon planning mandibular fracture repair. Level of Evidence: 4. Laryngoscope, 127:331-336, 2017.
SIOP for Perestroika. Research report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szafranski, R.
1990-04-01
The pursuit of greater stability through arms reductions is an important component of perestroika. Assuming strategic weapons reductions, the general nuclear war plan, the Single Integrated Operational Plan (SIOP), will change to employ fewer nuclear arms. If stability and threat reduction are authentic goals, the composition of nuclear offensive forces and the SIOP alert force will evolve accordingly. Greater reliance will likely be placed on bombers. The United States and the Soviet Union can use the opportunity provided by perestroika to agree that the only legitimate role of nuclear weapons is to deter nuclear weapons by threatening nuclear reprisal or punishment. Both sides can then share a strategic catechism that would allow them to move toward small reprisal forces.
NASA Astrophysics Data System (ADS)
Lee, Donghoon; Choi, Sunghoon; Kim, Hee-Joung
2018-03-01
When processing medical images, image denoising is an important pre-processing step. Various image denoising algorithms have been developed in the past few decades. Recently, image denoising using deep learning methods has shown excellent performance compared to conventional image denoising algorithms. In this study, we introduce an image denoising technique based on a convolutional denoising autoencoder (CDAE) and evaluate clinical applications by comparison with existing image denoising algorithms. We train the proposed CDAE model using 3000 chest radiograms as training data. To evaluate the performance of the developed CDAE model, we compare it with conventional denoising algorithms including the median filter, total variation (TV) minimization, and non-local mean (NLM) algorithms. Furthermore, to verify the clinical effectiveness of the developed denoising model with CDAE, we investigate the performance of the developed denoising algorithm on chest radiograms acquired from real patients. The results demonstrate that the proposed denoising algorithm developed using CDAE achieves a superior noise-reduction effect in chest radiograms compared to the TV minimization and NLM algorithms, which are state-of-the-art algorithms for image noise reduction. For example, the peak signal-to-noise ratio and structural similarity index measure of CDAE were at least 10% higher compared to conventional denoising algorithms. In conclusion, the image denoising algorithm developed using CDAE effectively eliminated noise without loss of information on anatomical structures in chest radiograms. It is expected that the proposed denoising algorithm developed using CDAE will be effective for medical images with microscopic anatomical structures, such as terminal bronchioles.
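The peak signal-to-noise ratio used to compare the denoisers above is a simple function of the mean squared error. A minimal sketch on toy 8-bit pixel data (the values in the test are made up):

```python
import math

# Peak signal-to-noise ratio: PSNR = 10 * log10(MAX^2 / MSE), with
# MAX = 255 for 8-bit images. Higher PSNR means the denoised image is
# closer to the reference.

def psnr(reference, test, max_val=255.0):
    """PSNR in dB between two equal-length pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Note that PSNR alone rewards pixel-wise fidelity; that is why studies such as this one pair it with a structural measure like SSIM.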
Using Mathematics to Make Computing on Encrypted Data Secure and Practical
2015-12-01
(LLL) lattice basis reduction algorithm, G-Lattice, Cryptography, Security, Gentry-Szydlo Algorithm, Ring-LWE ...with symmetry be further developed, in order to quantify the security of lattice-based cryptography, including especially the security of homomorphic...the Gentry-Szydlo algorithm, and the ideas should be applicable to a range of questions in cryptography. The new algorithm of Lenstra and Silverberg
Simulated bi-SQUID Arrays Performing Direction Finding
2015-09-01
First, we applied the multiple signal classification (MUSIC) algorithm on linearly polarized signals. We included multiple signals in the output...both of the same frequency and different frequencies. Next, we explored a modified MUSIC algorithm called dimensionality reduction MUSIC (DR-MUSIC)... The MUSIC algorithm is able to determine the AoA from the simulated SQUID data for linearly polarized signals. The MUSIC algorithm could accurately find
Parihar, Vipan K; Hattiangady, Bharathi; Shuai, Bing; Shetty, Ashok K
2013-01-01
Impairments in mood and cognitive function are the key brain abnormalities observed in Gulf war illness (GWI), a chronic multisymptom health problem afflicting ∼25% of veterans who served in the Persian Gulf War-1. Although the precise cause of GWI is still unknown, combined exposure to a nerve gas prophylaxis drug pyridostigmine bromide (PB) and pesticides DEET and permethrin during the war has been proposed as one of the foremost causes of GWI. We investigated the effect of 4 weeks of exposure to Gulf war illness-related (GWIR) chemicals in the absence or presence of mild stress on mood and cognitive function, dentate gyrus neurogenesis, and neurons, microglia, and astrocytes in the hippocampus. Combined exposure to low doses of GWIR chemicals PB, DEET, and permethrin induced depressive- and anxiety-like behavior and spatial learning and memory dysfunction. Application of mild stress in the period of exposure to chemicals exacerbated the extent of mood and cognitive dysfunction. Furthermore, these behavioral impairments were associated with reduced hippocampal volume and multiple cellular alterations such as chronic reductions in neural stem cell activity and neurogenesis, partial loss of principal neurons, and mild inflammation comprising sporadic occurrence of activated microglia and significant hypertrophy of astrocytes. The results show the first evidence of an association between mood and cognitive dysfunction and hippocampal pathology epitomized by decreased neurogenesis, partial loss of principal neurons, and mild inflammation in a model of GWI. Hence, treatment strategies that are efficacious for enhancing neurogenesis and suppressing inflammation may be helpful for alleviation of mood and cognitive dysfunction observed in GWI. PMID:23807240
Gulf war illness—better, worse, or just the same? A cohort study
Hotopf, Matthew; David, Anthony S; Hull, Lisa; Nikalaou, Vasilis; Unwin, Catherine; Wessely, Simon
2003-01-01
Objectives Firstly, to describe changes in the health of Gulf war veterans studied in a previous occupational cohort study and to compare outcome with comparable non-deployed military personnel. Secondly, to determine whether differences in prevalence between Gulf veterans and controls at follow up can be explained by greater persistence or greater incidence of disorders. Design Occupational cohort study in the form of a postal survey. Participants Military personnel who served in the 1991 Persian Gulf war; personnel who served on peacekeeping duties to Bosnia; military personnel who were deployed elsewhere (“Era” controls). All participants had responded to a previous survey. Setting United Kingdom. Main outcome measures Self reported fatigue measured on the Chalder fatigue scale; psychological distress measured on the general health questionnaire, physical functioning and health perception on the SF-36; and a count of physical symptoms. Results Gulf war veterans experienced a modest reduction in prevalence of fatigue (48.8% at stage 1, 43.4% at stage 2) and psychological distress (40.0% stage 1, 37.1% stage 2) but a slight worsening of physical functioning on the SF-36 (90.3 stage 1, 88.7 stage 2). Compared with the other cohorts Gulf veterans continued to experience poorer health on all outcomes, although physical functioning also declined in Bosnia veterans. Era controls showed both lower incidence of fatigue than Gulf veterans, and both comparison groups showed less persistence of fatigue compared with Gulf veterans. Conclusions Gulf war veterans remain a group with many symptoms of ill health. The excess of illness at follow up is explained by both higher incidence and greater persistence of symptoms. PMID:14670878
The SCUBA Data Reduction Pipeline: ORAC-DR at the JCMT
NASA Astrophysics Data System (ADS)
Jenness, Tim; Economou, Frossie
The ORAC data reduction pipeline, developed for UKIRT, has been designed to be a completely general approach to writing data reduction pipelines. This generality has enabled the JCMT to adapt the system for use with SCUBA with minimal development time using the existing SCUBA data reduction algorithms (Surf).
NASA Technical Reports Server (NTRS)
Guo, Tong-Yi; Hwang, Chyi; Shieh, Leang-San
1994-01-01
This paper deals with the multipoint Cauer matrix continued-fraction expansion (MCFE) for model reduction of linear multi-input multi-output (MIMO) systems with various numbers of inputs and outputs. A salient feature of the proposed MCFE approach to model reduction of MIMO systems with square transfer matrices is its equivalence to the matrix Pade approximation approach. The Cauer second form of the ordinary MCFE for a square transfer function matrix is generalized in this paper to a multipoint and nonsquare-matrix version. An interesting connection of the multipoint Cauer MCFE method to the multipoint matrix Pade approximation method is established. Also, algorithms for obtaining the reduced-degree matrix-fraction descriptions and reduced-dimensional state-space models from a transfer function matrix via the multipoint Cauer MCFE algorithm are presented. Practical advantages of using the multipoint Cauer MCFE are discussed and a numerical example is provided to illustrate the algorithms.
Algorithm and program for information processing with the filin apparatus
NASA Technical Reports Server (NTRS)
Gurin, L. S.; Morkrov, V. S.; Moskalenko, Y. I.; Tsoy, K. A.
1979-01-01
The reduction of spectral radiation data from space sources is described. The algorithm and program for identifying segments of information obtained from the Filin telescope-spectrometer on Salyut-4 are presented. The information segments represent suspected X-ray sources. The proposed algorithm is an algorithm of the lowest level. Following evaluation, information free of uninformative segments is subject to further processing with algorithms of a higher level. The language used is FORTRAN 4.
MuLoG, or How to Apply Gaussian Denoisers to Multi-Channel SAR Speckle Reduction?
Deledalle, Charles-Alban; Denis, Loic; Tabti, Sonia; Tupin, Florence
2017-09-01
Speckle reduction is a longstanding topic in synthetic aperture radar (SAR) imaging. Since most current and planned SAR imaging satellites operate in polarimetric, interferometric, or tomographic modes, SAR images are multi-channel and speckle reduction techniques must jointly process all channels to recover polarimetric and interferometric information. The distinctive nature of SAR signal (complex-valued, corrupted by multiplicative fluctuations) calls for the development of specialized methods for speckle reduction. Image denoising is a very active topic in image processing with a wide variety of approaches and many denoising algorithms available, almost always designed for additive Gaussian noise suppression. This paper proposes a general scheme, called MuLoG (MUlti-channel LOgarithm with Gaussian denoising), to include such Gaussian denoisers within a multi-channel SAR speckle reduction technique. A new family of speckle reduction algorithms can thus be obtained, benefiting from the ongoing progress in Gaussian denoising, and offering several speckle reduction results often displaying method-specific artifacts that can be dismissed by comparison between results.
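The central trick can be illustrated in a few lines (a toy sketch, not the MuLoG algorithm itself, which operates on complex multi-channel covariance data with a plug-in variational scheme): a log transform turns multiplicative speckle into approximately additive noise, any Gaussian denoiser is applied in the log domain, and the result is mapped back with exp. A 3-tap moving average stands in for the Gaussian denoiser here.

```python
import math

# Log-domain plug-in denoising: intensity I = R * s (multiplicative
# speckle) becomes log I = log R + log s (approximately additive), where
# an off-the-shelf Gaussian denoiser can be applied before exponentiating.

def moving_average(x):
    """Stand-in Gaussian denoiser: 3-tap mean with edge replication."""
    padded = [x[0]] + list(x) + [x[-1]]
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3.0
            for i in range(len(x))]

def log_domain_denoise(intensity, denoiser=moving_average):
    log_img = [math.log(v) for v in intensity]   # multiplicative -> additive
    smoothed = denoiser(log_img)                 # any Gaussian denoiser fits here
    return [math.exp(v) for v in smoothed]       # back to the intensity domain
```

Because `denoiser` is just a callable, any Gaussian denoiser can be swapped in without touching the speckle-specific transform, which is the modularity the paper's scheme exploits.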
Ojo, Joseph O; Abdullah, Laila; Evans, James; Reed, Jon Mike; Montague, Hannah; Mullan, Michael J; Crawford, Fiona C
2014-04-01
Gulf War illness (GWI) is a currently untreatable multi-symptom disorder experienced by 1990-1991 Persian Gulf War (GW) veterans. The characteristic hallmarks of GWI include cognitive dysfunction, tremors, migraine, and psychological disturbances such as depression and anxiety. Meta-analyses of epidemiological studies have consistently linked these symptomatic profiles to the combined exposure of GW agents such as organophosphate-based and pyrethroid-based pesticides (e.g. chlorpyrifos (CPF) and permethrin (PER) respectively) and the prophylactic use of pyridostigmine bromide (PB) as a treatment against neurotoxins. Due to the multi-symptomatic presentation of this illness and the lack of available autopsy tissue from GWI patients, very little is currently known about the distinct early pathological profile implicated in GWI (including its influence on synaptic function and aspects of neurogenesis). In this study, we used preclinical models of GW agent exposure to investigate whether 6-month-old mice exposed to CPF alone, or a combined dose of CPF, PB and PER daily for 10 days, demonstrate any notable pathological changes in hippocampal, cortical (motor, piriform) or amygdalar morphometry. We report that at an acute post-exposure time point (after 3 days), both exposures resulted in the impairment of synaptic integrity (reducing synaptophysin levels) in the CA3 hippocampal region and altered neuronal differentiation in the dentate gyrus (DG), demonstrated by a significant reduction in doublecortin positive cells. Both exposures also significantly increased astrocytic GFAP immunoreactivity in the piriform cortex, motor cortex and the basolateral amygdala and this was accompanied by an increase in (basal) brain acetylcholine (ACh) levels. There was no evidence of microglial activation or structural deterioration of principal neurons in these regions following exposure to CPF alone or in combination with PB and PER. 
Evidence of subtle microvascular injury was demonstrated by the reduction of platelet endothelial cell adhesion molecule (PECAM)-1 levels in CPF+PB+PER exposed group compared to control. These data support early (subtle) neurotoxic effects on the brain following exposure to GW agents. © 2013 Japanese Society of Neuropathology.
User Instructions for the EPIC-2 Code.
1986-09-01
MATERIAL CARDS FOR SOLIDS INPUT DATA...failure of the elements must be achieved by the eroding interface algorithm, it is important that EFAIL (a material property) be much greater than ERODE...If left blank (DFRAC = 0) the factor will be set to DFRAC = 1.0. EFAIL = Equivalent plastic strain (true) which, if exceeded, will totally fail the element
Reduction from cost-sensitive ordinal ranking to weighted binary classification.
Lin, Hsuan-Tien; Li, Ling
2012-05-01
We present a reduction framework from ordinal ranking to binary classification. The framework consists of three steps: extracting extended examples from the original examples, learning a binary classifier on the extended examples with any binary classification algorithm, and constructing a ranker from the binary classifier. Based on the framework, we show that a weighted 0/1 loss of the binary classifier upper-bounds the mislabeling cost of the ranker, both error-wise and regret-wise. Our framework allows not only the design of good ordinal ranking algorithms based on well-tuned binary classification approaches, but also the derivation of new generalization bounds for ordinal ranking from known bounds for binary classification. In addition, our framework unifies many existing ordinal ranking algorithms, such as perceptron ranking and support vector ordinal regression. When compared empirically on benchmark data sets, some of our newly designed algorithms enjoy advantages in terms of both training speed and generalization performance over existing algorithms. In addition, the newly designed algorithms lead to better cost-sensitive ordinal ranking performance, as well as improved listwise ranking performance.
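The three-step reduction described in this abstract (extract extended examples, train a binary classifier, construct a ranker) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the helper names and the uniform absolute-cost weighting are assumptions.

```python
import numpy as np

def to_extended(X, y, K):
    """Turn ordinal examples (rank y in 1..K) into weighted binary examples.

    For each example x and each threshold k in 1..K-1, emit an extended
    example (x, k) labeled +1 if y > k else -1.  Under the absolute cost,
    every extended example receives unit weight (illustrative choice).
    """
    Xe, ye, we = [], [], []
    for x, rank in zip(X, y):
        for k in range(1, K):
            Xe.append(np.append(x, k))       # feature vector extended with threshold id
            ye.append(1 if rank > k else -1)
            we.append(1.0)
    return np.array(Xe), np.array(ye), np.array(we)

def to_rank(binary_predict, x, K):
    """Construct a ranker: predicted rank = 1 + number of thresholds passed."""
    return 1 + sum(binary_predict(np.append(x, k)) > 0 for k in range(1, K))

# toy check: a binary classifier that is perfect on the extended examples
# recovers the original ranks
X = [np.array([0.2]), np.array([0.8])]
y = [1, 3]
Xe, ye, we = to_extended(X, y, K=3)
oracle = dict(zip(map(tuple, Xe), ye))
rank = to_rank(lambda z: oracle[tuple(z)], X[1], K=3)
```

With a real weighted binary learner in place of the toy oracle, the ranker's mislabeling cost is bounded by the classifier's weighted 0/1 loss, which is the paper's central guarantee.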
Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia
2013-02-01
The objective of this study was to reduce metal-induced streak artifact on oral and maxillofacial x-ray computed tomography (CT) images by developing the fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data of an artifact-free image. Second, images were processed by the successive iterative restoration method where projection data were generated from reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization algorithm (OS-EM) was examined. Also, small region of interest (ROI) setting and reverse processing were applied for improving performance. Both algorithms reduced artifacts instead of slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriments. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.
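The maximum likelihood-expectation maximization (ML-EM) update and its ordered-subsets variant (OS-EM) mentioned here can be sketched on a toy linear system; the subset partition, iteration counts and the two-pixel system below are illustrative assumptions, not the study's CT implementation.

```python
import numpy as np

def mlem(A, y, n_iter):
    """ML-EM: multiplicative update x <- x * A^T(y / Ax) / A^T 1, with x >= 0."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])            # sensitivity image (column sums)
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)    # measured / estimated projections
        x = x * (A.T @ ratio) / sens
    return x

def osem(A, y, n_subsets, n_iter):
    """OS-EM: apply the ML-EM update to row subsets in turn (faster per pass)."""
    x = np.ones(A.shape[1])
    subsets = [np.arange(s, A.shape[0], n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:
            As, ys = A[rows], y[rows]
            sens = As.T @ np.ones(len(rows))
            x = x * (As.T @ (ys / np.maximum(As @ x, 1e-12))) / np.maximum(sens, 1e-12)
    return x

# toy system: four noiseless projections of a two-pixel object
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [0.5, 1.0]])
x_true = np.array([2.0, 3.0])
y = A @ x_true
x_mlem = mlem(A, y, n_iter=500)
x_osem = osem(A, y, n_subsets=2, n_iter=200)
```

On noiseless, consistent data both iterations converge to the true object; OS-EM reaches a given accuracy in fewer full passes over the data, which is the processing-time advantage the abstract reports.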
Harm reduction in the US: a movement for change.
Greig, A
The War on Drugs in the United States has polarized the debate on how to deal effectively with drug use and prevention and makes it difficult to form an agenda to address the harm of drug use. Harm-reduction activists and drug-user groups need to establish common ground to develop programs acceptable to all parties. The harm-reduction approach is based on the premise that adverse consequences of a harmful act, drug use in this case, can be mitigated without necessarily reducing consumption. Needle exchange programs are a good example of this approach. There are 100 such programs in the United States, and the programs are seen as an effective means of reducing HIV transmission. However, the programs remain politically sensitive and Federal funding is outlawed. The War on Drugs programs can conflict with HIV prevention programs; programs that might reduce the incidence of HIV infection but do not criminalize or stigmatize drug use are rarely socially acceptable. In the U.S., about half of all new HIV cases can be attributed to drug use. One-third of the increase in prison populations since 1980 is a consequence of the number of drug-law violators in the prison system. The impact of moral conservatism and how the drug laws are affected by class, race, and gender are discussed. Groups involved with combating drug use and preventing HIV transmission will need to form alliances to develop programs mutually beneficial to their audiences.
AWARE - The Automated EUV Wave Analysis and REduction algorithm
NASA Astrophysics Data System (ADS)
Ireland, J.; Inglis, A. R.; Shih, A. Y.; Christe, S.; Mumford, S.; Hayes, L. A.; Thompson, B. J.
2016-10-01
Extreme ultraviolet (EUV) waves are large-scale propagating disturbances observed in the solar corona, frequently associated with coronal mass ejections and flares. Since their discovery, over two hundred papers discussing their properties, causes and physics have been published. However, their fundamental nature and the physics of their interactions with other solar phenomena are still not understood. To further the understanding of EUV waves, and their relation to other solar phenomena, we have constructed the Automated Wave Analysis and REduction (AWARE) algorithm for the detection of EUV waves over the full Sun. The AWARE algorithm is based on a novel image processing approach to isolating the bright wavefront of the EUV wave as it propagates across the corona. AWARE detects the presence of a wavefront, and measures the distance, velocity and acceleration of that wavefront across the Sun. Results from AWARE are compared to results from other algorithms for some well-known EUV wave events. Suggestions are also given for further refinements to the basic algorithm presented here.
Aissa, Joel; Boos, Johannes; Sawicki, Lino Morris; Heinzler, Niklas; Krzymyk, Karl; Sedlmair, Martin; Kröpil, Patric; Antoch, Gerald; Thomas, Christoph
2017-11-01
The purpose of this study was to evaluate the impact of three novel iterative metal artefact reduction (iMAR) algorithms on image quality and artefact degree in chest CT of patients with a variety of thoracic metallic implants. 27 postsurgical patients with thoracic implants who underwent clinical chest CT between March and May 2015 in clinical routine were retrospectively included. Images were retrospectively reconstructed with standard weighted filtered back projection (WFBP) and with three iMAR algorithms (iMAR-Algo1 = Cardiac algorithm, iMAR-Algo2 = Pacemaker algorithm and iMAR-Algo3 = ThoracicCoils algorithm). The subjective and objective image quality was assessed. Averaged over all artefacts, artefact degree was significantly lower for iMAR-Algo1 (58.9 ± 48.5 HU), iMAR-Algo2 (52.7 ± 46.8 HU) and iMAR-Algo3 (51.9 ± 46.1 HU) compared with WFBP (91.6 ± 81.6 HU, p < 0.01 for all). All iMAR reconstructed images showed significantly lower artefacts (p < 0.01) compared with WFBP, while there was no significant difference among the iMAR algorithms. iMAR-Algo2 and iMAR-Algo3 reconstructions decreased mild and moderate artefacts compared with WFBP and iMAR-Algo1 (p < 0.01). All three iMAR algorithms led to a significant reduction of metal artefacts and an increase in overall image quality compared with WFBP in chest CT of patients with metallic implants in subjective and objective analysis. iMAR-Algo2 and iMAR-Algo3 were best for mild artefacts; iMAR-Algo1 was superior for severe artefacts. Advances in knowledge: Iterative MAR led to significant artefact reduction and increased image quality compared with WFBP in CT after implantation of thoracic devices. Adjusting iMAR algorithms to patients' metallic implants can help to improve image quality in CT.
Naehle, Claas P; Hechelhammer, Lukas; Richter, Heiko; Ryffel, Fabian; Wildermuth, Simon; Weber, Johannes
To evaluate the effectiveness and clinical utility of a metal artifact reduction (MAR) image reconstruction algorithm for the reduction of high-attenuation object (HAO)-related image artifacts. Images were quantitatively evaluated for image noise (noiseSD and noiserange) and qualitatively for artifact severity, gray-white-matter delineation, and diagnostic confidence with conventional reconstruction and after applying a MAR algorithm. Metal artifact reduction reduces noiseSD and noiserange (median [interquartile range]) at the level of HAO in 1-cm distance compared with conventional reconstruction (noiseSD: 60.0 [71.4] vs 12.8 [16.1] and noiserange: 262.0 [236.8] vs 72.0 [28.3]; P < 0.0001). Artifact severity (reader 1 [mean ± SD]: 1.1 ± 0.6 vs 2.4 ± 0.5, reader 2: 0.8 ± 0.6 vs 2.0 ± 0.4) at level of HAO and diagnostic confidence (reader 1: 1.6 ± 0.7 vs 2.6 ± 0.5, reader 2: 1.0 ± 0.6 vs 2.3 ± 0.7) significantly improved with MAR (P < 0.0001). Metal artifact reduction did not affect gray-white-matter delineation. Metal artifact reduction effectively reduces image artifacts caused by HAO and significantly improves diagnostic confidence without worsening gray-white-matter delineation.
Ballistic Missile Defense Final Programmatic Environmental Impact Statement
1994-10-01
included: the need for BMD; budget allocations; procedural problems related to NEPA; nuclear weapon dangers; arms reductions; and potential contravention...2.6.2 Technology Alternatives: 2.6.2.1 Directed Energy Weapons; 2.6.2.2 Nuclear...national defense strategy of mutually assured destruction to keep conflicts from escalating beyond conventional warfare to nuclear war. In 1955, the
Steering a Steady Course in the South China Sea
2017-10-27
Performing organization: Joint Military Operations Department, Naval War College, 686 Cushing
Demonstrating the Environmental & Economic Cost-Benefits of Reusing DoD’s Pre-World War II Buildings
2013-04-01
Table IV-2: Summary Results PO1, NPV of Life Cycle Costs without Factoring GHGs; Table IV-3: Summary Results PO1, NPV of Life Cycle Costs with Monetized GHGs; Table IV-4: Construction Cost Comparisons; Table IV-6: Summary Results PO2, GHG Reductions in Metric Tons by Scope
1993-04-01
...his support base and will to fight. Lastly, as pointed out, the Army stumbled onto the tactic of neutralizing the enemy leadership. However, little importance
NASA Astrophysics Data System (ADS)
Ryzhikov, I. S.; Semenkin, E. S.; Akhmedova, Sh A.
2017-02-01
A novel order reduction method for linear time invariant systems is described. The method is based on reducing the initial problem to an optimization one, using the proposed model representation, and solving the problem with an efficient optimization algorithm. The proposed method allows all the parameters of the lower-order model to be identified and, by construction, provides the model with the required steady state. As a powerful optimization tool, the meta-heuristic Co-Operation of Biology-Related Algorithms was used. Experimental results proved that the proposed approach outperforms other approaches and that the reduced order model achieves a high level of accuracy.
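As a minimal illustration of the reduction-as-optimization idea, one can fit a first-order model whose steady state is fixed in advance to that of the full system. The second-order plant, the grid search (standing in for the Co-Operation of Biology-Related Algorithms meta-heuristic) and the error criterion are all assumptions for the sketch.

```python
import numpy as np

# full model: step response of G(s) = 6 / ((s + 2)(s + 3)), steady-state gain 1
t = np.linspace(0.0, 5.0, 500)
y_full = 1.0 - 3.0*np.exp(-2.0*t) + 2.0*np.exp(-3.0*t)

# reduced model: first order, with the steady-state gain fixed to that of the
# full model by construction, y_r(t) = 1 - exp(-t/tau); only tau is free
taus = np.linspace(0.05, 3.0, 600)
errors = [np.sum((y_full - (1.0 - np.exp(-t/tc)))**2) for tc in taus]
tau = taus[int(np.argmin(errors))]            # least-squares fit over the grid
y_red = 1.0 - np.exp(-t/tau)
```

Fixing the reduced model's steady-state gain inside the model representation, rather than asking the optimizer to recover it, mirrors the abstract's point that the required steady state is provided by definition.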
Malaria survey and malaria control detachments in the South-West Pacific Area in World War 2.
Crocker, Denton W
2009-01-01
Malaria among troops in the South-West Pacific Area (SWPA) in World War 2 affected the military effort to the degree that special units were formed to combat it. These malaria survey detachments (MSDs) and malaria control detachments (MCDs) were self-contained and so could move quickly to wherever their services were needed. In SWPA by 25 September 1944 there were 32 MSDs and 65 MCDs. Tables of organization called for 11 enlisted men in MSDs and MCDs, two officers in MSDs and one in MCDs. Detachments served throughout the SWPA. Detailed records of the 31st MSD show that in addition to antimalarial efforts it worked at control of scrub typhus, dengue and venereal disease, at reduction of rat populations and in experimental work involving DDT and schistosomiasis. Specific locations of the 31st MSD were New Guinea (3 sites), Morotai, Leyte, Mindoro, Okinawa and Japan. The detachment served overseas for 21 months. Experience in combating malaria in SWPA in World War 2 points to the need for better and continuous training of both medical and line officers in malaria prevention and control.
Psychological and socio-demographic data contributing to the resilience of holocaust survivors.
Fossion, Pierre; Leys, Christophe; Kempenaers, Chantal; Braun, Stéphanie; Verbanck, Paul; Linkowski, Paul
2014-01-01
The authors provide a within-group study of 65 Former Hidden Children (FHC; i.e., Jewish youths who spent World War II in various hideaway shelters across Nazi-occupied Europe) evaluated by the Hopkins Symptom Check List (HSCL), the Sense of Coherence Scale (SOCS), the Resilience Scale for Adults (RSA), and a socio-demographic questionnaire. The aim of the present article is to address the sensitization model of resilience (consisting in a reduction of resistance to additional stress due to previous exposure to trauma) and to identify the family, psychological, and socio-demographic characteristics that predict resilience among a group of FHC. The RSA score is negatively correlated with the number of post-war traumas and positively correlated with the SOCS score. FHC who have children present a higher RSA score than FHC who have no children. RSA global score negatively and significantly predicts HSCL score. In a global multivariate model, and in accordance with the sensitization model, the number of post-war traumas negatively predicts the RSA score. Moreover, the SOCS score and the number of children positively predict it. Therapeutic implications are discussed, limitations are considered, and further investigations are proposed.
Onyut, Lamaro P; Neuner, Frank; Schauer, Elisabeth; Ertl, Verena; Odenwald, Michael; Schauer, Maggie; Elbert, Thomas
2005-01-01
Background: Little data exists on the effectiveness of psychological interventions for children with posttraumatic stress disorder (PTSD) that has resulted from exposure to war or conflict-related violence, especially in non-industrialized countries. We created and evaluated the efficacy of KIDNET, a child-friendly version of Narrative Exposure Therapy (NET), as a short-term treatment for children. Methods: Six Somali children suffering from PTSD, aged 12–17 years and resident in a refugee settlement in Uganda, were treated with four to six individual sessions of KIDNET by expert clinicians. Symptoms of PTSD and depression were assessed pre-treatment, post-treatment and at nine months follow-up using the CIDI Sections K and E. Results: Important symptom reduction was evident immediately after treatment and treatment outcomes were sustained at the 9-month follow-up. All patients completed therapy, reported functioning gains and could be helped to reconstruct their traumatic experiences into a narrative with the use of illustrative material. Conclusions: NET may be safe and effective to treat children with war-related PTSD in the setting of refugee settlements in developing countries. PMID:15691374
Stepakoff, Shanee; Hubbard, Jon; Katoh, Maki; Falk, Erika; Mikulu, Jean-Baptiste; Nkhoma, Potiphar; Omagwa, Yuvenalis
2006-11-01
From 1999 to 2005, the Minneapolis-based Center for Victims of Torture (CVT) served Liberian and Sierra Leonean survivors of torture and war living in the refugee camps of Guinea. A psychosocial program was developed with 3 main goals: (a) to provide mental health care, (b) to train local refugee counselors, and (c) to raise community awareness about war trauma and mental health. Utilizing paraprofessional counselors under the close, on-site supervision of expatriate clinicians, the treatment model blended elements of Western and indigenous healing. The core component consisted of relationship-based supportive group counseling. Clinical interventions were guided by a 3-stage model of trauma recovery (safety, mourning, reconnection), which was adapted to the realities of the refugee camp setting. Over 4,000 clients were provided with counseling, and an additional 15,000 were provided with other supportive services. Results from follow-up assessments indicated significant reductions in trauma symptoms and increases in measures of daily functioning and social support during and after participation in groups. The treatment model developed in Guinea served as the basis for CVT's ongoing work with survivors in Sierra Leone and Liberia. ((c) 2006 APA, all rights reserved).
Emergence of competition and cooperation in an evolutionary resource war model
NASA Astrophysics Data System (ADS)
Lamantia, Fabio
2018-05-01
In this paper we introduce a simple punishment scheme in the 'great fish war' model with many players. An imitative process regulates how a coalition of cooperators is dynamically updated over time. An intuitive effect of adding sanctions is that they could enlarge the possible sustainable coalitions. However, the evolution toward full cooperation can be sustained by a punishment scheme provided that a critical mass of agents enforces cooperation at the beginning of the game. Moreover, we show the presence of thresholds in sanctions or in the cost for punishing such that if these thresholds are trespassed then dramatic reductions in the resource level and in the agents' welfare may occur as a consequence of free riding effects. We show by some examples that these phenomena are due to the presence of tipping points in the model.
A Fourier dimensionality reduction model for big data interferometric imaging
NASA Astrophysics Data System (ADS)
Vijay Kartik, S.; Carrillo, Rafael E.; Thiran, Jean-Philippe; Wiaux, Yves
2017-06-01
Data dimensionality reduction in radio interferometry can provide savings of computational resources for image reconstruction through reduced memory footprints and lighter computations per iteration, which is important for the scalability of imaging methods to the big data setting of the next-generation telescopes. This article sheds new light on dimensionality reduction from the perspective of the compressed sensing theory and studies its interplay with imaging algorithms designed in the context of convex optimization. We propose a post-gridding linear data embedding to the space spanned by the left singular vectors of the measurement operator, providing a dimensionality reduction below image size. This embedding preserves the null space of the measurement operator and hence its sampling properties are also preserved in light of the compressed sensing theory. We show that this can be approximated by first computing the dirty image and then applying a weighted subsampled discrete Fourier transform to obtain the final reduced data vector. This Fourier dimensionality reduction model ensures a fast implementation of the full measurement operator, essential for any iterative image reconstruction method. The proposed reduction also preserves the independent and identically distributed Gaussian properties of the original measurement noise. For convex optimization-based imaging algorithms, this is key to justify the use of the standard ℓ2-norm as the data fidelity term. Our simulations confirm that this dimensionality reduction approach can be leveraged by convex optimization algorithms with no loss in imaging quality relative to reconstructing the image from the complete visibility data set. Reconstruction results in simulation settings with no direction dependent effects or calibration errors show promising performance of the proposed dimensionality reduction. Further tests on real data are planned as an extension of the current work. 
MATLAB code implementing the proposed reduction method is available on GitHub.
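A toy 1-D analogue of the proposed embedding (compute the dirty image, then apply a weighted subsampled DFT) can be written down directly. The random Fourier sampling and the per-mode whitening weights below are illustrative assumptions, not the article's measurement setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 200                           # image size and number of visibilities (m > n)

F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary 1-D DFT matrix
rows = rng.integers(0, n, size=m)        # randomly sampled Fourier modes
Phi = F[rows]                            # m x n measurement operator (subsampled DFT)

x = rng.normal(size=n)                   # toy sky signal
y = Phi @ x                              # full visibility vector, size m

# step 1: dirty image (backprojection Phi^H y)
dirty = Phi.conj().T @ y
# step 2: weighted subsampled DFT of the dirty image -> reduced data of size <= n
counts = np.bincount(rows, minlength=n).astype(float)
keep = counts > 0                        # discard unsampled Fourier modes
weights = 1.0 / counts[keep]             # per-mode whitening weights (assumed form)
reduced = weights * (F @ dirty)[keep]
```

For this toy operator the embedding shrinks the data from m = 200 visibilities to at most n = 64 coefficients with no information loss: the reduced vector coincides with the per-mode average of the visibilities, illustrating a dimensionality reduction below image size.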
A fast efficient implicit scheme for the gasdynamic equations using a matrix reduction technique
NASA Technical Reports Server (NTRS)
Barth, T. J.; Steger, J. L.
1985-01-01
An efficient implicit finite-difference algorithm for the gasdynamic equations utilizing matrix reduction techniques is presented. A significant reduction in arithmetic operations is achieved without loss of the stability characteristics or generality found in the Beam and Warming approximate factorization algorithm. Steady-state solutions to the conservative Euler equations in generalized coordinates are obtained for transonic flows and used to show that the method offers computational advantages over the conventional Beam and Warming scheme. Existing Beam and Warming codes can be retrofitted with minimal effort. The theoretical extension of the matrix reduction technique to the full Navier-Stokes equations in Cartesian coordinates is presented in detail. Linear stability, using a Fourier stability analysis, is demonstrated and discussed for the one-dimensional Euler equations.
Noise reduction and image enhancement using a hardware implementation of artificial neural networks
NASA Astrophysics Data System (ADS)
David, Robert; Williams, Erin; de Tremiolles, Ghislain; Tannhof, Pascal
1999-03-01
In this paper, we present a neural based solution developed for noise reduction and image enhancement using the ZISC, an IBM hardware processor which implements the Restricted Coulomb Energy algorithm and the K-Nearest Neighbor algorithm. Artificial neural networks present the advantages of processing time reduction in comparison with classical models, adaptability, and the weighted property of pattern learning. The goal of the developed application is image enhancement in order to restore old movies (noise reduction, focus correction, etc.), to improve digital television images, or to treat images which require adaptive processing (medical images, spatial images, special effects, etc.). Image results show a quantitative improvement over the noisy image as well as the efficiency of this system. Further enhancements are being examined to improve the output of the system.
2017-01-01
Collaborative beamforming (CBF) with a finite number of collaborating nodes (CNs) produces sidelobes that are highly dependent on the collaborating nodes' locations. The sidelobes cause interference and affect the communication rate of unintended receivers located within the transmission range. Nulling is not possible in an open-loop CBF since the collaborating nodes are unable to receive feedback from the receivers. Hence, overall sidelobe reduction is required to avoid interference in the directions of the unintended receivers. However, the impact of sidelobe reduction on the capacity improvement at the unintended receiver has never been reported in previous works. In this paper, the effect of peak sidelobe (PSL) reduction in CBF on the capacity of an unintended receiver is analyzed. Three meta-heuristic optimization methods are applied to perform PSL minimization, namely the genetic algorithm (GA), particle swarm optimization (PSO) and a simplified version of PSO called the weightless swarm algorithm (WSA). An average reduction of 20 dB in PSL alongside 162% capacity improvement is achieved in the worst case scenario with the WSA optimization. It is discovered that PSL minimization in the CBF provides capacity improvement at an unintended receiver only if the CBF cluster is small and dense. PMID:28464000
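A minimal global-best PSO over per-node amplitude weights illustrates the PSL-minimization step. The 1-D node geometry, the mainlobe exclusion width and the swarm parameters are assumptions for the sketch, not the paper's setup (which also covers the GA and the WSA).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
pos = rng.uniform(-2.0, 2.0, size=N)          # node positions in wavelengths (1-D toy)
theta = np.linspace(-np.pi/2, np.pi/2, 721)
steer = 0.0                                   # target direction (broadside)

def peak_sidelobe_db(w):
    """Peak sidelobe level in dB relative to the mainlobe, for weights w >= 0."""
    af = np.abs(np.exp(2j*np.pi*np.outer(np.sin(theta) - np.sin(steer), pos)) @ w)
    main = max(af.max(), 1e-12)
    mask = np.abs(theta - steer) > 0.1        # exclude the mainlobe region (assumed width)
    return 20*np.log10(max(af[mask].max(), 1e-12) / main)

def pso(f, dim, n_particles=30, n_iter=80):
    """Minimal global-best particle swarm minimizing f over [0, 1]^dim."""
    x = rng.uniform(0, 1, (n_particles, dim))
    x[0] = 1.0                                # seed one particle with uniform weights
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()]
    for _ in range(n_iter):
        r1, r2 = rng.uniform(size=x.shape), rng.uniform(size=x.shape)
        v = 0.7*v + 1.5*r1*(pbest - x) + 1.5*r2*(g - x)   # inertia + cognitive + social
        x = np.clip(x + v, 0, 1)
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()]
    return g, float(pval.min())

uniform_psl = peak_sidelobe_db(np.ones(N))
w_opt, opt_psl = pso(peak_sidelobe_db, N)
```

Because one particle starts at uniform weighting, the swarm's best can only match or improve on the unoptimized PSL; lowering the PSL is what raises the capacity at unintended receivers in the paper's analysis.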
NASA Astrophysics Data System (ADS)
Laguda, Edcer Jerecho
Purpose: Computed Tomography (CT) is one of the standard diagnostic imaging modalities for the evaluation of a patient's medical condition. In comparison to other imaging modalities such as Magnetic Resonance Imaging (MRI), CT is a fast acquisition imaging device with higher spatial resolution and higher contrast-to-noise ratio (CNR) for bony structures. CT images are presented through a gray scale of independent values in Hounsfield units (HU). High HU-valued materials represent higher density. High density materials, such as metal, tend to erroneously increase the HU values around it due to reconstruction software limitations. This problem of increased HU values due to metal presence is referred to as metal artefacts. Hip prostheses, dental fillings, aneurysm clips, and spinal clips are a few examples of metal objects that are of clinical relevance. These implants create artefacts such as beam hardening and photon starvation that distort CT images and degrade image quality. This is of great significance because the distortions may cause improper evaluation of images and inaccurate dose calculation in the treatment planning system. Different algorithms are being developed to reduce these artefacts for better image quality for both diagnostic and therapeutic purposes. However, very limited information is available about the effect of artefact correction on dose calculation accuracy. This research study evaluates the dosimetric effect of metal artefact reduction algorithms on severe artefacts on CT images. This study uses Gemstone Spectral Imaging (GSI)-based MAR algorithm, projection-based Metal Artefact Reduction (MAR) algorithm, and the Dual-Energy method. Materials and Methods: The Gemstone Spectral Imaging (GSI)-based and SMART Metal Artefact Reduction (MAR) algorithms are metal artefact reduction protocols embedded in two different CT scanner models by General Electric (GE), and the Dual-Energy Imaging Method was developed at Duke University. 
All three approaches were applied in this research for dosimetric evaluation on CT images with severe metal artefacts. The first part of the research used a water phantom with four iodine syringes. Two sets of plans, multi-arc plans and single-arc plans, using the Volumetric Modulated Arc therapy (VMAT) technique were designed to avoid or minimize influences from high-density objects. The second part of the research used projection-based MAR Algorithm and the Dual-Energy Method. Calculated Doses (Mean, Minimum, and Maximum Doses) to the planning treatment volume (PTV) were compared and homogeneity index (HI) calculated. Results: (1) Without the GSI-based MAR application, a percent error between mean dose and the absolute dose ranging from 3.4-5.7% per fraction was observed. In contrast, the error was decreased to a range of 0.09-2.3% per fraction with the GSI-based MAR algorithm. There was a percent difference ranging from 1.7-4.2% per fraction between with and without using the GSI-based MAR algorithm. (2) A range of 0.1-3.2% difference was observed for the maximum dose values, 1.5-10.4% for minimum dose difference, and 1.4-1.7% difference on the mean doses. Homogeneity indexes (HI) ranging from 0.068-0.065 for dual-energy method and 0.063-0.141 with projection-based MAR algorithm were also calculated. Conclusion: (1) Percent error without using the GSI-based MAR algorithm may deviate as high as 5.7%. This error invalidates the goal of Radiation Therapy to provide a more precise treatment. Thus, GSI-based MAR algorithm was desirable due to its better dose calculation accuracy. (2) Based on direct numerical observation, there was no apparent deviation between the mean doses of different techniques but deviation was evident on the maximum and minimum doses. The HI for the dual-energy method almost achieved the desirable null values. 
In conclusion, the Dual-Energy method gave better dose calculation accuracy to the planning treatment volume (PTV) for images with metal artefacts than with or without GE MAR Algorithm.
An Evaluation of Pixel-Based Methods for the Detection of Floating Objects on the Sea Surface
NASA Astrophysics Data System (ADS)
Borghgraef, Alexander; Barnich, Olivier; Lapierre, Fabian; Van Droogenbroeck, Marc; Philips, Wilfried; Acheroy, Marc
2010-12-01
Ship-based automatic detection of small floating objects on an agitated sea surface remains a hard problem. Our main concern is the detection of floating mines, which proved a real threat to shipping in confined waterways during the first Gulf War, but applications include salvaging, search-and-rescue operations, and perimeter or harbour defense. Detection in infrared (IR) is challenging because a rough sea is seen as a dynamic background of moving objects with size, shape, and temperature similar to those of a floating mine. In this paper we have applied a selection of background subtraction algorithms to the problem, and we show that recent algorithms such as ViBe and behaviour subtraction, which take into account spatial and temporal correlations within the dynamic scene, significantly outperform the more conventional parametric techniques, with only little prior assumption about the physical properties of the scene.
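For contrast with ViBe and behaviour subtraction, a conventional parametric baseline of the kind the paper compares against, a per-pixel running Gaussian background model, can be sketched as follows. The threshold, adaptation rate and the toy scene are assumptions.

```python
import numpy as np

def running_gaussian_masks(frames, alpha=0.05, k=2.5):
    """Per-pixel running Gaussian background model (a conventional parametric
    technique; ViBe and behaviour subtraction replace it with sample-based,
    spatio-temporally correlated models).  Returns a foreground mask per frame."""
    mu = frames[0].astype(float)
    var = np.full_like(mu, 16.0)              # assumed initial variance
    masks = []
    for f in frames:
        f = f.astype(float)
        fg = (f - mu)**2 > (k**2) * var       # pixel deviates > k sigma from model
        bg = ~fg
        mu[bg] = (1 - alpha)*mu[bg] + alpha*f[bg]   # adapt only where background
        var[bg] = (1 - alpha)*var[bg] + alpha*(f[bg] - mu[bg])**2
        masks.append(fg)
    return masks

# toy scene: a flat 'sea' in which a bright 'floating object' appears at frame 20
frames = [np.zeros((8, 8)) for _ in range(30)]
for f in frames[20:]:
    f[4, 4] = 50.0
masks = running_gaussian_masks(frames)
```

On a genuinely agitated sea the background itself moves, which is exactly where this per-pixel model breaks down and the spatio-temporally correlated methods evaluated in the paper pull ahead.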
On computation of Gröbner bases for linear difference systems
NASA Astrophysics Data System (ADS)
Gerdt, Vladimir P.
2006-04-01
In this paper, we present an algorithm for computing Gröbner bases of linear ideals in a difference polynomial ring over a ground difference field. The input difference polynomials generating the ideal are also assumed to be linear. The algorithm is an adaptation to difference ideals of our polynomial algorithm based on Janet-like reductions.
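SymPy's `groebner` computes Gröbner bases for polynomial (not difference) ideals, but the linear polynomial case it handles is the direct analogue of the linear difference ideals treated in this paper; a sketch, with the Janet-like reduction of the paper played here by ordinary polynomial division:

```python
from sympy import groebner, symbols

x, y = symbols('x y')

# Groebner basis of the linear polynomial ideal <x + y, x - y> under lex order
gb = groebner([x + y, x - y], x, y, order='lex')
basis = list(gb.exprs)
```

Over the rationals the reduced basis is simply {x, y}, reflecting that the two linear generators already determine both variables.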
Hybrid Reduced Order Modeling Algorithms for Reactor Physics Calculations
NASA Astrophysics Data System (ADS)
Bang, Youngsuk
Reduced order modeling (ROM) has been recognized as an indispensable approach when the engineering analysis requires many executions of high fidelity simulation codes. Examples of such engineering analyses in nuclear reactor core calculations, representing the focus of this dissertation, include the functionalization of the homogenized few-group cross-sections in terms of the various core conditions, e.g. burn-up, fuel enrichment, temperature, etc. This is done via assembly calculations which are executed many times to generate the required functionalization for use in the downstream core calculations. Other examples are sensitivity analysis used to determine important core attribute variations due to input parameter variations, and uncertainty quantification employed to estimate core attribute uncertainties originating from input parameter uncertainties. ROM constructs a surrogate model with quantifiable accuracy which can replace the original code for subsequent engineering analysis calculations. This is achieved by reducing the effective dimensionality of the input parameter, the state variable, or the output response spaces, by projection onto the so-called active subspaces. Confining the variations to the active subspace allows one to construct an ROM model of reduced complexity which can be solved more efficiently. This dissertation introduces a new algorithm to render reduction with the reduction errors bounded based on a user-defined error tolerance which represents the main challenge of existing ROM techniques. Bounding the error is the key to ensuring that the constructed ROM models are robust for all possible applications. Providing such error bounds represents one of the algorithmic contributions of this dissertation to the ROM state-of-the-art. Recognizing that ROM techniques have been developed to render reduction at different levels, e.g. 
the input parameter space, the state space, and the response space, this dissertation offers a set of novel hybrid ROM algorithms which can be readily integrated into existing methods and offer higher computational efficiency and defensible accuracy of the reduced models. For example, the snapshots ROM algorithm is hybridized with the range-finding algorithm to render reduction in the state space, e.g. the flux in reactor calculations. In another implementation, the perturbation theory used to calculate first-order derivatives of responses with respect to parameters is hybridized with a forward sensitivity analysis approach to render reduction in the parameter space. Reduction at the state and parameter spaces can be combined to render further reduction at the interface between different physics codes in a multi-physics model, with the accuracy quantified in a similar manner to the single-physics case. Although the proposed algorithms are generic in nature, we focus here on radiation transport models used in support of the design and analysis of nuclear reactor cores. In particular, we focus on replacing the traditional assembly calculations by ROM models to facilitate the generation of homogenized cross-sections for downstream core calculations. The implication is that assembly calculations could be done instantaneously, therefore precluding the need for the expensive evaluation of the few-group cross-sections for all possible core conditions. Given the generic nature of the algorithms, we make an effort to introduce the material in a general form to allow non-nuclear engineers to benefit from this work.
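The state-space reduction described above can be illustrated with a minimal randomized range-finder sketch. This is a generic illustration, not the dissertation's actual code: the tolerance-driven stopping rule is a simplified stand-in for its error-bounded reduction, and the snapshot data are synthetic.

```python
import numpy as np

def random_range_finder(snapshots, tol, rng=None):
    """Build an orthonormal basis Q for the (numerically) active subspace of
    the snapshot matrix, growing the basis until a fresh random probe of the
    range has residual below `tol`."""
    rng = np.random.default_rng(rng)
    n, m = snapshots.shape
    Q = np.zeros((n, 0))
    for _ in range(m):
        omega = rng.standard_normal(m)
        y = snapshots @ omega          # random combination of snapshots
        y -= Q @ (Q.T @ y)             # remove the part already captured
        if np.linalg.norm(y) < tol:
            break                      # probe is already well represented
        Q = np.column_stack([Q, y / np.linalg.norm(y)])
    return Q

# toy check: snapshots that live in a 3-dimensional subspace
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
Q = random_range_finder(A, tol=1e-8, rng=1)
residual = np.linalg.norm(A - Q @ (Q.T @ A))
```

Confining subsequent solves to the span of `Q` is what makes the reduced model cheaper than the full-order code.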
Nuclear weapons modernizations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kristensen, Hans M.
This article reviews the nuclear weapons modernization programs underway in the world's nine nuclear weapons states. It concludes that despite significant reductions in overall weapons inventories since the end of the Cold War, the pace of reductions is slowing - four of the nuclear weapons states are even increasing their arsenals, and all the nuclear weapons states are busy modernizing their remaining arsenals in what appears to be a dynamic and counterproductive nuclear competition. The author questions whether perpetual modernization combined with no specific plan for the elimination of nuclear weapons is consistent with the nuclear Non-Proliferation Treaty and concludes that new limits on nuclear modernizations are needed.
NASA Astrophysics Data System (ADS)
Morgan, Ashraf
The need for an accurate and reliable way of measuring patient dose in multi-row detector computed tomography (MDCT) has increased significantly. This research focused on the possibility of measuring CT dose in air to estimate the Computed Tomography Dose Index (CTDI) for routine quality control purposes. A new elliptical CTDI phantom that better represents human geometry was manufactured to investigate the effect of subject shape on measured CTDI. Monte Carlo simulation was utilized to determine the dose distribution in comparison to the traditional cylindrical CTDI phantom. This research also investigated the effect of Siemens Healthcare's newly developed iMAR (iterative metal artifact reduction) algorithm; an arthroplasty phantom was designed and manufactured for that purpose. The design of new phantoms was part of the research, as they mimic human geometry better than the existing CTDI phantom. The standard CTDI phantom is a right cylinder that does not adequately represent the geometry of the majority of the patient population. Any dose reduction algorithm that is used during a patient scan will not be engaged when scanning the CTDI phantom, so a better-designed phantom will allow the use of dose reduction algorithms when measuring dose, which leads to better dose estimation and/or better understanding of dose delivery. Doses from a standard CTDI phantom and the newly designed phantoms were compared to doses measured in air. Iterative reconstruction is a promising technique for MDCT dose reduction and artifact correction. Iterative reconstruction algorithms have been developed to address specific imaging tasks, as is the case with Iterative Metal Artifact Reduction (iMAR), which was developed by Siemens and is to be used with the company's future computed tomography platform. The goal of iMAR is to reduce metal artifacts when imaging patients with metal implants and to recover the CT numbers of tissues adjacent to the implant.
This research evaluated iMAR's capability to recover CT numbers and reduce noise. The use of iMAR should also allow using a lower tube voltage instead of the 140 kVp frequently used to image patients with shoulder implants. The evaluations of image quality and dose reduction were carried out using an arthroplasty phantom.
NASA Technical Reports Server (NTRS)
Hunter, H. E.; Amato, R. A.
1972-01-01
The results are presented of the application of Avco Data Analysis and Prediction Techniques (ADAPT) to the derivation of new algorithms for the prediction of future sunspot activity. The ADAPT-derived algorithms show a factor of 2 to 3 reduction in the expected 2-sigma errors in the estimates of the 81-day running average of the Zurich sunspot numbers. The report presents: (1) the best estimates for sunspot cycles 20 and 21, (2) a comparison of the ADAPT performance with conventional techniques, and (3) specific approaches to further reduction in the errors of estimated sunspot activity and to recovery of earlier sunspot historical data. The ADAPT programs are used both to derive regression algorithms for prediction of the entire 11-year sunspot cycle from the preceding two cycles and to derive extrapolation algorithms for extrapolating a given sunspot cycle based on any available portion of the cycle.
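The regression idea above - predicting a cycle from features of the two preceding cycles - can be sketched with ordinary least squares. The peak amplitudes below are illustrative placeholders, not the ADAPT data, and the single-feature model is far simpler than the report's algorithms.

```python
import numpy as np

# Hypothetical peak amplitudes (smoothed sunspot number) for ten cycles;
# the regression predicts a cycle's peak from the two preceding peaks.
peaks = np.array([97.9, 140.5, 74.6, 87.9, 64.2,
                  105.4, 78.1, 119.2, 151.8, 201.3])

# Regression matrix: predict peaks[k] from an intercept, peaks[k-1], peaks[k-2].
X = np.column_stack([np.ones(len(peaks) - 2), peaks[1:-1], peaks[:-2]])
y = peaks[2:]
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

# Extrapolate the next (unseen) cycle's peak from the last two observed peaks.
pred = coeffs @ np.array([1.0, peaks[-1], peaks[-2]])
```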
A Laplacian based image filtering using switching noise detector.
Ranjbaran, Ali; Hassan, Anwar Hasni Abu; Jafarpour, Mahboobe; Ranjbaran, Bahar
2015-01-01
This paper presents a Laplacian-based image filtering method. Using a local noise estimator function in an energy-functional minimization scheme, we show that the Laplacian, well known as an edge detection operator, can also be used for noise removal. The algorithm can be implemented on a 3x3 window and is easily tuned by the number of iterations. Image denoising is reduced to decrementing each pixel's value by its Laplacian, weighted by the local noise estimator. The only parameter controlling smoothness is the number of iterations. The noise reduction quality of the introduced method is evaluated and compared with classic algorithms such as Wiener and Total Variation based filters for Gaussian noise, and also with the state-of-the-art BM3D method on several images. The algorithm is simple, fast and comparable with many classic denoising algorithms for Gaussian noise.
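The iteration described above can be sketched in a few lines. This is a minimal stand-in, not the paper's method: the normalized-Laplacian weight below is an assumed placeholder for the paper's local noise estimator.

```python
import numpy as np

def laplacian_denoise(img, iterations=10, strength=0.2):
    """Iteratively damp each pixel by its 3x3 (4-neighbour) Laplacian,
    weighted by a simple local estimate standing in for the paper's
    noise estimator."""
    out = img.astype(float).copy()
    for _ in range(iterations):
        p = np.pad(out, 1, mode='edge')
        # 3x3 Laplacian: 4*center minus the four axial neighbours
        lap = (4 * p[1:-1, 1:-1]
               - p[:-2, 1:-1] - p[2:, 1:-1]
               - p[1:-1, :-2] - p[1:-1, 2:])
        # assumed weight: normalized magnitude of the Laplacian itself
        w = np.abs(lap) / (np.abs(lap).max() + 1e-12)
        out -= strength * w * lap     # pixel value reduced by weighted Laplacian
    return out

rng = np.random.default_rng(0)
clean = np.full((32, 32), 100.0)
noisy = clean + rng.normal(0, 10, clean.shape)
denoised = laplacian_denoise(noisy, iterations=20)
```

Increasing `iterations` increases smoothness, matching the paper's single tuning parameter.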
NASA Astrophysics Data System (ADS)
Gu, Hui; Zhu, Hongxia; Cui, Yanfeng; Si, Fengqi; Xue, Rui; Xi, Han; Zhang, Jiayu
2018-06-01
An integrated combustion optimization scheme is proposed that jointly considers coal-fired boiler combustion efficiency and outlet NOx emissions. Continuous attribute discretization and reduction are handled as optimization preparation by the E-Cluster and C_RED methods, in which the segmentation numbers do not need to be provided in advance and can adapt continuously to the data characteristics. In order to obtain multi-objective results with a clustering method for mixed data, a modified K-prototypes algorithm is then proposed. This algorithm can be divided into two stages: a K-prototypes algorithm with self-adapting cluster number, and clustering for multi-objective optimization. Field tests were carried out at a 660 MW coal-fired boiler to provide real data as a case study for controllable attribute discretization and reduction in the boiler system and for obtaining optimization parameters under the multi-objective rule [max η_b, min y_NOx].
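The mixed-data dissimilarity at the heart of K-prototypes (squared Euclidean distance on numeric attributes plus a gamma-weighted categorical mismatch count) can be sketched as a single assignment step. This is a generic illustration with toy data, not the paper's modified algorithm.

```python
import numpy as np

def kprototypes_assign(num, cat, centers_num, centers_cat, gamma=1.0):
    """One assignment step of K-prototypes: cost is squared Euclidean
    distance on numeric attributes plus gamma times the number of
    categorical mismatches; each sample goes to its cheapest prototype."""
    n, k = num.shape[0], centers_num.shape[0]
    cost = np.zeros((n, k))
    for j in range(k):
        d_num = ((num - centers_num[j]) ** 2).sum(axis=1)
        d_cat = (cat != centers_cat[j]).sum(axis=1)
        cost[:, j] = d_num + gamma * d_cat
    return cost.argmin(axis=1)

# toy mixed data: two numeric attributes, one categorical attribute
num = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]])
cat = np.array([['a'], ['a'], ['b'], ['b']])
labels = kprototypes_assign(
    num, cat,
    centers_num=np.array([[0.1, 0.05], [5.1, 5.0]]),
    centers_cat=np.array([['a'], ['b']]))
```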
Do All Roads Lead to Rome? ("or" Reductions for Dummy Travelers)
ERIC Educational Resources Information Center
Kilpelainen, Pekka
2010-01-01
Reduction is a central ingredient of computational thinking, and an important tool in algorithm design, in computability theory, and in complexity theory. Reduction has been recognized to be a difficult topic for students to learn. Previous studies on teaching reduction have concentrated on its use in special courses on the theory of computing. As…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukherjee, S; Yao, W
2015-06-15
Purpose: To study different noise-reduction algorithms and to improve the image quality of low-dose cone beam CT for patient positioning in radiation therapy. Methods: In low-dose cone-beam CT, the reconstructed image is contaminated with excessive quantum noise. In this study, three well-developed noise reduction algorithms, namely a) the penalized weighted least square (PWLS) method, b) the split-Bregman total variation (TV) method, and c) the compressed sensing (CS) method, were studied and applied to images of a computer-simulated "Shepp-Logan" phantom and a physical CATPHAN phantom. Up to 20% additive Gaussian noise was added to the Shepp-Logan phantom. The CATPHAN phantom was scanned by a Varian OBI system at 100 kVp, 4 ms and 20 mA. To compare the performance of these algorithms, the peak signal-to-noise ratio (PSNR) of the denoised images was computed. Results: The algorithms were shown to have the potential to reduce the noise level of low-dose CBCT images. For the Shepp-Logan phantom, an improvement in PSNR of 2 dB, 3.1 dB and 4 dB was observed using PWLS, TV and CS respectively, while for CATPHAN the improvement was 1.2 dB, 1.8 dB and 2.1 dB, respectively. Conclusion: Penalized weighted least square, total variation and compressed sensing methods were studied and compared for reducing the noise of a simulated phantom and a physical phantom scanned by low-dose CBCT. The techniques have shown promising results for noise reduction in terms of PSNR improvement. However, reducing the noise without compromising the smoothness and resolution of the image needs more extensive research.
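The PSNR figure of merit used above is straightforward to compute; a minimal sketch on synthetic images (not the study's data):

```python
import numpy as np

def psnr(reference, test, peak=None):
    """Peak signal-to-noise ratio in dB between a reference image and a
    test (e.g. denoised) image: 10*log10(peak^2 / MSE)."""
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    mse = np.mean((reference - test) ** 2)
    if peak is None:
        peak = reference.max()
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((8, 8)); ref[2:6, 2:6] = 255.0
noisy = ref + 8.0                       # uniform error of 8 grey levels
# halving the error should raise PSNR by 20*log10(2) ~ 6.02 dB
improvement = psnr(ref, ref + 4.0) - psnr(ref, noisy)
```

A "PSNR improvement" such as the 4 dB reported for CS is exactly this kind of before/after difference.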
Stable orthogonal local discriminant embedding for linear dimensionality reduction.
Gao, Quanxue; Ma, Jingjie; Zhang, Hailin; Gao, Xinbo; Liu, Yamin
2013-07-01
Manifold learning is widely used in machine learning and pattern recognition. However, manifold learning considers only the similarity of samples belonging to the same class and ignores the within-class variation of the data, which impairs the generalization and stability of the algorithms. To address this, we construct an adjacency graph to model the intraclass variation that characterizes the most important properties, such as the diversity of patterns, and then incorporate this diversity into the discriminant objective function for linear dimensionality reduction. Finally, we introduce an orthogonality constraint on the basis vectors and propose an orthogonal algorithm called stable orthogonal local discriminant embedding. Experimental results on several standard image databases demonstrate the effectiveness of the proposed dimensionality reduction approach.
Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat
2013-01-01
Systems with high-dimensional input spaces require long processing times and large memory usage. Most attribute selection algorithms suffer from limits on input dimensionality and from information storage problems. These problems are eliminated by the developed feature reduction software, which uses a new modified selection mechanism that adds middle-region solution candidates. The hybrid system software is constructed for reducing the input attributes of systems with a large number of input variables. The designed software also supports the roulette wheel selection mechanism. Linear order crossover is used as the recombination operator. In genetic-algorithm-based soft computing methods, locking into local solutions is also a problem, which is eliminated by the developed software. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system have been reduced to reducts (reduced input attribute sets) with seven, six, and five elements. It can be seen from the obtained results that the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms on the urological test data.
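Roulette wheel selection, mentioned above as the supported selection mechanism, can be sketched generically; this is a textbook version, not the developed software's modified mechanism.

```python
import random

def roulette_wheel_select(population, fitnesses, rng):
    """Pick an individual with probability proportional to its fitness:
    spin a pointer on a wheel whose slice widths are the fitness values."""
    total = sum(fitnesses)
    pick = rng.uniform(0, total)
    cumulative = 0.0
    for individual, fit in zip(population, fitnesses):
        cumulative += fit
        if pick <= cumulative:
            return individual
    return population[-1]   # guard against floating-point round-off

rng = random.Random(42)
pop = ['A', 'B', 'C']
fits = [1.0, 1.0, 8.0]      # 'C' should be chosen ~80% of the time
counts = {p: 0 for p in pop}
for _ in range(10000):
    counts[roulette_wheel_select(pop, fits, rng)] += 1
```

High-fitness reducts are thus sampled more often, while low-fitness ones retain a nonzero chance, which helps avoid locking into local solutions.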
On-the-fly reduction of open loops
NASA Astrophysics Data System (ADS)
Buccioni, Federico; Pozzorini, Stefano; Zoller, Max
2018-01-01
Building on the open-loop algorithm we introduce a new method for the automated construction of one-loop amplitudes and their reduction to scalar integrals. The key idea is that the factorisation of one-loop integrands in a product of loop segments makes it possible to perform various operations on-the-fly while constructing the integrand. Reducing the integrand on-the-fly, after each segment multiplication, the construction of loop diagrams and their reduction are unified in a single numerical recursion. In this way we entirely avoid objects with high tensor rank, thereby reducing the complexity of the calculations in a drastic way. Thanks to the on-the-fly approach, which is applied also to helicity summation and for the merging of different diagrams, the speed of the original open-loop algorithm can be further augmented in a very significant way. Moreover, addressing spurious singularities of the employed reduction identities by means of simple expansions in rank-two Gram determinants, we achieve a remarkably high level of numerical stability. These features of the new algorithm, which will be made publicly available in a forthcoming release of the OpenLoops program, are particularly attractive for NLO multi-leg and NNLO real-virtual calculations.
Aissa, J; Thomas, C; Sawicki, L M; Caspers, J; Kröpil, P; Antoch, G; Boos, J
2017-05-01
To investigate the value of dedicated computed tomography (CT) iterative metal artefact reduction (iMAR) algorithms in patients after spinal instrumentation. Post-surgical spinal CT images of 24 patients performed between March 2015 and July 2016 were retrospectively included. Images were reconstructed with standard weighted filtered back projection (WFBP) and with two dedicated iMAR algorithms (iMAR-Algo1, adjusted to spinal instrumentations and iMAR-Algo2, adjusted to large metallic hip implants) using a medium smooth kernel (B30f) and a sharp kernel (B70f). Frequencies of density changes were quantified to assess objective image quality. Image quality was rated subjectively by evaluating the visibility of critical anatomical structures including the central canal, the spinal cord, neural foramina, and vertebral bone. Both iMAR algorithms significantly reduced artefacts from metal compared with WFBP (p<0.0001). Results of subjective image analysis showed that both iMAR algorithms led to an improvement in visualisation of soft-tissue structures (median iMAR-Algo1=3; interquartile range [IQR]:1.5-3; iMAR-Algo2=4; IQR: 3.5-4) and bone structures (iMAR-Algo1=3; IQR:3-4; iMAR-Algo2=4; IQR:4-5) compared to WFBP (soft tissue: median 2; IQR: 0.5-2 and bone structures: median 2; IQR: 1-3; p<0.0001). Compared with iMAR-Algo1, objective artefact reduction and subjective visualisation of soft-tissue and bone structures were improved with iMAR-Algo2 (p<0.0001). Both iMAR algorithms reduced artefacts compared with WFBP, however, the iMAR algorithm with dedicated settings for large metallic implants was superior to the algorithm specifically adjusted to spinal implants. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Artifact reduction of different metallic implants in flat detector C-arm CT.
Hung, S-C; Wu, C-C; Lin, C-J; Guo, W-Y; Luo, C-B; Chang, F-C; Chang, C-Y
2014-07-01
Flat detector CT has been increasingly used as a follow-up examination after endovascular intervention. Metal artifact reduction has been successfully demonstrated in coil mass cases, but only in a small series. We attempted to objectively and subjectively evaluate the feasibility of metal artifact reduction with various metallic objects and coil lengths. We retrospectively reprocessed the flat detector CT data of 28 patients (15 men, 13 women; mean age, 55.6 years) after they underwent endovascular treatment (20 coiling ± stent placement, 6 liquid embolizers) or shunt drainage (n = 2) between January 2009 and November 2011 by using a metal artifact reduction correction algorithm. We measured CT value ranges and noise by using region-of-interest methods, and 2 experienced neuroradiologists rated the degrees of improved imaging quality and artifact reduction by comparing uncorrected and corrected images. After we applied the metal artifact reduction algorithm, the CT value ranges and the noise were substantially reduced (1815.3 ± 793.7 versus 231.7 ± 95.9 and 319.9 ± 136.6 versus 45.9 ± 14.0; both P < .001) regardless of the types of metallic objects and various sizes of coil masses. The rater study achieved an overall improvement of imaging quality and artifact reduction (85.7% and 78.6% of cases by 2 raters, respectively), with the greatest improvement in the coiling group, moderate improvement in the liquid embolizers, and the smallest improvement in ventricular shunting (overall agreement, 0.857). The metal artifact reduction algorithm substantially reduced artifacts and improved the objective image quality in every studied case. It also allowed improved diagnostic confidence in most cases. © 2014 by American Journal of Neuroradiology.
Nam, Haewon
2017-01-01
We propose a novel metal artifact reduction (MAR) algorithm for CT images that completes a corrupted sinogram along the metal trace region. When metal implants are located inside the field of view, they create a barrier to the transmitted X-ray beam due to the high attenuation of metals, which significantly degrades image quality. To fill in the metal trace region efficiently, the proposed algorithm uses multiple prior images with residual error compensation in sinogram space. The multiple prior images are generated by applying a recursive active contour (RAC) segmentation algorithm to the pre-corrected image acquired by MAR with linear interpolation, where the number of prior images is controlled by RAC depending on the object complexity. A sinogram basis is then acquired by forward projection of the prior images. The metal trace region of the original sinogram is replaced by a linear combination of the prior-image sinograms. An additional correction in the metal trace region is then performed to compensate for the residual errors caused by non-ideal data acquisition conditions. The performance of the proposed MAR algorithm is compared with MAR with linear interpolation and the normalized MAR algorithm using simulated and experimental data. The results show that the proposed algorithm outperforms the other MAR algorithms, especially when the object is complex with multiple bone objects. PMID:28604794
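The baseline the proposed method builds on - MAR with linear interpolation across the metal trace - can be sketched per projection row. This is a generic illustration on a toy sinogram, not the paper's implementation.

```python
import numpy as np

def mar_linear_interp(sinogram, metal_mask):
    """Baseline MAR: for each projection angle (row), replace detector
    bins flagged as metal trace by linear interpolation from the
    unaffected neighbouring bins."""
    corrected = sinogram.astype(float).copy()
    bins = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):
        trace = metal_mask[i]
        if trace.any():
            corrected[i, trace] = np.interp(
                bins[trace], bins[~trace], sinogram[i, ~trace])
    return corrected

# toy sinogram: a smooth ramp per angle, with a metal "spike" in mid bins
sino = np.tile(np.linspace(0.0, 10.0, 11), (4, 1))
mask = np.zeros_like(sino, dtype=bool)
mask[:, 4:7] = True
sino_corrupt = sino.copy()
sino_corrupt[mask] = 100.0              # attenuation spike from metal
fixed = mar_linear_interp(sino_corrupt, mask)
```

On this linear toy profile the interpolation recovers the trace exactly; the paper's contribution is replacing this crude fill with prior-image sinograms plus residual compensation.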
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tabares-Velasco, P. C.; Christensen, C.; Bianchi, M.
2012-08-01
Phase change materials (PCM) represent a potential technology to reduce peak loads and HVAC energy consumption in residential buildings. This paper summarizes NREL efforts to obtain accurate energy simulations when PCMs are modeled in residential buildings: the overall methodology to verify and validate the Conduction Finite Difference (CondFD) and PCM algorithms in EnergyPlus is presented in this study. It also shows preliminary results for three residential building enclosure technologies containing PCM: PCM-enhanced insulation, PCM-impregnated drywall and thin PCM layers. The results are compared based on predicted peak reduction and energy savings using two algorithms in EnergyPlus: the PCM and Conduction Finite Difference (CondFD) algorithms.
Electronic Energy Transfer in New Polymer Nanocomposite Assemblies
1994-07-13
New light-harvesting thin film supramolecular assemblies...be suppression or reduction of exciplex formation between excited donor molecules and ground state acceptor molecules that may lead to nonradiative...nonradiative excited state decay exists other than EET.33 One possibility for this nonradiative and non-EET pathway is exciplex formation between the
The Critical Capability: CORDS District Advisor Teams in Vietnam
2012-03-07
...ultimately governed by simple guidance, "win the war, do good and avoid evil."100 According to DSA Peter Tomsen, all of these [local] forces rose
2010-05-04
during the Vietnam Conflict. 67 David A. Kolb, Experiential Learning: Experience as the Source of Learning and Development. (Upper Saddle River, NJ...Essentials for Military Applications. Newport Paper #10. Newport: Newport War College Press. 1996. Kolb, David A. Experiential Learning: Experience...learning over analysis. A broad review of design theory suggests that four techniques - rapid prototyping, generative analysis, use of experts, and
AMERICA’S BASE NETWORK: CREDIBLE DETERRENCE
2017-04-06
reduction occurring after World War I.13 Both eras saw decades of instability and warfare replace the economic stability provided by the US and its global...is still in effect today and is based on shared interests in security and stability. The alliance was formally established via an Economic and...Abstract This essay looks at historical US basing strategy in order to understand the geopolitical and economic complexity facing diplomacy of future
U.S. National Security: A Selected Bibliography
2013-12-01
and Access Services Division, U.S. Army War College Library, by sending an e-mail message to USAWC.LibraryR@us.army.mil, or by phoning (717) 245...ProQuest May, Peter J., Ashley E. Jochim, and Joshua Sapotichne. "Constructing Homeland Security: An Anemic Policy Regime." Policy Studies Journal 39...Layton G., Jr. Reposturing the Force: Implications of Budget Reductions and Regional Rebalancing. Strategy Research Project. Carlisle Barracks: U.S
Grey Wolf based control for speed ripple reduction at low speed operation of PMSM drives.
Djerioui, Ali; Houari, Azeddine; Ait-Ahmed, Mourad; Benkhoris, Mohamed-Fouad; Chouder, Aissa; Machmoum, Mohamed
2018-03-01
Speed ripple at low-speed, high-torque operation of Permanent Magnet Synchronous Machine (PMSM) drives is one of the major issues to be addressed. The presented work proposes an efficient PMSM speed controller based on the Grey Wolf (GW) algorithm to ensure high-performance control for speed ripple reduction at low-speed operation. The main idea of the proposed control algorithm is a specific objective function that incorporates the advantage of the fast optimization process of the GW optimizer. The role of the GW optimizer is to find the optimal input controls that satisfy the speed tracking requirements. The synthesis methodology of the proposed control algorithm is detailed, and the feasibility and performance of the proposed speed controller are confirmed by simulation and experimental results. The GW algorithm is a model-free controller, and the parameters of its objective function are easy to tune. The GW controller is compared to a PI controller on a real test bench, and the superiority of the GW-based algorithm is highlighted. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
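The core Grey Wolf update - every wolf pulled toward the three current best (alpha, beta, delta) with a linearly shrinking exploration factor - can be sketched on a toy minimization problem. This is a generic GWO, not the paper's PMSM controller or its objective function.

```python
import numpy as np

def gwo_minimize(f, dim, n_wolves=20, iters=200, bounds=(-10.0, 10.0), seed=0):
    """Minimal Grey Wolf Optimizer: the factor `a` decreases linearly from
    2 to 0, shifting the pack from exploration to exploitation around the
    three best wolves (alpha, beta, delta)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_wolves, dim))
    for t in range(iters):
        order = np.argsort([f(x) for x in X])
        leaders = [X[j].copy() for j in order[:3]]   # alpha, beta, delta
        a = 2.0 * (1.0 - t / iters)
        for i in range(n_wolves):
            candidates = []
            for leader in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a       # small |A|: attack; large: explore
                C = 2.0 * r2
                D = np.abs(C * leader - X[i])
                candidates.append(leader - A * D)
            X[i] = np.clip(np.mean(candidates, axis=0), lo, hi)
    scores = [f(x) for x in X]
    best = int(np.argmin(scores))
    return X[best], scores[best]

# toy objective: sphere function (a placeholder for a speed-tracking cost)
best_x, best_f = gwo_minimize(lambda x: float(np.sum(x ** 2)), dim=3)
```

In the paper's setting, `f` would be the speed-tracking objective and the decision variables the controller inputs.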
A platform for evolving intelligently interactive adversaries.
Fogel, David B; Hays, Timothy J; Johnson, Douglas R
2006-07-01
Entertainment software developers face significant challenges in designing games with broad appeal. One of the challenges concerns creating nonplayer (computer-controlled) characters that can adapt their behavior in light of the current and prospective situation, possibly emulating human behaviors. This adaptation should be inherently novel, unrepeatable, yet within the bounds of realism. Evolutionary algorithms provide a suitable method for generating such behaviors. This paper provides background on the entertainment software industry, and details a prior and current effort to create a platform for evolving nonplayer characters with genetic and behavioral traits within a World War I combat flight simulator.
NASA Astrophysics Data System (ADS)
Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.
2018-05-01
The design of an optimal network of atmospheric monitoring stations for the observation of carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. Two candidate optimisation methods assessed were the evolutionary algorithm: the genetic algorithm (GA), and the deterministic algorithm: the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine in comparison to the more computationally demanding GA routine to optimise the placement of a five-member network of CO2 monitoring sites located in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, the resulting solution had only fractionally lower uncertainty reduction compared with the GA, and at only a quarter of the computational resources used by the lowest specified GA algorithm. The GA solution set showed more inconsistency if the number of iterations or population size was small, and more so for a complex prior flux covariance matrix. If the GA completed with a sub-optimal solution, these solutions were similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances where the GA may outperform the IO. The first scenario considered an established network, where the optimisation was required to add an additional five stations to an existing five-member network. In the second scenario the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction. 
These results suggest that the best use of resources for the network design problem would be spent on improving the prior estimates of the flux uncertainties rather than on running a complex evolutionary optimisation algorithm. The authors recommend that, if time and computational resources allow, multiple optimisation techniques be used as part of a comprehensive suite of sensitivity tests when performing such an optimisation exercise. This will provide a selection of best solutions which can be ranked based on their utility and practicality.
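The incremental optimisation (IO) idea - greedily adding the station with the best marginal gain - can be sketched with a toy surrogate objective. The station gains below are hypothetical, and the best-station-per-region score is a crude stand-in for the study's Bayesian posterior-uncertainty cost.

```python
import itertools

# Hypothetical "uncertainty reduction" each candidate station provides for
# each of four flux regions; the network score takes the best station per
# region (a toy surrogate for the posterior-uncertainty objective).
station_gain = {
    'S1': [0.9, 0.1, 0.0, 0.0],
    'S2': [0.8, 0.2, 0.1, 0.0],
    'S3': [0.0, 0.7, 0.6, 0.0],
    'S4': [0.0, 0.0, 0.5, 0.9],
    'S5': [0.1, 0.3, 0.3, 0.3],
}

def network_score(stations):
    return sum(max(station_gain[s][r] for s in stations) for r in range(4))

def incremental_optimisation(k):
    """IO routine: greedily add the station with the best marginal gain."""
    chosen = []
    while len(chosen) < k:
        best = max((s for s in station_gain if s not in chosen),
                   key=lambda s: network_score(chosen + [s]))
        chosen.append(best)
    return chosen

greedy = incremental_optimisation(3)
exhaustive = max(itertools.combinations(station_gain, 3), key=network_score)
```

On this toy problem the greedy IO solution matches the exhaustive optimum; as the paper notes, on real covariance structures it may fall fractionally short of the global maximum while using far fewer evaluations than a GA.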
Final LDRD Report: Using Linkography of Cyber Attack Patterns to Inform Honeytoken Placement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Robert; Jarocki, John Charles; Fisher, Andrew N
The war to establish cyber supremacy continues, and the literature is crowded with strictly technical cyber security measures. We present the results of a three-year LDRD project using linkography, a methodology new to the field of cyber security; we establish the foundation necessary to track and profile the microbehavior of humans attacking cyber systems. We also propose ways to leverage this understanding to influence and deceive these attackers. We studied the science of linkography, applied it to the cyber security domain, implemented a software package to manage linkographs, generated the preprocessing blocks necessary to ingest raw data, produced machine learning models, created ontology refinement algorithms and prototyped a web application for researchers and practitioners to apply linkography. Machine learning produced some of our key results: We trained and validated multinomial classifiers with a real-world data set and predicted the attacker's next category of action with 86 to 98% accuracy; dimension reduction techniques indicated that the linkography-based features were among the most powerful. We also discovered ontology refinement algorithms that advanced the state of the art in linkography in general and cyber security in particular. We conclude that linkography is a viable tool for cyber security; we look forward to expanding our work to other data sources and using our prediction results to enable adversary deception techniques. Acknowledgements Thanks to Phil Bennett, Michael Bernard, Jeffrey Bigg, Marshall Daniels, Tyler Dean, David Duggan, Carson Kent, Josh Maine, Marci McBride, Nick Peterson, Katie Rodhouse, Asael Sorenson, Roger Suppona, Scott Watson and David Zage. We acknowledge support for this work by the LDRD Program at Sandia National Laboratories.
Sandia National Laboratories is a multi-mission laboratory operated by Sandia Corporation for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
Metal artefact reduction with cone beam CT: an in vitro study
Bechara, BB; Moore, WS; McMahan, CA; Noujeim, M
2012-01-01
Background Metal in a patient's mouth has been shown to cause artefacts that can interfere with the diagnostic quality of cone beam CT. Recently, a manufacturer has made an algorithm and software available which reduces metal streak artefact (Picasso Master 3D® machine; Vatech, Hwaseong, Republic of Korea). Objectives The purpose of this investigation was to determine whether or not the metal artefact reduction algorithm was effective and enhanced the contrast-to-noise ratio. Methods A phantom was constructed incorporating three metallic beads and three epoxy resin-based bone substitutes to simulate bone next to metal. The phantom was placed in the centre of the field of view and at the periphery. 10 data sets were acquired at 50–90 kVp. The images obtained were analysed using a public domain software ImageJ (NIH Image, Bethesda, MD). Profile lines were used to evaluate grey level changes and area histograms were used to evaluate contrast. The contrast-to-noise ratio was calculated. Results The metal artefact reduction option reduced grey value variation and increased the contrast-to-noise ratio. The grey value varied least when the phantom was in the middle of the volume and the metal artefact reduction was activated. The image quality improved as the peak kilovoltage increased. Conclusion Better images of a phantom were obtained when the metal artefact reduction algorithm was used. PMID:22241878
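The contrast-to-noise ratio evaluated above can be computed from two regions of interest; a minimal sketch on a synthetic image (not the study's phantom data):

```python
import numpy as np

def cnr(image, roi_signal, roi_background):
    """Contrast-to-noise ratio from two regions of interest, each given as
    a (row_slice, col_slice) pair: |mean difference| over background noise."""
    sig = image[roi_signal]
    bg = image[roi_background]
    return abs(sig.mean() - bg.mean()) / bg.std()

rng = np.random.default_rng(0)
img = rng.normal(50.0, 5.0, (64, 64))      # background: mean 50, sd 5
img[10:20, 10:20] += 40.0                  # a high-contrast insert
value = cnr(img,
            (slice(10, 20), slice(10, 20)),     # signal ROI
            (slice(40, 60), slice(40, 60)))     # background ROI
```

A metal artefact reduction option that lowers the grey-value variation (the denominator) while preserving the mean contrast raises this ratio, as reported above.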
Affine Projection Algorithm with Improved Data-Selective Method Using the Condition Number
NASA Astrophysics Data System (ADS)
Ban, Sung Jun; Lee, Chang Woo; Kim, Sang Woo
Recently, a data-selective method has been proposed to achieve low misalignment in the affine projection algorithm (APA) by keeping the condition number of the input data matrix small. We present an improved method and a complexity-reduction algorithm for the APA with the data-selective method. Experimental results show that the proposed algorithm has lower misalignment and a lower condition number for the input data matrix than both the conventional APA and the APA with the previous data-selective method.
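Keeping the condition number of the input data matrix small is the core of the data-selective rule. A sketch for the 2x2 case, using the closed-form eigenvalues of AᵀA to get the singular-value ratio (the acceptance threshold here is an assumption, not a value from the paper):

```python
import math

def cond_2x2(A):
    """Condition number (ratio of singular values) of a 2x2 matrix,
    via the closed-form eigenvalues of the symmetric matrix A^T A."""
    (a, b), (c, d) = A
    p = a*a + c*c          # (A^T A)[0][0]
    q = a*b + c*d          # (A^T A)[0][1] == [1][0]
    r = b*b + d*d          # (A^T A)[1][1]
    t = math.sqrt((p - r)**2 + 4*q*q)
    lmax, lmin = (p + r + t) / 2, (p + r - t) / 2
    return math.inf if lmin <= 0 else math.sqrt(lmax / lmin)

def accept_update(A, threshold=100.0):
    """Data-selective rule (sketch): use the new data block only if the
    input-data matrix stays well conditioned."""
    return cond_2x2(A) < threshold

print(accept_update([[1.0, 0.0], [0.0, 1.0]]))    # well conditioned: True
print(accept_update([[1.0, 1.0], [1.0, 1.0001]])) # nearly singular: False
```

In the actual APA the matrix is built from successive input vectors, so this test decides which input blocks participate in the update.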
McBain, Ryan K; Salhi, Carmel; Hann, Katrina; Kellie, Jim; Kamara, Alimamy; Salomon, Joshua A; Kim, Jane J; Betancourt, Theresa S
2015-12-01
To measure the benefits to household caregivers of a psychotherapeutic intervention for adolescents and young adults living in a war-affected area. Between July 2012 and July 2013, we carried out a randomized controlled trial of the Youth Readiness Intervention--a cognitive-behavioural intervention for war-affected young people who exhibit depressive and anxiety symptoms and conduct problems--in Freetown, Sierra Leone. Overall, 436 participants aged 15-24 years were randomized to receive the intervention (n = 222) or care as usual (n = 214). Household caregivers for the participants in the intervention arm (n = 101) or control arm (n = 103) were interviewed during a baseline survey and again, if available (n = 155), 12 weeks later in a follow-up survey. We used a burden assessment scale to evaluate the burden of care placed on caregivers in terms of emotional distress and functional impairment. The caregivers' mental health--i.e. internalizing, externalizing and prosocial behaviour--was evaluated using the Oxford Measure of Psychosocial Adjustment. Difference-in-differences multiple regression analyses were used, within an intention-to-treat framework, to estimate the treatment effects. Compared with the caregivers of participants of the control group, the caregivers of participants of the intervention group reported greater reductions in emotional distress (scale difference: 0.252; 95% confidence interval, CI: 0.026-0.4782) and greater improvements in prosocial behaviour (scale difference: 0.249; 95% CI: 0.012-0.486) between the two surveys. A psychotherapeutic intervention for war-affected young people can improve the mental health of their caregivers.
NASA Astrophysics Data System (ADS)
Pausata, Francesco S. R.; Lindvall, Jenny; Ekman, Annica M. L.; Svensson, Gunilla
2016-11-01
Here, we use a coupled atmospheric-ocean-aerosol model to investigate the plume development and climate effects of the smoke generated by fires following a regional nuclear war between emerging third-world nuclear powers. We simulate a standard scenario where 5 Tg of black carbon (BC) is emitted over 1 day in the upper troposphere-lower stratosphere. However, it is likely that the emissions from the fires ignited by bomb detonations include a substantial amount of particulate organic matter (POM) and that they last more than 1 day. We therefore test the sensitivity of the aerosol plume and climate system to the BC/POM ratio (1:3, 1:9) and to the emission length (1 day, 1 week, 1 month). We find that in general, an emission length of 1 month substantially reduces the cooling compared to the 1-day case, whereas taking into account POM emissions notably increases the cooling and the reduction of precipitation associated with the nuclear war during the first year following the detonation. Accounting for POM emissions increases the particle size in the short-emission-length scenarios (1 day/1 week), reducing the residence time of the injected particle. While the initial cooling is more intense when including POM emission, the long-lasting effects, while still large, may be less extreme compared to the BC-only case. Our study highlights that the emission altitude reached by the plume is sensitive to both the particle type emitted by the fires and the emission duration. Consequently, the climate effects of a nuclear war are strongly dependent on these parameters.
Estimating rare events in biochemical systems using conditional sampling.
Sundar, V S
2017-01-28
The paper focuses on development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining this probability using brute force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, important sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most of the problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
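The main idea, expressing the rare-event probability as a product of more frequent conditional probabilities, can be shown on a toy problem. The following sketch estimates a standard normal tail probability (not a biochemical system) with fixed-level subset simulation and a modified Metropolis chain; all parameters are illustrative:

```python
import math
import random

def subset_simulation(g, target, n=2000, p0=0.1, seed=7):
    """Estimate P(g(X) >= target) for X ~ N(0,1) as a product of
    conditional probabilities (subset-simulation sketch)."""
    rng = random.Random(seed)
    samples = [rng.gauss(0.0, 1.0) for _ in range(n)]   # level 0: plain MC
    prob = 1.0
    for _ in range(25):                        # cap on the number of levels
        samples.sort(key=g, reverse=True)
        n_keep = int(p0 * n)
        level = g(samples[n_keep - 1])         # intermediate threshold
        if level >= target:                    # rare event reached
            hits = sum(1 for x in samples if g(x) >= target)
            return prob * hits / n
        prob *= p0
        seeds, samples = samples[:n_keep], []
        for s in seeds:                        # modified Metropolis chains
            x = s
            for _ in range(n // n_keep):
                cand = x + rng.gauss(0.0, 1.0)
                # Metropolis acceptance for the standard normal density
                if rng.random() < math.exp((x * x - cand * cand) / 2.0):
                    if g(cand) >= level:       # stay in the conditional region
                        x = cand
                samples.append(x)
    return prob

p = subset_simulation(lambda x: x, target=3.0)
print(p)   # the exact tail probability P(X >= 3) is about 1.35e-3
```

Each level multiplies in a conditional probability near p0, so events of probability 1e-3 and below are reached with a few thousand samples instead of millions.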
Iterative methods used in overlap astrometric reduction techniques do not always converge
NASA Astrophysics Data System (ADS)
Rapaport, M.; Ducourant, C.; Colin, J.; Le Campion, J. F.
1993-04-01
In this paper we prove that the classical Gauss-Seidel type iterative methods used for the solution of the reduced normal equations occurring in overlapping reduction methods of astrometry do not always converge. We exhibit examples of divergence. We then analyze an alternative algorithm proposed by Wang (1985). We prove the consistency of this algorithm and verify that it can be convergent while the Gauss-Seidel method is divergent. We conjecture the convergence of Wang method for the solution of astrometric problems using overlap techniques.
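A concrete instance of the divergence the paper proves: for a symmetric but indefinite 2x2 system, the Gauss-Seidel iteration matrix has spectral radius 4, so the error grows geometrically even though the system has a unique solution (the numbers below are a textbook-style example, not the paper's astrometric normal equations):

```python
def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep for A x = b."""
    x = list(x)
    for i in range(len(b)):
        s = sum(A[i][j] * x[j] for j in range(len(b)) if j != i)
        x[i] = (b[i] - s) / A[i][i]
    return x

def error(x, sol):
    return max(abs(a - b) for a, b in zip(x, sol))

# A is symmetric but not positive definite (eigenvalues 3 and -1); the
# Gauss-Seidel iteration matrix here has eigenvalues 0 and 4, so the
# iteration diverges although A x = b has the unique solution (1, 1).
A, b, sol = [[1.0, 2.0], [2.0, 1.0]], [3.0, 3.0], [1.0, 1.0]
x = [0.0, 0.0]
for k in range(5):
    x = gauss_seidel_step(A, b, x)
    print(k, error(x, sol))   # the error grows by a factor of 4 per sweep
```

This is why convergence must be verified (or an alternative such as Wang's algorithm used) rather than assumed for the reduced normal equations.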
Gene Selection and Cancer Classification: A Rough Sets Based Approach
NASA Astrophysics Data System (ADS)
Sun, Lijun; Miao, Duoqian; Zhang, Hongyun
Identification of informative gene subsets responsible for discerning between available samples of gene expression data is an important task in bioinformatics. Reducts, from rough set theory, correspond to a minimal set of essential genes for discerning samples and are an efficient tool for gene selection. Due to the computational complexity of the existing reduct algorithms, feature ranking is usually used to narrow down the gene space as a first step, and top-ranked genes are selected. In this paper, we define a novel criterion for scoring genes, based on the expression-level difference between classes and the gene's contribution to classification, and present an algorithm for generating all possible reducts from informative genes. The algorithm takes the whole attribute set into account and finds short reducts with a significant reduction in computational complexity. An exploration of this approach on benchmark gene expression data sets demonstrates that it is successful at selecting highly discriminative genes, and the classification accuracy is impressive.
Semisupervised kernel marginal Fisher analysis for face recognition.
Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun
2013-01-01
Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To cope with this problem effectively, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labeled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold adaptive nonparameter kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.
Coffey, Christanne; Serra, John; Goebel, Mat; Espinoza, Sarah; Castillo, Edward; Dunford, James
2018-05-03
A significant increase in false positive ST-elevation myocardial infarction (STEMI) electrocardiogram interpretations was noted after replacement of all of the City of San Diego's 110 monitor-defibrillator units with a new brand. These concerns were brought to the manufacturer and a revised interpretive algorithm was implemented. This study evaluated the effects of a revised interpretation algorithm to identify STEMI when used by San Diego paramedics. Data were reviewed 6 months before and 6 months after the introduction of a revised interpretation algorithm. True-positive and false-positive interpretations were identified. Factors contributing to an incorrect interpretation were assessed and patient demographics were collected. A total of 372 (234 preimplementation, 138 postimplementation) cases met inclusion criteria. There was a significant reduction in false positive STEMI (150 preimplementation, 40 postimplementation; p < 0.001) after implementation. The most common factors resulting in false positive before implementation were right bundle branch block, left bundle branch block, and atrial fibrillation. The new algorithm corrected for these misinterpretations with most postimplementation false positives attributed to benign early repolarization and poor data quality. Subsequent follow-up at 10 months showed maintenance of the observed reduction in false positives. This study shows that introducing a revised 12-lead interpretive algorithm resulted in a significant reduction in the number of false positive STEMI electrocardiogram interpretations in a large urban emergency medical services system. Rigorous testing and standardization of new interpretative software is recommended before introduction into a clinical setting to prevent issues resulting from inappropriate cardiac catheterization laboratory activations. Copyright © 2018 Elsevier Inc. All rights reserved.
Carbon monoxide mixing ratio inference from gas filter radiometer data
NASA Technical Reports Server (NTRS)
Wallio, H. A.; Reichle, H. G., Jr.; Casas, J. C.; Saylor, M. S.; Gormsen, B. B.
1983-01-01
A new algorithm has been developed which permits, for the first time, real time data reduction of nadir measurements taken with a gas filter correlation radiometer to determine tropospheric carbon monoxide concentrations. The algorithm significantly reduces the complexity of the equations to be solved while providing accuracy comparable to line-by-line calculations. The method is based on a regression analysis technique using a truncated power series representation of the primary instrument output signals to infer directly a weighted average of trace gas concentration. The results produced by a microcomputer-based implementation of this technique are compared with those produced by the more rigorous line-by-line methods. This algorithm has been used in the reduction of Measurement of Air Pollution from Satellites, Shuttle, and aircraft data.
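The regression-on-a-truncated-power-series idea can be illustrated generically: fit a low-degree polynomial of the instrument signal to the quantity of interest by solving the normal equations. The data below are synthetic, not MAPS radiometer measurements, and the degree is an arbitrary choice:

```python
def polyfit(xs, ys, deg):
    """Least-squares polynomial fit via the normal equations
    (V^T V) a = V^T y, solved with Gaussian elimination."""
    n = deg + 1
    A = [[sum(x**(i + j) for x in xs) for j in range(n)] for i in range(n)]
    rhs = [sum(y * x**i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    # Back substitution
    a = [0.0] * n
    for i in range(n - 1, -1, -1):
        a[i] = (rhs[i] - sum(A[i][j] * a[j] for j in range(i + 1, n))) / A[i][i]
    return a

# Synthetic calibration: y = 2 + 3x + x^2 is recovered exactly
coeffs = polyfit([0, 1, 2, 3, 4], [2, 6, 12, 20, 30], 2)
print(coeffs)   # approximately [2.0, 3.0, 1.0]
```

In the flight algorithm the regression coefficients are precomputed, so evaluating the truncated series is cheap enough for real-time data reduction on a microcomputer.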
ADART: an adaptive algebraic reconstruction algorithm for discrete tomography.
Maestre-Deusto, F Javier; Scavello, Giovanni; Pizarro, Joaquín; Galindo, Pedro L
2011-08-01
In this paper we suggest an algorithm based on the Discrete Algebraic Reconstruction Technique (DART) which is capable of computing high quality reconstructions from substantially fewer projections than required for conventional continuous tomography. Adaptive DART (ADART) goes a step further than DART in reducing the number of unknowns of the associated linear system, achieving a significant reduction in the pixel error rate of reconstructed objects. The proposed methodology automatically adapts the border definition criterion at each iteration, resulting in a reduction of the number of pixels belonging to the border, and consequently of the number of unknowns in the general algebraic reconstruction linear system to be solved, with this reduction being especially important at the final stage of the iterative process. Experimental results show that reconstruction errors are considerably reduced using ADART when compared to original DART, both in clean and noisy environments.
Effects of secondary loudspeaker properties on broadband feedforward active duct noise control.
Chan, Yum-Ji; Huang, Lixi; Lam, James
2013-07-01
Dependence of the performance of feedforward active duct noise control on secondary loudspeaker parameters is investigated. Noise reduction performance can be improved if the force factor of the secondary loudspeaker is higher. For example, a broadband noise reduction improvement of up to 1.6 dB is predicted by increasing the force factor by 50%. In addition, a secondary loudspeaker with a larger force factor was found to give faster convergence of the adaptive algorithm in experiments. In simulations, noise reduction with an adaptive algorithm is improved by using a secondary loudspeaker with a heavier moving mass. It is predicted that an extra broadband noise reduction of more than 7 dB can be gained using an adaptive filter if the force factor, moving mass and coil inductance of a commercially available loudspeaker are doubled. Methods to increase the force factor beyond those of commercially available loudspeakers are proposed.
Material Life Cycle Analysis for the Reduction of Waste Generation at Military Installations
2017-02-01
…avoid the fossil fuel consumption and land degradation associated with transporting those materials to a landfill. Eco-LCA can also be used to calcu…
From Fog to Friction: The Impact of Network-Enabled Command and Control on Operational Leadership
2012-05-04
…decision-making of operational commanders, affecting their ability to manage the operational level of war. An increasing reliance on NEC2 has… picture (COP) provides the operational commander the ability to coordinate and manage a truly joint force. During OIF, ground forces under attack had…
Leo Szilard Lectureship Award Talk: Nuclear disarmament after the cold war
NASA Astrophysics Data System (ADS)
Podvig, Pavel
2008-04-01
Now that the cold war is long over, our thinking about nuclear weapons and the role that they play in international security has undergone serious changes. The emphasis has shifted from superpower confrontation to nuclear proliferation, the spread of weapon materials, and the dangers of countries developing a nuclear weapon capability under cover of a civilian program. At the same time, the old cold-war dangers, while receded, have not disappeared completely. The United States and Russia keep maintaining thousands of nuclear weapons in their arsenals, some of them at a very high degree of readiness. This situation presents a serious challenge that the international community has to deal with. Although Russia and the United States are taking some steps to reduce their nuclear arsenals, the traditional arms control process has stalled -- the last treaty that was signed in 2002 does not place serious limits on the strategic forces of either side. The START Treaty, which provides a framework for verification and transparency in the reduction of nuclear arsenals, will expire at the end of 2009. Little effort has been undertaken to extend the treaty or renegotiate it. Moreover, in recent years Russia has stepped up efforts to modernize its strategic nuclear forces. The United States has resisted joining the Comprehensive Nuclear Test Ban Treaty and has been working on controversial new nuclear weapon development programs. The U.S. missile defense program makes the dialogue between Russia and the United States even more difficult. The reluctance of Russia and the United States to engage in a discussion about drastic reductions of their nuclear forces undermines the case for nuclear nonproliferation and seriously complicates their efforts to contain the spread of nuclear weapon technologies and expertise.
One of the reasons for the current lack of progress in nuclear disarmament is the contradiction between the diminished role that nuclear weapons play in the security of nuclear weapon states and the inertia of the cold-war institutions that are involved in their development and support. Dealing with this contradiction would require the development of new mechanisms of cooperation between nuclear weapon states and their strong commitment to the cause of nuclear nonproliferation. One important area of cooperation is the development of a framework that would prevent the spread of nuclear materials and technology at a time when an increasing number of countries are turning toward expanded use of nuclear power to cover their energy needs.
Detection of anomaly in human retina using Laplacian Eigenmaps and vectorized matched filtering
NASA Astrophysics Data System (ADS)
Yacoubou Djima, Karamatou A.; Simonelli, Lucia D.; Cunningham, Denise; Czaja, Wojciech
2015-03-01
We present a novel method for automated anomaly detection on autofluorescence data provided by the National Institutes of Health (NIH). This is motivated by the need for new tools to improve the capability of diagnosing macular degeneration in its early stages, track the progression over time, and test the effectiveness of new treatment methods. In previous work, macular anomalies have been detected automatically through multiscale analysis procedures such as wavelet analysis, or through dimensionality reduction algorithms followed by a classification algorithm, e.g., Support Vector Machine. The method that we propose is a Vectorized Matched Filtering (VMF) algorithm combined with Laplacian Eigenmaps (LE), a nonlinear dimensionality reduction algorithm with locality preserving properties. By applying LE, we are able to represent the data in the form of eigenimages, some of which accentuate the visibility of anomalies. We pick significant eigenimages and proceed with the VMF algorithm, which classifies anomalies across all of these eigenimages simultaneously. To evaluate our performance, we compare our method to two other schemes: a matched filtering algorithm based on anomaly detection on single images, and a combination of PCA and VMF. LE combined with the VMF algorithm performs best, yielding a high rate of accurate anomaly detection. This shows the advantage of using a nonlinear approach to represent the data and the effectiveness of VMF, which operates on the images as a data cube rather than as individual images.
Reduced Order Model Basis Vector Generation: Generates Basis Vectors for ROMs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arrighi, Bill
2016-03-03
libROM is a library that implements order reduction via singular value decomposition (SVD) of sampled state vectors. It implements 2 parallel, incremental SVD algorithms and one serial, non-incremental algorithm. It also provides a mechanism for adaptive sampling of basis vectors.
Adaptive Trajectory Prediction Algorithm for Climbing Flights
NASA Technical Reports Server (NTRS)
Schultz, Charles Alexander; Thipphavong, David P.; Erzberger, Heinz
2012-01-01
Aircraft climb trajectories are difficult to predict, and large errors in these predictions reduce the potential operational benefits of some advanced features for NextGen. The algorithm described in this paper improves climb trajectory prediction accuracy by adjusting trajectory predictions based on observed track data. It utilizes rate-of-climb and airspeed measurements derived from position data to dynamically adjust the aircraft weight modeled for trajectory predictions. In simulations with weight uncertainty, the algorithm is able to adapt to within 3 percent of the actual gross weight within two minutes of the initial adaptation. The root-mean-square of altitude errors for five-minute predictions was reduced by 73 percent. Conflict detection performance also improved, with a 15 percent reduction in missed alerts and a 10 percent reduction in false alerts. In a simulation with climb speed capture intent and weight uncertainty, the algorithm improved climb trajectory prediction accuracy by up to 30 percent and conflict detection performance, reducing missed and false alerts by up to 10 percent.
Novel Signal Noise Reduction Method through Cluster Analysis, Applied to Photoplethysmography.
Waugh, William; Allen, John; Wightman, James; Sims, Andrew J; Beale, Thomas A W
2018-01-01
Physiological signals can often become contaminated by noise from a variety of origins. In this paper, an algorithm is described for the reduction of sporadic noise from a continuous periodic signal. The design can be used where a sample of a periodic signal is required, for example, when an average pulse is needed for pulse wave analysis and characterization. The algorithm is based on cluster analysis for selecting similar repetitions or pulses from a periodic signal. This method selects individual pulses without noise, returns a clean pulse signal, and terminates when a sufficiently clean and representative signal is received. The algorithm is designed to be sufficiently compact to be implemented on a microcontroller embedded within a medical device. It has been validated through the removal of noise from an exemplar photoplethysmography (PPG) signal, showing increasing benefit as the noise contamination of the signal increases. The algorithm design is generalised to be applicable for a wide range of physiological signals.
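The selection step, grouping mutually similar pulses and averaging the dominant group, can be sketched with a greedy clustering pass. The paper's actual cluster-analysis method and tolerance are not specified here; the pulses and threshold below are illustrative:

```python
from math import sqrt

def rms_distance(p, q):
    """Root-mean-square distance between two equal-length pulses."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(p, q)) / len(p))

def clean_pulse(pulses, tol):
    """Greedy sketch of the clustering idea: a pulse joins the first
    cluster whose seed it resembles (RMS distance below tol); the
    largest cluster is averaged into a representative clean pulse."""
    clusters = []
    for p in pulses:
        for c in clusters:
            if rms_distance(p, c[0]) < tol:
                c.append(p)
                break
        else:
            clusters.append([p])
    best = max(clusters, key=len)
    return [sum(col) / len(best) for col in zip(*best)]

clean = [0.0, 1.0, 2.0, 1.0, 0.0]            # idealised pulse shape
pulses = [clean,
          [0.1, 1.0, 2.1, 1.0, 0.1],          # clean with mild noise
          [0.0, 0.9, 2.0, 1.1, 0.0],
          [3.0, 3.0, 3.0, 3.0, 3.0]]          # motion-artefact outlier
print(clean_pulse(pulses, tol=0.5))           # averages the three similar pulses
```

Because the outlier forms its own singleton cluster, it never contaminates the averaged pulse, which is the noise-rejection property the paper exploits.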
Objective performance assessment of five computed tomography iterative reconstruction algorithms.
Omotayo, Azeez; Elbakri, Idris
2016-11-22
Iterative algorithms are gaining clinical acceptance in CT. We performed objective phantom-based image quality evaluation of five commercial iterative reconstruction algorithms available on four different multi-detector CT (MDCT) scanners at different dose levels as well as the conventional filtered back-projection (FBP) reconstruction. Using the Catphan500 phantom, we evaluated image noise, contrast-to-noise ratio (CNR), modulation transfer function (MTF) and noise-power spectrum (NPS). The algorithms were evaluated over a CTDIvol range of 0.75-18.7 mGy on four major MDCT scanners: GE DiscoveryCT750HD (algorithms: ASIR™ and VEO™); Siemens Somatom Definition AS+ (algorithm: SAFIRE™); Toshiba Aquilion64 (algorithm: AIDR3D™); and Philips Ingenuity iCT256 (algorithm: iDose4™). Images were reconstructed using FBP and the respective iterative algorithms on the four scanners. Use of iterative algorithms decreased image noise and increased CNR, relative to FBP. In the dose range of 1.3-1.5 mGy, noise reduction using iterative algorithms was in the range of 11%-51% on GE DiscoveryCT750HD, 10%-52% on Siemens Somatom Definition AS+, 49%-62% on Toshiba Aquilion64, and 13%-44% on Philips Ingenuity iCT256. The corresponding CNR increase was in the range 11%-105% on GE, 11%-106% on Siemens, 85%-145% on Toshiba and 13%-77% on Philips respectively. Most algorithms did not affect the MTF, except for VEO™ which produced an increase in the limiting resolution of up to 30%. A shift in the peak of the NPS curve towards lower frequencies and a decrease in NPS amplitude were obtained with all iterative algorithms. VEO™ required long reconstruction times, while all other algorithms produced reconstructions in real time. Compared to FBP, iterative algorithms reduced image noise and increased CNR. The iterative algorithms available on different scanners achieved different levels of noise reduction and CNR increase while spatial resolution improvements were obtained only with VEO™. 
This study is useful in that it provides performance assessment of the iterative algorithms available from several mainstream CT manufacturers.
1988-03-31
…radar operation and data-collection activities, a large data-analysis effort has been under way in support of automatic wind-shear detection algorithm… REDUCTION AND ALGORITHM DEVELOPMENT: A. General-Purpose Software; B. Concurrent Computer Systems; C. Sun Workstations; D. Radar Data Analysis (1. Algorithm Verification; 2. Other Studies; 3. Translations; 4. Outside Distributions); E. Mesonet/LLWAS Data Analysis (1. 1985 Data; 2. …)
PCA based clustering for brain tumor segmentation of T1w MRI images.
Kaya, Irem Ersöz; Pehlivanlı, Ayça Çakmak; Sekizkardeş, Emine Gezmez; Ibrikci, Turgay
2017-03-01
Medical images are huge collections of information that are difficult to store and process, consuming extensive computing time. Therefore, reduction techniques are commonly used as a data pre-processing step to make the image data less complex, so that high-dimensional data can be identified by an appropriate low-dimensional representation. PCA is one of the most popular multivariate methods for data reduction. This paper is focused on clustering of T1-weighted MRI images for brain tumor segmentation, with dimension reduction by different common Principal Component Analysis (PCA) algorithms. Our primary aim is to present a comparison between different variations of PCA algorithms on MRIs for two cluster methods. The five most common PCA algorithms, namely the conventional PCA, Probabilistic Principal Component Analysis (PPCA), Expectation Maximization Based Principal Component Analysis (EM-PCA), the Generalized Hebbian Algorithm (GHA), and Adaptive Principal Component Extraction (APEX), were applied to reduce dimensionality in advance of two clustering algorithms, K-Means and Fuzzy C-Means. In the study, T1-weighted MRI images of the human brain with brain tumor were used for clustering. In addition to the original size of 512 lines and 512 pixels per line, three more sizes, 256 × 256, 128 × 128 and 64 × 64, were included in the study to examine their effect on the methods. The obtained results were compared in terms of both the reconstruction errors and the Euclidean distance errors among the clustered images containing the same number of principal components. According to the findings, the PPCA obtained the best results among all others. Furthermore, the EM-PCA and the PPCA assisted the K-Means algorithm to accomplish the best clustering performance in the majority of cases, as well as achieving significant results with both clustering algorithms for all sizes of T1w MRI images. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
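The conventional-PCA step that precedes clustering can be sketched in pure Python: center the data, form the covariance matrix, and extract the first principal component by power iteration (synthetic 2-D points here; the paper operates on vectorized 512 × 512 images):

```python
import math
import random

def top_principal_component(data, iters=500, seed=0):
    """First principal component via power iteration on the sample
    covariance matrix (sketch of the conventional-PCA reduction step)."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1)
            for j in range(d)] for i in range(d)]
    rng = random.Random(seed)
    v = [rng.random() + 0.1 for _ in range(d)]   # random start vector
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]                # converges to top eigenvector
    return v

# Points spread along the direction (1, 1): the first PC should be
# close to (1/sqrt(2), 1/sqrt(2)) up to sign.
data = [[-2.0, -2.1], [-1.0, -0.9], [0.0, 0.1], [1.0, 0.9], [2.0, 2.1]]
print(top_principal_component(data))
```

Projecting each sample onto the leading components is what shrinks a 512 × 512 image into the low-dimensional vectors the K-Means and Fuzzy C-Means steps then cluster.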
NASA Astrophysics Data System (ADS)
Zorila, Alexandru; Stratan, Aurel; Nemes, George
2018-01-01
We compare the ISO-recommended (the standard) data-reduction algorithm used to determine the surface laser-induced damage threshold of optical materials by the S-on-1 test with two newly suggested algorithms, both named "cumulative" algorithms/methods, a regular one and a limit-case one, intended to perform in some respects better than the standard one. To avoid additional errors due to real experiments, a simulated test is performed, named the reverse approach. This approach simulates the real damage experiments, by generating artificial test-data of damaged and non-damaged sites, based on an assumed, known damage threshold fluence of the target and on a given probability distribution function to induce the damage. In this work, a database of 12 sets of test-data containing both damaged and non-damaged sites was generated by using four different reverse techniques and by assuming three specific damage probability distribution functions. The same value for the threshold fluence was assumed, and a Gaussian fluence distribution on each irradiated site was considered, as usual for the S-on-1 test. Each of the test-data was independently processed by the standard and by the two cumulative data-reduction algorithms, the resulting fitted probability distributions were compared with the initially assumed probability distribution functions, and the quantities used to compare these algorithms were determined. These quantities characterize the accuracy and the precision in determining the damage threshold and the goodness of fit of the damage probability curves. The results indicate that the accuracy in determining the absolute damage threshold is best for the ISO-recommended method, the precision is best for the limit-case of the cumulative method, and the goodness of fit estimator (adjusted R-squared) is almost the same for all three algorithms.
Performance-scalable volumetric data classification for online industrial inspection
NASA Astrophysics Data System (ADS)
Abraham, Aby J.; Sadki, Mustapha; Lea, R. M.
2002-03-01
Non-intrusive inspection and non-destructive testing of manufactured objects with complex internal structures typically requires the enhancement, analysis and visualization of high-resolution volumetric data. Given the increasing availability of fast 3D scanning technology (e.g. cone-beam CT), enabling on-line detection and accurate discrimination of components or sub-structures, the inherent complexity of classification algorithms inevitably leads to throughput bottlenecks. Indeed, whereas typical inspection throughput requirements range from 1 to 1000 volumes per hour, depending on density and resolution, current computational capability is one to two orders-of-magnitude less. Accordingly, speeding up classification algorithms requires both reduction of algorithm complexity and acceleration of computer performance. A shape-based classification algorithm, offering algorithm complexity reduction, by using ellipses as generic descriptors of solids-of-revolution, and supporting performance-scalability, by exploiting the inherent parallelism of volumetric data, is presented. A two-stage variant of the classical Hough transform is used for ellipse detection and correlation of the detected ellipses facilitates position-, scale- and orientation-invariant component classification. Performance-scalability is achieved cost-effectively by accelerating a PC host with one or more COTS (Commercial-Off-The-Shelf) PCI multiprocessor cards. Experimental results are reported to demonstrate the feasibility and cost-effectiveness of the data-parallel classification algorithm for on-line industrial inspection applications.
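The voting idea behind the two-stage Hough transform can be shown in miniature with circles, the zero-eccentricity special case of ellipses (a toy parameter grid and synthetic edge points, not the paper's volumetric pipeline):

```python
import math
from collections import Counter

def hough_circles(points, centers, radii):
    """Vote in (cx, cy, r) parameter space: each edge point votes for
    every candidate centre with the radius that centre would imply."""
    acc = Counter()
    for (x, y) in points:
        for cx in centers:
            for cy in centers:
                r = round(math.hypot(x - cx, y - cy))
                if r in radii:
                    acc[(cx, cy, r)] += 1
    return acc

# Eight edge points on the circle with centre (5, 5) and radius 3
pts = [(5 + 3 * math.cos(k * math.pi / 4),
        5 + 3 * math.sin(k * math.pi / 4)) for k in range(8)]
acc = hough_circles(pts, centers=range(11), radii=range(1, 7))
print(acc.most_common(1)[0])   # the true circle (5, 5, 3) collects all 8 votes
```

A full ellipse Hough adds orientation and two axis lengths to the parameter space, which is exactly why the paper splits detection into two stages and parallelizes the voting.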
Mirzazadeh, Ali; Malekinejad, Mohsen; Kahn, James G
2015-03-01
Heterogeneity of effect measures in intervention studies undermines the use of evidence to inform policy. Our objective was to develop a comprehensive algorithm to convert all types of effect measures to one standard metric, relative risk reduction (RRR). This work was conducted to facilitate synthesis of published intervention effects for our epidemic modeling of the health impact of human immunodeficiency virus (HIV) testing and counseling (HTC). We designed and implemented an algorithm to transform varied effect measures to RRR, representing the proportionate reduction in undesirable outcomes. Our extraction of 55 HTC studies identified 473 effect measures representing unique combinations of intervention-outcome-population characteristics, using five outcome metrics: pre-post proportion (70.6%), odds ratio (14.0%), mean difference (10.2%), risk ratio (4.4%), and RRR (0.9%). Outcomes were expressed as both desirable (29.5%, eg, consistent condom use) and undesirable (70.5%, eg, inconsistent condom use). Using four examples, we demonstrate our algorithm for converting varied effect measures to RRR and provide the conceptual basis for the advantages of RRR over other metrics. Our review of the literature suggests that RRR, an easily understood and useful metric to convey the risk reduction associated with an intervention, is underused by original and review studies. Copyright © 2015 Elsevier Inc. All rights reserved.
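One conversion such an algorithm needs is odds ratio to RRR. A sketch using the Zhang-Yu approximation to turn an odds ratio into a risk ratio given the control-group outcome prevalence (an illustrative formula choice, not necessarily the authors' exact method):

```python
def rr_from_or(odds_ratio, p0):
    """Zhang-Yu approximation: risk ratio from an odds ratio, given the
    prevalence p0 of the outcome in the comparison group."""
    return odds_ratio / (1.0 - p0 + p0 * odds_ratio)

def rrr(effect, p0=None, measure="rr"):
    """Relative risk reduction from a risk ratio or an odds ratio
    for an undesirable outcome."""
    if measure == "or":
        effect = rr_from_or(effect, p0)
    return 1.0 - effect

# An intervention with OR = 0.5 when 20% of controls have the outcome:
print(round(rrr(0.5, p0=0.2, measure="or"), 3))   # prints 0.444
```

The full algorithm must also handle pre-post proportions and mean differences, and flip the sign for outcomes expressed as desirable rather than undesirable.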
NASA Astrophysics Data System (ADS)
Kim, Juhye; Nam, Haewon; Lee, Rena
2015-07-01
In CT (computed tomography) images, metal materials such as tooth supplements or surgical clips can cause metal artifacts and degrade image quality. In severe cases, this may lead to misdiagnosis. In this research, we developed a new MAR (metal artifact reduction) algorithm by using an edge-preserving filter and the MATLAB program (Mathworks, version R2012a). The proposed algorithm consists of 6 steps: image reconstruction from projection data, metal segmentation, forward projection, interpolation, application of an edge-preserving smoothing filter, and new image reconstruction. For an evaluation of the proposed algorithm, we obtained both numerical simulation data and data for a Rando phantom. In the numerical simulation data, four metal regions were added into the Shepp-Logan phantom to create metal artifacts. The projection data of the metal-inserted Rando phantom were obtained by using a prototype CBCT scanner manufactured by the medical engineering and medical physics (MEMP) laboratory research group in medical science at Ewha Womans University. After these data had been obtained, the proposed algorithm was applied, and the results were compared with the original image (with metal artifacts, without correction) and with a corrected image based on linear interpolation. Both visual and quantitative evaluations were done. Compared with the original image with metal artifacts and with the image corrected by using linear interpolation, both the numerical and the experimental phantom data demonstrated that the proposed algorithm reduced the metal artifacts. In conclusion, the evaluation in this research showed that the proposed algorithm outperformed the interpolation-based MAR algorithm. If an optimization and a stability evaluation of the proposed algorithm can be performed, the developed algorithm is expected to be an effective tool for eliminating metal artifacts even in commercial CT systems.
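The interpolation step that both the proposed method and the baseline build on can be sketched as follows: given a sinogram and a mask of metal-affected detector bins per view, each affected bin is replaced by linear interpolation from its unaffected neighbours along the detector axis. This is only the interpolation baseline, not the authors' full six-step pipeline (the edge-preserving filter is omitted), and the function name is illustrative.

```python
import numpy as np

def interpolate_metal_trace(sinogram: np.ndarray, metal_mask: np.ndarray) -> np.ndarray:
    """For each projection view (row), replace detector bins flagged as
    metal-affected by linear interpolation from unaffected neighbours."""
    out = sinogram.copy()
    bins = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):
        bad = metal_mask[i]
        if bad.any() and not bad.all():
            out[i, bad] = np.interp(bins[bad], bins[~bad], sinogram[i, ~bad])
    return out
```

The corrected sinogram is then fed back into the usual filtered-back-projection reconstruction.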
Print quality analysis for ink-saving algorithms
NASA Astrophysics Data System (ADS)
Ortiz Segovia, Maria V.; Bonnier, Nicolas; Allebach, Jan P.
2012-01-01
Ink-saving strategies for CMYK printers have evolved from their earlier stages where the 'draft' print mode was the main option available to control ink usage. The savings were achieved by printing alternate dots in an image at the expense of reducing print quality considerably. Nowadays, customers are not only unwilling to compromise quality but have higher expectations regarding both visual print quality and ink reduction solutions. Therefore, the need for more intricate ink-saving solutions with lower impact on print quality is evident. Printing-related factors such as the way the printer places the dots on the paper and the ink-substrate interaction play important and complex roles in the characterization and modeling of the printing process that make the ink reduction topic a challenging problem. In our study, we are interested in benchmarking ink-saving algorithms to find the connections between different ink reduction levels of a given ink-saving method and a set of print quality attributes. This study is mostly related to CMYK printers that use dispersed dot halftoning algorithms. The results of our efforts to develop such an evaluation scheme are presented in this paper.
An analysis of spectral envelope-reduction via quadratic assignment problems
NASA Technical Reports Server (NTRS)
George, Alan; Pothen, Alex
1994-01-01
A new spectral algorithm for reordering a sparse symmetric matrix to reduce its envelope size is described. The ordering is computed by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. In this paper, we provide an analysis of the spectral envelope reduction algorithm. We describe related 1- and 2-sum problems; the former is related to the envelope size, while the latter is related to an upper bound on the work involved in an envelope Cholesky factorization scheme. We formulate these two problems as quadratic assignment problems, and then study the 2-sum problem in more detail. We obtain lower bounds on the 2-sum by considering a projected quadratic assignment problem, and then show that finding a permutation matrix closest to an orthogonal matrix attaining one of the lower bounds justifies the spectral envelope reduction algorithm. The lower bound on the 2-sum is seen to be tight for reasonably 'uniform' finite element meshes. We also obtain asymptotically tight lower bounds on the envelope size for certain classes of meshes.
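The spectral ordering analyzed above can be sketched in a few lines: form the graph Laplacian of the matrix's nonzero structure, take the eigenvector of the second-smallest eigenvalue (the Fiedler vector), and sort its components. This dense-matrix sketch is for illustration only; a practical implementation would use sparse eigensolvers.

```python
import numpy as np

def spectral_ordering(A: np.ndarray) -> np.ndarray:
    """Reorder a symmetric matrix by sorting the Fiedler vector of the
    Laplacian of its adjacency structure (illustrative dense version)."""
    adj = (A != 0).astype(float)
    np.fill_diagonal(adj, 0.0)
    lap = np.diag(adj.sum(axis=1)) - adj
    _, vecs = np.linalg.eigh(lap)
    fiedler = vecs[:, 1]  # eigenvector of the second-smallest eigenvalue
    return np.argsort(fiedler)

def envelope_size(A: np.ndarray) -> int:
    """Sum over rows of the distance from the first nonzero to the diagonal."""
    total = 0
    for i in range(A.shape[0]):
        nz = np.nonzero(A[i, : i + 1])[0]
        if nz.size:
            total += i - nz[0]
    return total
```

For a path-graph (tridiagonal) matrix scrambled by a random permutation, sorting the Fiedler vector recovers the natural ordering, and with it the minimal envelope.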
Development and evaluation of thermal model reduction algorithms for spacecraft
NASA Astrophysics Data System (ADS)
Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus
2015-05-01
This paper is concerned with the reduction of thermal models of spacecraft. The work presented here has been conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München, with the goal of shortening and automating the time-consuming and manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model, for calculation of external heat flows and radiative couplings, and the reduction of the underlying mathematical model. For simplification, a method has been developed which approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability to reduction of the mathematical model. Thereby, compatibility with the thermal analysis tool ESATAN-TMS is of major concern, which restricts the useful application of these methods. Additional model reduction methods have been developed which accommodate these constraints. The Matrix Reduction method allows the approximation of the differential equation to reference values exactly, except for numerical errors. The summation method enables a useful, applicable reduction of thermal models that can be used in industry. In this work, a framework for model reduction of thermal models has been created, which can be used together with a newly developed graphical user interface for the reduction of thermal models in industry.
Mobilizing for a war on the home front against sugar-related morbidity and mortality.
Schillinger, Dean; Kahn, James G
2017-01-01
In Israel today, there are 420,200 Israelis diagnosed with diabetes, and every year, Israelis sustain thousands of diabetes-related deaths and tens of thousands of diabetes-related amputations. As such, in Israel, as in much of the world, there is a silent and deadly public health war against obesity and diabetes taking place on the home front -- one in which clinicians, patients, and families fight thousands of life- and limb-threatening battles daily, involving preventable heart disease, diabetes, strokes and amputations. Yet the global clinical and scientific communities, indeed society at large, have barely begun to mobilize. Fighting this war requires confronting and altering "obesogenic" and "diabetogenic" economic and social factors, including food and beverage marketing and pricing that push diets engorged with processed sugars. Ginsberg, in a study recently published in IJHPR, contributes to our understanding of the combined sugar-related health burdens in Israel, producing an epidemiology and health economics study that estimates the health burdens of obesity, overweight, and dental caries in Israel today. He projects the reductions in that portion of disease burden and associated costs that would result if sugar consumption declined to 10 or 5% of daily caloric consumption as a result of multifaceted public health interventions. Projected over 70 years, these reductions in sugar consumption would prevent 16,590 and 34,580 deaths, respectively. These numbers of Israeli deaths averted are similar to, or exceed, the total resulting from armed conflict or terrorism over the past 70 years. While overconsumption of sugar is only one of many factors that drive cardio-metabolic disease, the study by Ginsberg suggests a path through which we can overcome the numerous internal and external obstacles that societies face in making a public policy commitment to fight the war on the home front: promoting health by reducing added sugar exposure.
Sinogram-based adaptive iterative reconstruction for sparse view x-ray computed tomography
NASA Astrophysics Data System (ADS)
Trinca, D.; Zhong, Y.; Wang, Y.-Z.; Mamyrbayev, T.; Libin, E.
2016-10-01
With the availability of more powerful computing processors, iterative reconstruction algorithms have recently been successfully implemented as an approach to achieving significant dose reduction in X-ray CT. In this paper, we propose an adaptive iterative reconstruction algorithm for X-ray CT, that is shown to provide results comparable to those obtained by proprietary algorithms, both in terms of reconstruction accuracy and execution time. The proposed algorithm is thus provided for free to the scientific community, for regular use, and for possible further optimization.
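The abstract does not give the update rule of its adaptive scheme, so as a generic illustration of the iterative-reconstruction family it belongs to, here is a minimal SIRT (simultaneous iterative reconstruction technique) sketch for a discretized linear system Ax ≈ b with a nonnegative system matrix. Parameters and the nonnegativity constraint are illustrative, not the authors' algorithm.

```python
import numpy as np

def sirt(A: np.ndarray, b: np.ndarray, n_iter: int = 300) -> np.ndarray:
    """SIRT iteration x <- x + C A^T R (b - A x), where R and C hold the
    inverse row and column sums of the (nonnegative) system matrix A."""
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)  # inverse row sums
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)  # inverse column sums
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + C * (A.T @ (R * (b - A @ x)))
        x = np.clip(x, 0.0, None)  # attenuation values are nonnegative
    return x
```

On a consistent, full-rank toy system the iteration converges to the true image, which is the behavior a dose-reduction scheme exploits with fewer (sparser) views.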
Genetic Algorithm-Based Model Order Reduction of Aeroservoelastic Systems with Consistent States
NASA Technical Reports Server (NTRS)
Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter M.; Brenner, Martin J.
2017-01-01
This paper presents a model order reduction framework to construct linear parameter-varying reduced-order models of flexible aircraft for aeroservoelasticity analysis and control synthesis in broad two-dimensional flight parameter space. Genetic algorithms are used to automatically determine physical states for reduction and to generate reduced-order models at grid points within parameter space while minimizing the trial-and-error process. In addition, balanced truncation for unstable systems is used in conjunction with the congruence transformation technique to achieve locally optimal realization and weak fulfillment of state consistency across the entire parameter space. Therefore, aeroservoelasticity reduced-order models at any flight condition can be obtained simply through model interpolation. The methodology is applied to the pitch-plant model of the X-56A Multi-Use Technology Testbed currently being tested at NASA Armstrong Flight Research Center for flutter suppression and gust load alleviation. The present studies indicate that the reduced-order model with more than 12× reduction in the number of states relative to the original model is able to accurately predict system response among all input-output channels. The genetic-algorithm-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The interpolated aeroservoelasticity reduced order models exhibit smooth pole transition and continuously varying gains along a set of prescribed flight conditions, which verifies consistent state representation obtained by congruence transformation. The present model order reduction framework can be used by control engineers for robust aeroservoelasticity controller synthesis and novel vehicle design.
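The balanced-truncation ingredient mentioned above can be sketched for the simple stable case: solve the two Lyapunov equations for the Gramians, balance via a square-root transformation, and keep the states with the largest Hankel singular values. The paper's extension to unstable systems, congruence transformations, and genetic-algorithm state selection is not reproduced here; this is a textbook sketch with illustrative names.

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a *stable* LTI system (A, B, C)
    to order r. Returns the reduced (Ar, Br, Cr) and Hankel singular values."""
    # Gramians: A Wc + Wc A^T = -B B^T and A^T Wo + Wo A = -C^T C
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)          # s holds the Hankel singular values
    S_inv_sqrt = np.diag(s ** -0.5)
    T = Lc @ Vt.T @ S_inv_sqrt          # balancing transformation
    Ti = S_inv_sqrt @ U.T @ Lo.T        # its inverse (Ti @ T == I)
    Ar = (Ti @ A @ T)[:r, :r]
    Br = (Ti @ B)[:r, :]
    Cr = (C @ T)[:, :r]
    return Ar, Br, Cr, s
```

When the truncated Hankel singular values are small, input-output quantities such as the DC gain are preserved to within twice their sum.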
Real-time image annotation by manifold-based biased Fisher discriminant analysis
NASA Astrophysics Data System (ADS)
Ji, Rongrong; Yao, Hongxun; Wang, Jicheng; Sun, Xiaoshuai; Liu, Xianming
2008-01-01
Automatic linguistic annotation is a promising solution to bridge the semantic gap in content-based image retrieval. However, two crucial issues are not well addressed in state-of-the-art annotation algorithms: 1. the Small Sample Size (3S) problem in keyword classifier/model learning; 2. most annotation algorithms cannot extend to real-time online usage due to their low computational efficiency. This paper presents a novel Manifold-based Biased Fisher Discriminant Analysis (MBFDA) algorithm to address these two issues by transductive semantic learning and keyword filtering. To address the 3S problem, Co-Training based Manifold learning is adopted for keyword model construction. To achieve real-time annotation, a Biased Fisher Discriminant Analysis (BFDA) based semantic feature reduction algorithm is presented for keyword confidence discrimination and semantic feature reduction. Different from all existing annotation methods, MBFDA views image annotation from a novel Eigen semantic feature (which corresponds to keywords) selection aspect. As demonstrated in experiments, our manifold-based biased Fisher discriminant analysis annotation algorithm outperforms classical and state-of-the-art annotation methods (1. K-NN Expansion; 2. One-to-All SVM; 3. PWC-SVM) in both computational time and annotation accuracy by a large margin.
Comparing Binaural Pre-processing Strategies III
Warzybok, Anna; Ernst, Stephan M. A.
2015-01-01
A comprehensive evaluation of eight signal pre-processing strategies, including directional microphones, coherence filters, single-channel noise reduction, binaural beamformers, and their combinations, was undertaken with normal-hearing (NH) and hearing-impaired (HI) listeners. Speech reception thresholds (SRTs) were measured in three noise scenarios (multitalker babble, cafeteria noise, and single competing talker). Predictions of three common instrumental measures were compared with the general perceptual benefit caused by the algorithms. The individual SRTs measured without pre-processing and individual benefits were objectively estimated using the binaural speech intelligibility model. Ten listeners with NH and 12 HI listeners participated. The participants varied in age and pure-tone threshold levels. Although HI listeners required a better signal-to-noise ratio to obtain 50% intelligibility than listeners with NH, no differences in SRT benefit from the different algorithms were found between the two groups. With the exception of single-channel noise reduction, all algorithms showed an improvement in SRT of between 2.1 dB (in cafeteria noise) and 4.8 dB (in single competing talker condition). Model predictions with binaural speech intelligibility model explained 83% of the measured variance of the individual SRTs in the no pre-processing condition. Regarding the benefit from the algorithms, the instrumental measures were not able to predict the perceptual data in all tested noise conditions. The comparable benefit observed for both groups suggests a possible application of noise reduction schemes for listeners with different hearing status. Although the model can predict the individual SRTs without pre-processing, further development is necessary to predict the benefits obtained from the algorithms at an individual level. PMID:26721922
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Je; Yoon, Hyun; Im, Piljae
2017-12-27
This paper developed an algorithm that controls the supply air temperature in the variable refrigerant flow (VRF), outdoor air processing unit (OAP) system, according to indoor and outdoor temperature and humidity, and verified the effects after applying the algorithm to real buildings. The VRF-OAP system refers to a heating, ventilation, and air conditioning (HVAC) system that complements a ventilation function, which is not provided in the VRF system. It is a system that supplies air indoors by heat treatment of outdoor air through the OAP, as a number of indoor units and OAPs are connected to the outdoor unit in the VRF system simultaneously. This paper conducted experiments with regard to changes in efficiency and the cooling capabilities of each unit and system according to supply air temperature in the OAP using a multicalorimeter. Based on these results, an algorithm that controlled the temperature of the supply air in the OAP was developed considering indoor and outdoor temperatures and humidity. The algorithm was applied in the test building to verify the effects of energy reduction and the effects on indoor temperature and humidity. Loads were then changed by adjusting the number of conditioned rooms to verify the effect of the algorithm under various load conditions. In the field test results, the energy reduction effect was approximately 15–17% at a 100% load, and 4–20% at a 75% load. However, no significant effects were shown at a 50% load. The indoor temperature and humidity reached a comfortable level.
Homology search with binary and trinary scoring matrices.
Smith, Scott F
2006-01-01
Protein homology search can be accelerated with the use of bit-parallel algorithms in conjunction with constraints on the values contained in the scoring matrices. Trinary scoring matrices (containing only the values -1, 0, and 1) allow for significant acceleration without significant reduction in the receiver operating characteristic (ROC) score of a Smith-Waterman search. Binary scoring matrices (containing the values 0 and 1) result in some reduction in ROC score, but result in even more acceleration. Binary scoring matrices and five-bit saturating scores can be used for fast prefilters to the Smith-Waterman algorithm.
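The scoring-matrix constraint described above can be illustrated with a plain Smith-Waterman core using a trinary substitution scheme (+1 match, 0 for a "neutral" pair, -1 otherwise). The neutral pairs below are an illustrative stand-in for a clipped BLOSUM-style matrix, and the real speedup comes from bit-parallel SIMD evaluation, which this readable sketch does not attempt.

```python
def smith_waterman_trinary(a: str, b: str, gap: int = 1) -> int:
    """Local alignment score with a trinary (-1/0/+1) substitution scheme."""
    # Illustrative 'neutral' residue pairs scoring 0 instead of -1.
    neutral = {frozenset(p) for p in (("I", "L"), ("K", "R"), ("D", "E"))}

    def sub(x: str, y: str) -> int:
        if x == y:
            return 1
        return 0 if frozenset((x, y)) in neutral else -1

    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            H[i][j] = max(0,
                          H[i - 1][j - 1] + sub(a[i - 1], b[j - 1]),
                          H[i - 1][j] - gap,
                          H[i][j - 1] - gap)
            best = max(best, H[i][j])
    return best
```

Because every cell value moves by at most ±1 relative to its neighbors, scores fit in a few bits, which is what makes the five-bit saturating bit-parallel variants possible.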
A multi-level solution algorithm for steady-state Markov chains
NASA Technical Reports Server (NTRS)
Horton, Graham; Leutenegger, Scott T.
1993-01-01
A new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state Markov chains is presented. The method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. It is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. Initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the Gauss-Seidel and optimal SOR algorithms for a variety of test problems. The multi-level method is compared and contrasted with the iterative aggregation-disaggregation algorithm of Takahashi.
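For context, the computation that the multi-level scheme accelerates is the steady-state solve itself; a baseline fixed-point (power) iteration for a discrete-time chain looks like the sketch below. The recursive coarsening and aggregation-disaggregation machinery of the paper is not reproduced here.

```python
import numpy as np

def stationary_distribution(P: np.ndarray, tol: float = 1e-12,
                            max_iter: int = 100000) -> np.ndarray:
    """Baseline power iteration for the steady-state vector pi = pi @ P of a
    discrete-time Markov chain (rows of P sum to 1)."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        new = pi @ P
        if np.abs(new - pi).max() < tol:
            break
        pi = new
    return pi / pi.sum()  # guard against drift in the normalization
```

The convergence rate of this baseline is governed by the subdominant eigenvalue of P, which is exactly where multigrid-style coarsening can deliver the order-of-magnitude savings reported above.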
Parallel-vector unsymmetric Eigen-Solver on high performance computers
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.; Jiangning, Qin
1993-01-01
The popular QR algorithm for solving for all eigenvalues of an unsymmetric matrix is reviewed. Among the basic components of the QR algorithm, it was concluded from this study that the reduction of an unsymmetric matrix to Hessenberg form (before applying the QR algorithm itself) can be done effectively by exploiting the vector speed and multiple processors offered by modern high-performance computers. Numerical examples of several test cases have indicated that the proposed parallel-vector algorithm for converting a given unsymmetric matrix to Hessenberg form offers computational advantages over the existing algorithm. The time saving obtained by the proposed method increases as the problem size increases.
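The Hessenberg reduction step discussed above can be sketched with Householder similarity transformations; each column below the first subdiagonal is annihilated in turn while the eigenvalues are preserved. This serial NumPy sketch shows the arithmetic being parallelized, not the paper's parallel-vector implementation.

```python
import numpy as np

def hessenberg(A: np.ndarray) -> np.ndarray:
    """Reduce a square matrix to upper Hessenberg form by Householder
    similarity transformations (eigenvalues are preserved)."""
    H = A.astype(float).copy()
    n = H.shape[0]
    for k in range(n - 2):
        x = H[k + 1:, k].copy()
        if np.allclose(x[1:], 0.0):
            continue  # column already in Hessenberg form
        v = x.copy()
        sign = np.sign(x[0]) if x[0] != 0 else 1.0
        v[0] += sign * np.linalg.norm(x)
        v /= np.linalg.norm(v)
        # Apply (I - 2 v v^T) from the left and the right on the trailing block
        H[k + 1:, k:] -= 2.0 * np.outer(v, v @ H[k + 1:, k:])
        H[:, k + 1:] -= 2.0 * np.outer(H[:, k + 1:] @ v, v)
    return H
```

The two rank-one updates per step are the matrix-vector kernels that map well onto vector registers and multiple processors.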
Changing Minds in the Army: Why It Is So Difficult and What to Do About It
2013-10-01
concepts—personality, cognitive dissonance reduction, the hardwiring of the brain, the imprints of early career events, and senior leader intuition...way to raise children, the manners expected when talking to a superior in the workplace, or even the role of airpower in war. Changing one's mind...students selected for brigade command score even lower than the overall USAWC average.14 This raises an interesting paradox: The leaders
2014-03-01
the war. Even the Procuraduría de los Derechos Humanos (PDH, or Human Rights Ombudsman), which was established in 1985, was infested with racist...Procuraduría de los Derechos Humanos (PDH)145 The PDH’s purpose was to document human rights abuses in order to promote their reduction and eventual...conditions. In addition to the Madres, the Asamblea Permanente por los Derechos 61 Fagen
2008-01-01
with many private sector companies to manufacture, field, and develop the products it acquires. As mentioned, the percentages of work outsourced...been involved from the conceptual development all the way to operational testing and fielding of every major weapons system our Marines and Sailors...the ability to collaborate with contractors and assess the defense value of private sector technological developments. The inherently governmental
Fraction Reduction through Continued Fractions
ERIC Educational Resources Information Center
Carley, Holly
2011-01-01
This article presents a method of reducing fractions without factoring. The ideas presented may be useful as a project for motivated students in an undergraduate number theory course. The discussion is related to the Euclidean Algorithm and its variations may lead to projects or early examples involving efficiency of an algorithm.
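The idea described above can be sketched directly: expand num/den as a continued fraction (the quotients of the Euclidean algorithm), then rebuild the fraction from the quotients, which yields the lowest-terms numerator and denominator without ever factoring. This is a minimal sketch of the technique, not the article's full development.

```python
def reduce_fraction(num: int, den: int) -> tuple[int, int]:
    """Reduce num/den to lowest terms via its continued-fraction expansion."""
    # Collect continued-fraction quotients (repeated division = Euclid).
    a, b, quotients = num, den, []
    while b:
        quotients.append(a // b)
        a, b = b, a % b
    # Rebuild from the last quotient backwards; p/q emerges in lowest terms.
    p, q = quotients[-1], 1
    for quot in reversed(quotients[:-1]):
        p, q = quot * p + q, p
    return p, q
```

For example, 1071/462 has quotients [2, 3, 7], and rebuilding gives 51/22, the same answer as dividing out gcd(1071, 462) = 21.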
Sayer, Nina A; Noorbaloochi, Siamak; Frazier, Patricia A; Pennebaker, James W; Orazem, Robert J; Schnurr, Paula P; Murdoch, Maureen; Carlson, Kathleen F; Gravely, Amy; Litz, Brett T
2015-10-01
We examined the efficacy of a brief, accessible, nonstigmatizing online intervention-writing expressively about transitioning to civilian life. U.S. Afghanistan and Iraq war veterans with self-reported reintegration difficulty (N = 1,292, 39.3% female, M = 36.87, SD = 9.78 years) were randomly assigned to expressive writing (n = 508), factual control writing (n = 507), or no writing (n = 277). Using intention to treat, generalized linear mixed models demonstrated that 6-months postintervention, veterans who wrote expressively experienced greater reductions in physical complaints, anger, and distress compared with veterans who wrote factually (ds = 0.13 to 0.20; ps < .05) and greater reductions in PTSD symptoms, distress, anger, physical complaints, and reintegration difficulty compared with veterans who did not write at all (ds = 0.22 to 0.35; ps ≤ .001). Veterans who wrote expressively also experienced greater improvement in social support compared to those who did not write (d = 0.17). Relative to both control conditions, expressive writing did not lead to improved life satisfaction. Secondary analyses also found beneficial effects of expressive writing on clinically significant distress, PTSD screening, and employment status. Online expressive writing holds promise for improving health and functioning among veterans experiencing reintegration difficulty, albeit with small effect sizes. Published 2015. This article is a US Government work and is in the public domain in the USA.
A New Method of Facial Expression Recognition Based on SPE Plus SVM
NASA Astrophysics Data System (ADS)
Ying, Zilu; Huang, Mingwei; Wang, Zhen; Wang, Zhewei
A novel method of facial expression recognition (FER) is presented, which uses stochastic proximity embedding (SPE) for data dimension reduction and a support vector machine (SVM) for expression classification. The proposed algorithm is applied to the Japanese Female Facial Expression (JAFFE) database for FER, and better performance is obtained compared with traditional algorithms such as PCA and LDA. The results further prove the effectiveness of the proposed algorithm.
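The SPE step mentioned above has a simple core: repeatedly pick a random pair of points and nudge their low-dimensional images so the embedded distance moves toward the original distance, with a decaying learning rate. The sketch below is a minimal Agrafiotis-style SPE, with illustrative parameter choices; the paper's SVM classification stage is not shown.

```python
import numpy as np

def spe(X: np.ndarray, dim: int = 2, n_steps: int = 20000,
        lam: float = 1.0, seed: int = 0) -> np.ndarray:
    """Minimal stochastic proximity embedding sketch."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    Y = rng.standard_normal((n, dim))
    eps = 1e-9
    for step in range(n_steps):
        i, j = rng.choice(n, size=2, replace=False)
        r = np.linalg.norm(X[i] - X[j])        # input-space proximity
        d = np.linalg.norm(Y[i] - Y[j]) + eps  # current embedded distance
        rate = lam * (1.0 - step / n_steps)    # learning rate decays to zero
        delta = 0.5 * rate * (r - d) / d * (Y[i] - Y[j])
        Y[i] += delta
        Y[j] -= delta
    return Y
```

On data that is exactly embeddable in the target dimension, the pairwise distances of the embedding approach the originals (up to rotation and reflection).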
Comparative performance between compressed and uncompressed airborne imagery
NASA Astrophysics Data System (ADS)
Phan, Chung; Rupp, Ronald; Agarwal, Sanjeev; Trang, Anh; Nair, Sumesh
2008-04-01
The US Army's RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD), Countermine Division is evaluating the compressibility of airborne multi-spectral imagery for mine and minefield detection applications. Of particular interest is to assess the highest image data compression rate that can be afforded without loss of image quality for war fighters in the loop and without degrading the performance of the near-real-time mine detection algorithm. The JPEG-2000 compression standard is used to perform data compression. Both lossless and lossy compressions are considered. A multi-spectral anomaly detector such as RX (Reed & Xiaoli), which is widely used as a core algorithm baseline in airborne mine and minefield detection on different mine types, minefields, and terrains to identify potential individual targets, is used to compare the mine detection performance. This paper presents the compression scheme and compares detection performance results between compressed and uncompressed imagery for various levels of compression. The compression efficiency is evaluated, and its dependence upon different backgrounds and other factors is documented and presented using multi-spectral data.
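The RX detector used as the baseline above is the Mahalanobis distance of each pixel spectrum to the background statistics; a global (whole-image background) sketch is below. Operational detectors typically use local sliding-window statistics instead, which this illustration omits.

```python
import numpy as np

def rx_scores(cube: np.ndarray) -> np.ndarray:
    """Global RX (Reed-Xiaoli) anomaly detector: Mahalanobis distance of
    every pixel spectrum to the scene mean/covariance. `cube` has shape
    (rows, cols, bands); returns one anomaly score per pixel."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(float)
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)
    cov_inv = np.linalg.pinv(cov)  # pinv guards against near-singular cov
    centered = pixels - mu
    scores = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
    return scores.reshape(rows, cols)
```

Comparing RX score maps from compressed and uncompressed cubes is one way to quantify how much compression the detection stage can tolerate.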
An opposite view data replacement approach for reducing artifacts due to metallic dental objects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yazdi, Mehran; Lari, Meghdad Asadi; Bernier, Gaston
Purpose: To present a conceptually new method for metal artifact reduction (MAR) that can be used on patients with multiple objects within the scan plane that are also of small size along the longitudinal (scanning) direction, such as dental fillings. Methods: The proposed algorithm, named opposite view replacement, achieves MAR by first detecting the projection data affected by metal objects and then replacing the affected projections by the corresponding opposite view projections, which are not affected by metal objects. The authors also applied a fading process to avoid producing any discontinuities at the boundary of the affected projection areas in the sinogram. A skull phantom with and without a variety of dental metal inserts was made to extract the performance metric of the algorithm. A head and neck case, typical of IMRT planning, was also tested. Results: The reconstructed CT images based on this new replacement scheme show a significant improvement in image quality for patients with metallic dental objects compared to MAR algorithms based on the interpolation scheme. For the phantom, the authors showed that the artifact reduction algorithm can efficiently recover the CT numbers in the area next to the metallic objects. Conclusions: The authors presented a new and efficient method for artifact reduction due to multiple small metallic objects. The obtained results from phantoms and clinical cases fully validate the proposed approach.
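The core replacement idea can be illustrated for an idealized parallel-beam sinogram covering 360°, where the ray at view v and detector position s coincides with the ray at view v + 180° and mirrored detector position. The sketch below replaces a metal-affected bin with its conjugate when the conjugate is clean; it assumes a detector symmetric about its center, and omits the fading process and the cone-beam geometry handling described in the abstract.

```python
import numpy as np

def opposite_view_replace(sino: np.ndarray, bad: np.ndarray) -> np.ndarray:
    """Replace metal-affected bins (bad == True) of a 360-degree
    parallel-beam sinogram with the conjugate ray from the opposite view,
    where that ray is itself unaffected. Assumes an even number of views
    and a detector symmetric about its center."""
    n_views, _ = sino.shape
    half = n_views // 2
    out = sino.copy()
    for v in range(n_views):
        opp_v = (v + half) % n_views
        opp = sino[opp_v, ::-1]      # conjugate ray: mirrored detector axis
        opp_bad = bad[opp_v, ::-1]
        replace = bad[v] & ~opp_bad  # only replace with clean conjugates
        out[v, replace] = opp[replace]
    return out
```

Because small metal objects affect only a few views, most affected bins have clean conjugates, which is what makes this approach viable for dental fillings.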
Gil, Sharon; Weinberg, Michael; Shamai, Michal; Ron, Pnina; Harel, Hila; Or-Chen, Keren
2016-01-01
In light of current modifications in the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) diagnostic criteria for posttraumatic stress disorder (PTSD), this study aimed to revalidate well-known PTSD risk factors related to terrorism and war in Israel, namely, proximity to the Gaza Strip, dissociative symptoms, acute stress disorder (ASD) symptoms, and social support. One hundred and sixty Israeli civilians were assessed during the 2014 Israel-Hamas war at 2 time points: 1 week after the beginning of the operation (t1) and 1 month after initial evaluation (t2), using the DSM-5 PTSD Symptom Levels Scale (PSLS; Gil, Weinberg, Or-Chen, & Harel, 2015). A paired t test analysis showed significant reduction in the respondents' posttraumatic stress symptoms (PTSS) 1 month after the initial assessment point. A structural equation model (SEM) showed that higher ASD symptoms at t1 and higher dissociative symptoms at t2 increased the risk for PTSS at t2. Conversely, higher peritraumatic dissociation at t1 decreased the risk for PTSS at t2. Proximity to the Gaza Strip, and social support, failed to demonstrate significant association with PTSS at t2. DSM-5 PTSS 1 month after prolonged traumatic exposure are strongly associated with high ASD symptoms at 1 week as a risk factor; high levels of peritraumatic dissociation at 1 week as a protective factor; and high levels of dissociative symptoms at 1 month as a risk factor. Theoretically and clinically the findings of the study further suggest that ongoing massive terrorism and war cannot be viewed or treated as identical to other traumas. (c) 2016 APA, all rights reserved.
Identifying Priorities for Mental Health Interventions in War-Affected Youth: A Longitudinal Study.
Betancourt, Theresa S; Gilman, Stephen E; Brennan, Robert T; Zahn, Ista; VanderWeele, Tyler J
2015-08-01
War-affected youth often suffer from multiple co-occurring mental health problems. These youth often live in low-resource settings where it may be infeasible to provide mental health services that simultaneously address all of these co-occurring mental health issues. It is therefore important to identify the areas where targeted interventions would do the most good. This analysis uses observational data from 3 waves of a longitudinal study on mental health in a sample of 529 war-affected youth (24.2% female; ages 10-17 at T1, 2002) in Sierra Leone. We regressed 4 mental health outcomes at T3 (2008) on internalizing (depression/anxiety) and externalizing (hostility/aggression) problems and prosocial attitudes/behaviors and community variables at T2 (2004) controlling for demographics, war exposures, and previous mental health scores at T1, allowing us to assess the relative impact of potential mental health intervention targets in shaping mental health outcomes over time. Controlling for baseline covariates at T1 and all other exposures/potential intervention targets at T2, we observed a significant association between internalizing problems at T2 and 3 of the 4 outcomes at T3: internalizing (β = 0.27, 95% confidence interval [CI]: 0.11-0.42), prosocial attitudes (β = -0.20, 95% CI: -0.33 to -0.07) and posttraumatic stress symptoms (β = 0.22, 95% CI: 0.02-0.43). No other potential intervention target had similar substantial effects. Reductions in internalizing may have multiple benefits for other mental health outcomes at a later point in time, even after controlling for confounding variables. Copyright © 2015 by the American Academy of Pediatrics.
The American Home Front: Revolutionary War, Civil War, World War I, World War II
1983-01-01
Contents (excerpt): Acknowledgments; War and Society in America: Some Questions; 1. The American Revolution: The Price of War; A Revolutionary Society at War; The Revolutionary Economy; Mobilizing the Union for War; Civil War and American Society; Organizing the ...
NASA Astrophysics Data System (ADS)
Azarpour, Masoumeh; Enzner, Gerald
2017-12-01
Binaural noise reduction, with applications for instance in hearing aids, has been a very significant challenge. This task relates to the optimal utilization of the available microphone signals for the estimation of the ambient noise characteristics and for the optimal filtering algorithm to separate the desired speech from the noise. The additional requirements of low computational complexity and low latency further complicate the design. A particular challenge results from the desired reconstruction of binaural speech input with spatial cue preservation. The latter essentially diminishes the utility of multiple-input/single-output filter-and-sum techniques such as beamforming. In this paper, we propose a comprehensive and effective signal processing configuration with which most of the aforementioned criteria can be met suitably. This relates especially to the requirement of efficient online adaptive processing for noise estimation and optimal filtering while preserving the binaural cues. Regarding noise estimation, we consider three different architectures: interaural (ITF), cross-relation (CR), and principal-component (PCA) target blocking. An objective comparison with two other noise PSD estimation algorithms demonstrates the superiority of the blocking-based noise estimators, especially the CR-based and ITF-based blocking architectures. Moreover, we present a new noise reduction filter based on minimum mean-square error (MMSE), which belongs to the class of common gain filters, hence being rigorous in terms of spatial cue preservation but also efficient and competitive for the acoustic noise reduction task. A formal real-time subjective listening test procedure is also developed in this paper. The proposed listening test enables a real-time assessment of the proposed computationally efficient noise reduction algorithms in a realistic acoustic environment, e.g., considering time-varying room impulse responses and the Lombard effect. 
The listening test outcome reveals that the signals processed by the blocking-based algorithms are significantly preferred over the noisy signal in terms of instantaneous noise attenuation. Furthermore, the listening test data analysis confirms the conclusions drawn based on the objective evaluation.
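The common-gain idea above, one spectral gain applied identically to both ears so that interaural cues survive, can be sketched as follows. This is an illustrative Wiener-style gain, not the paper's MMSE filter; the function name, the pooled-PSD estimate, and the gain floor are assumptions.

```python
import numpy as np

def common_gain_wiener(left_spec, right_spec, noise_psd, floor=0.1):
    """Apply one Wiener-style gain to both ears so interaural level and
    phase differences (the binaural cues) are preserved exactly."""
    # Pool the noisy PSD across both channels for a single gain estimate
    noisy_psd = 0.5 * (np.abs(left_spec) ** 2 + np.abs(right_spec) ** 2)
    # Wiener-style gain (PSD_y - PSD_n) / PSD_y, floored to limit distortion
    gain = np.clip(1.0 - noise_psd / np.maximum(noisy_psd, 1e-12), floor, 1.0)
    return gain * left_spec, gain * right_spec
```

Because a single real gain multiplies both channels, the left/right ratio in every frequency bin is untouched, which is the spatial-cue-preservation property the paper emphasises.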
Data Reduction of Jittered Infrared Images Using the ORAC Pipeline
NASA Astrophysics Data System (ADS)
Currie, Malcolm; Wright, Gillian; Bridger, Alan; Economou, Frossie
We relate our experiences using the ORAC data reduction pipeline for jittered images of stars and galaxies. The reduction recipes currently combine applications from several Starlink packages with intelligent Perl recipes to cater to UKIRT data. We describe the recipes and some of the algorithms used, and compare the quality of the resultant mosaics and photometry with the existing facilities.
Dennehy, Ellen B; Suppes, Trisha; Rush, A John; Miller, Alexander L; Trivedi, Madhukar H; Crismon, M Lynn; Carmody, Thomas J; Kashner, T Michael
2005-12-01
Despite increasing adoption of clinical practice guidelines in psychiatry, there is little measurement of provider implementation of these recommendations, and the resulting impact on clinical outcomes. The current study describes one effort to measure these relationships in a cohort of public sector out-patients with bipolar disorder. Participants were enrolled in the algorithm intervention of the Texas Medication Algorithm Project (TMAP). Study methods and the adherence scoring algorithm have been described elsewhere. The current paper addresses the relationships between patient characteristics, provider experience with the algorithm, provider adherence, and clinical outcomes. Measurement of provider adherence includes evaluation of visit frequency, medication choice and dosing, and response to patient symptoms. An exploratory composite 'adherence by visit' score was developed for these analyses. A total of 1948 visits from 141 subjects were evaluated using a two-stage declining-effects model. Providers with more experience using the algorithm tended to adhere less to treatment recommendations. Few patient factors significantly impacted provider adherence. Increased adherence to algorithm recommendations was associated with larger decreases in overall psychiatric symptoms and depressive symptoms over time, but did not impact either immediate or long-term reductions in manic symptoms. Greater provider adherence to treatment guideline recommendations was associated with greater reductions in depressive symptoms and overall psychiatric symptoms over time. Additional research is needed to refine measurement and to further clarify these relationships.
Aaby, Peter; Garly, May-Lill; Balé, Carlitos; Martins, Cesario; Jensen, Henrik; Lisse, Ida; Whittle, Hilton
2003-09-01
Previous studies have suggested that standard measles vaccine may reduce mortality by more than the number of deaths thought to be caused by measles infection in areas with high mortality. However, these observations have not been based on randomized trials. During the recent war in Guinea-Bissau, most children fled from the city of Bissau and immunization services in the country broke down for several months. We were performing a trial in which children were randomized at 6 months of age to receive either measles vaccine or inactivated polio vaccine. Because of the war many children did not receive the dose of measles vaccine planned for 9 months of age. We were able to monitor mortality during the war and after. Included in the study were 433 children 6 to 11 months of age. Fifteen children died (3.6%) during the first 3 months of the war before vaccination programs were resumed, 4 of 214 measles-vaccinated children and 11 of 219 children who had received inactivated polio vaccine. The effect of measles vaccine was marked for girls [mortality rate ratio (MR), 0.00; 95% confidence limits, 0.0 to 0.37], whereas there was no difference for boys (MR = 1.02; 95% confidence limits, 0.25 to 3.88). In a combined analysis controlling for factors that differed between the two groups, the MR for measles-vaccinated children was 0.30 (95% confidence limits, 0.08 to 0.87). Prolonging the period of observation to the end of 1998 or including the prewar period did not modify the significant beneficial effect of measles vaccine for girls. Twenty-two of the children in the cohort were reported to have had measles, 8 cases occurring during the 3 months of the war. Exclusion of measles cases in the analysis did not change the results; children who had received measles vaccine had a MR of 0.28 (95% confidence limits, 0.06 to 0.89) during the first 3 months of the war. 
Consistent with previous observational studies, measles vaccination was associated with a reduction in mortality that cannot be explained by the prevention of measles infection. This nonspecific beneficial effect was particularly strong for girls. Further studies are needed to examine the extent of nonspecific effects in settings with high mortality.
Cyber War Game in Temporal Networks
Cho, Jin-Hee; Gao, Jianxi
2016-01-01
In a cyber war game where a network is fully distributed and characterized by resource constraints and high dynamics, attackers or defenders often face a situation that may require optimal strategies to win the game with minimum effort. Given the system goal states of attackers and defenders, we study what strategies attackers or defenders can take to reach their respective system goal state (i.e., winning system state) with minimum resource consumption. However, due to the dynamics of a network caused by a node’s mobility, failure or its resource depletion over time or action(s), this optimization problem becomes NP-complete. We propose two heuristic strategies in a greedy manner based on a node’s two characteristics: resource level and influence based on k-hop reachability. We analyze complexity and optimality of each algorithm compared to optimal solutions for a small-scale static network. Further, we conduct a comprehensive experimental study for a large-scale temporal network to investigate best strategies, given a different environmental setting of network temporality and density. We demonstrate the performance of each strategy under various scenarios of attacker/defender strategies in terms of win probability, resource consumption, and system vulnerability. PMID:26859840
García-Sancho, Miguel
2011-01-01
This paper explores the introduction of professional systems engineers and information management practices into the first centralized DNA sequence database, developed at the European Molecular Biology Laboratory (EMBL) during the 1980s. In so doing, it complements the literature on the emergence of an information discourse after World War II and its subsequent influence in biological research. By examining the careers of the database creators and the computer algorithms they designed, I show that, from the mid-1960s onwards, information in biology gradually shifted from a pervasive metaphor to being embodied in practices and professionals such as those incorporated at the EMBL. I then investigate the reception of these database professionals by the EMBL biological staff, which evolved from initial disregard to necessary collaboration as the relationship between DNA, genes, and proteins turned out to be more complex than expected. The trajectories of the database professionals at the EMBL suggest that the initial subject matter of the historiography of genomics should be the long-standing practices that emerged after World War II and to a large extent originated outside biomedicine and academia. Only after addressing these practices may historians turn to their further disciplinary assemblage in fields such as bioinformatics or biotechnology.
Mirzazadeh, A; Malekinejad, M; Kahn, JG
2018-01-01
Objective Heterogeneity of effect measures in intervention studies undermines the use of evidence to inform policy. Our objective was to develop a comprehensive algorithm to convert all types of effect measures to one standard metric, relative risk reduction (RRR). Study Design and Setting This work was conducted to facilitate synthesis of published intervention effects for our epidemic modeling of the health impact of HIV Testing and Counseling (HTC). We designed and implemented an algorithm to transform varied effect measures to RRR, representing the proportionate reduction in undesirable outcomes. Results Our extraction of 55 HTC studies identified 473 effect measures representing unique combinations of intervention-outcome-population characteristics, using five outcome metrics: pre-post proportion (70.6%), odds ratio (14.0%), mean difference (10.2%), risk ratio (4.4%), and RRR (0.9%). Outcomes were expressed as both desirable (29.5%, e.g., consistent condom use) and undesirable (70.5%, e.g., inconsistent condom use). Using four examples, we demonstrate our algorithm for converting varied effect measures to RRR, and provide the conceptual basis for advantages of RRR over other metrics. Conclusion Our review of the literature suggests that RRR, an easily understood and useful metric to convey risk reduction associated with an intervention, is underutilized by original and review studies. PMID:25726522
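The core conversions behind such an algorithm are short formulas. The sketch below shows three of them: risk ratio to RRR, pre-post proportions to RRR, and odds ratio to RRR via the standard Zhang-Yu approximation given a baseline risk. Function names are illustrative, and the paper's full algorithm (which also handles desirable outcomes and mean differences) is not reproduced.

```python
def rr_to_rrr(rr):
    """RRR from a risk ratio of an undesirable outcome: RRR = 1 - RR."""
    return 1.0 - rr

def or_to_rrr(odds_ratio, baseline_risk):
    """Convert an odds ratio to RRR via the Zhang-Yu approximation
    RR = OR / (1 - p0 + p0 * OR), where p0 is the baseline risk."""
    rr = odds_ratio / (1.0 - baseline_risk + baseline_risk * odds_ratio)
    return 1.0 - rr

def prepost_to_rrr(p_pre, p_post):
    """RRR from pre/post proportions of an undesirable outcome."""
    return 1.0 - p_post / p_pre
```

For example, an undesirable outcome falling from 40% to 30% of participants corresponds to an RRR of 0.25, i.e. a 25% relative reduction.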
Chung, King; Nelson, Lance; Teske, Melissa
2012-09-01
The purpose of this study was to investigate whether a multichannel adaptive directional microphone and a modulation-based noise reduction algorithm could enhance cochlear implant performance in reverberant noise fields. A hearing aid was modified to output electrical signals (ePreprocessor) and a cochlear implant speech processor was modified to receive electrical signals (eProcessor). The ePreprocessor was programmed to flat frequency response and linear amplification. Cochlear implant listeners wore the ePreprocessor-eProcessor system in three reverberant noise fields: 1) one noise source with variable locations; 2) three noise sources with variable locations; and 3) eight evenly spaced noise sources from 0° to 360°. Listeners' speech recognition scores were tested when the ePreprocessor was programmed to omnidirectional microphone (OMNI), omnidirectional microphone plus noise reduction algorithm (OMNI + NR), and adaptive directional microphone plus noise reduction algorithm (ADM + NR). They were also tested with their own cochlear implant speech processor (CI_OMNI) in the three noise fields. Additionally, listeners rated overall sound quality preferences on recordings made in the noise fields. Results indicated that ADM+NR produced the highest speech recognition scores and the most preferable rating in all noise fields. Factors requiring attention in the hearing aid-cochlear implant integration process are discussed. Copyright © 2012 Elsevier B.V. All rights reserved.
Efficient Bit-to-Symbol Likelihood Mappings
NASA Technical Reports Server (NTRS)
Moision, Bruce E.; Nakashima, Michael A.
2010-01-01
This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings that represent a significant portion of the complexity of an error-correction code decoder for high-order constellations. Recent implementation of the algorithm in hardware has yielded an 8-percent reduction in overall area relative to the prior design.
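For context, a straightforward (unoptimized) symbol-to-bit likelihood mapping for a 2^m-point constellation with integer bit labels can be written as below; the efficient hardware algorithm of the innovation is not reproduced. The LSB-first bit labeling is an assumption.

```python
import math

def symbol_to_bit_llrs(symbol_likelihoods, bits_per_symbol):
    """Map symbol likelihoods to per-bit log-likelihood ratios by
    marginalising over constellation points whose integer label has
    that bit equal to 0 versus 1 (bit b is the b-th LSB)."""
    llrs = []
    for b in range(bits_per_symbol):
        p0 = sum(L for s, L in enumerate(symbol_likelihoods) if not (s >> b) & 1)
        p1 = sum(L for s, L in enumerate(symbol_likelihoods) if (s >> b) & 1)
        llrs.append(math.log(p0) - math.log(p1))
    return llrs
```

This brute-force marginalisation costs O(2^m) additions per bit, which is exactly the complexity that an optimized mapping seeks to cut down.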
A General Exponential Framework for Dimensionality Reduction.
Wang, Su-Jing; Yan, Shuicheng; Yang, Jian; Zhou, Chun-Guang; Fu, Xiaolan
2014-02-01
As a general framework, Laplacian embedding, based on a pairwise similarity matrix, infers low dimensional representations from high dimensional data. However, it generally suffers from three issues: 1) algorithmic performance is sensitive to the size of neighbors; 2) the algorithm encounters the well known small sample size (SSS) problem; and 3) the algorithm de-emphasizes small distance pairs. To address these issues, here we propose exponential embedding using matrix exponential and provide a general framework for dimensionality reduction. In the framework, the matrix exponential can be roughly interpreted by the random walk over the feature similarity matrix, and thus is more robust. The positive definite property of matrix exponential deals with the SSS problem. The behavior of the decay function of exponential embedding is more significant in emphasizing small distance pairs. Under this framework, we apply matrix exponential to extend many popular Laplacian embedding algorithms, e.g., locality preserving projections, unsupervised discriminant projections, and marginal fisher analysis. Experiments conducted on the synthesized data, UCI, and the Georgia Tech face database show that the proposed new framework can well address the issues mentioned above.
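A minimal sketch of the exponential-embedding idea follows: build a Gaussian similarity matrix, take its matrix exponential (always positive definite, which sidesteps the small-sample-size problem), and embed with the leading eigenvectors. The function name, kernel choice, and parameters are illustrative; the supervised extensions (LPP, UDP, MFA) are omitted.

```python
import numpy as np

def exponential_embedding(X, n_components=2, sigma=1.0):
    """Unsupervised sketch of exponential embedding: exponentiate a
    pairwise similarity matrix and use its top eigenvectors."""
    # Squared Euclidean distances and Gaussian similarity matrix
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    S = np.exp(-d2 / (2 * sigma ** 2))
    # Matrix exponential of the symmetric S via eigendecomposition:
    # expm(S) = V diag(exp(w)) V^T, guaranteed positive definite
    w, V = np.linalg.eigh(S)
    expS = (V * np.exp(w)) @ V.T
    # Leading eigenvectors of expm(S) give the low-dimensional embedding
    w2, V2 = np.linalg.eigh(expS)
    return V2[:, -n_components:]
```

Since exp(w) > 0 for any real eigenvalue w, the exponentiated matrix is positive definite even when the raw similarity matrix is rank-deficient, which is the SSS remedy the abstract describes.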
NASA Technical Reports Server (NTRS)
Taylor, Robert P.; Luck, Rogelio
1995-01-01
The view factors which are used in diffuse-gray radiation enclosure calculations are often computed by approximate numerical integrations. These approximately calculated view factors will usually not satisfy the important physical constraints of reciprocity and closure. In this paper several view-factor rectification algorithms are reviewed and a rectification algorithm based on a least-squares numerical filtering scheme is proposed with both weighted and unweighted classes. A Monte-Carlo investigation is undertaken to study the propagation of view-factor and surface-area uncertainties into the heat transfer results of the diffuse-gray enclosure calculations. It is found that the weighted least-squares algorithm is vastly superior to the other rectification schemes for the reduction of the heat-flux sensitivities to view-factor uncertainties. In a sample problem, which has proven to be very sensitive to uncertainties in view factor, the heat transfer calculations with weighted least-squares rectified view factors are very good with an original view-factor matrix computed to only one-digit accuracy. All of the algorithms had roughly equivalent effects on the reduction in sensitivity to area uncertainty in this case study.
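The two constraints at issue, reciprocity (A_i F_ij = A_j F_ji) and closure (each row of F sums to 1), can be illustrated with a simple rectifier. This is a hypothetical stand-in, not the paper's weighted least-squares filter: it symmetrises the area-weighted matrix and then applies a symmetric Sinkhorn-style scaling.

```python
import numpy as np

def rectify_view_factors(F, areas, iters=500):
    """Rectify an approximate view-factor matrix so it satisfies
    reciprocity and closure (illustrative scheme, not the paper's)."""
    A = np.asarray(areas, dtype=float)
    # Reciprocity: symmetrise G_ij = A_i * F_ij
    G = 0.5 * (A[:, None] * F + (A[:, None] * F).T)
    # Closure: find d > 0 with diag(d) G diag(d) having row sums A;
    # the symmetric scaling keeps G (hence reciprocity) symmetric
    d = np.ones_like(A)
    for _ in range(iters):
        d = np.sqrt(d * A / (G @ d))
    G = d[:, None] * G * d[None, :]
    return G / A[:, None]   # back to view factors F_ij = G_ij / A_i
```

At the fixed point d_i (G d)_i = A_i, so the scaled rows sum to the surface areas, which after dividing by A_i gives rows of F summing to one while G stays symmetric.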
Data Reduction Algorithm Using Nonnegative Matrix Factorization with Nonlinear Constraints
NASA Astrophysics Data System (ADS)
Sembiring, Pasukat
2017-12-01
Processing of data with very large dimensions has been a hot topic in recent decades. Various techniques have been proposed to extract the desired information or structure. Non-Negative Matrix Factorization (NMF), based on non-negative data, has become one of the popular methods for reducing dimensions. The main strength of this method is non-negativity: the object is modeled as a combination of basic non-negative parts, which provides a physical interpretation of the object's construction. NMF is a dimension reduction method that has been used widely for numerous applications including computer vision, text mining, pattern recognition, and bioinformatics. The mathematical formulation of NMF is not a convex optimization problem, and various types of algorithms have been proposed to solve it. The Alternating Nonnegative Least Squares (ANLS) framework is a block-coordinate formulation that has been proven reliable theoretically and efficient empirically. This paper proposes a new algorithm to solve the NMF problem based on the ANLS framework. The algorithm inherits the convergence property of the ANLS framework for nonlinearly constrained NMF formulations.
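As a point of reference for the factorization being solved, the classic Lee-Seung multiplicative-update NMF can be sketched as below. This is a common baseline, not the paper's ANLS-based algorithm; the function name and stopping rule are assumptions.

```python
import numpy as np

def nmf_multiplicative(V, rank, iters=500, eps=1e-9):
    """Baseline NMF via Lee-Seung multiplicative updates: factorise a
    nonnegative matrix V as V ≈ W H with W >= 0 and H >= 0."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(iters):
        # Elementwise updates; ratios of nonnegative terms keep W, H >= 0
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Each update is a ratio of nonnegative quantities, so nonnegativity is maintained without explicit projection; ANLS methods instead solve a constrained least-squares subproblem for each factor in turn.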
Vectorization of transport and diffusion computations on the CDC Cyber 205
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abu-Shumays, I.K.
1986-01-01
The development and testing of alternative numerical methods and computational algorithms specifically designed for the vectorization of transport and diffusion computations on a Control Data Corporation (CDC) Cyber 205 vector computer are described. Two solution methods for the discrete ordinates approximation to the transport equation are summarized and compared. Factors of 4 to 7 reduction in run times for certain large transport problems were achieved on a Cyber 205 as compared with run times on a CDC-7600. The solution of tridiagonal systems of linear equations, central to several efficient numerical methods for multidimensional diffusion computations and essential for fluid flow and other physics and engineering problems, is also dealt with. Among the methods tested, a combined odd-even cyclic reduction and modified Cholesky factorization algorithm for solving linear symmetric positive definite tridiagonal systems is found to be the most effective for these systems on a Cyber 205. For large tridiagonal systems, computation with this algorithm is an order of magnitude faster on a Cyber 205 than computation with the best algorithm for tridiagonal systems on a CDC-7600.
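One half of the combined scheme, the modified Cholesky (LDL^T) factorization for symmetric positive definite tridiagonal systems, can be sketched sequentially as follows; the vectorized odd-even cyclic reduction stage is omitted, and the function name is illustrative.

```python
import numpy as np

def ldlt_tridiag_solve(diag, off, rhs):
    """Solve A x = rhs for a symmetric positive definite tridiagonal A
    via LDL^T; `off` holds the n-1 sub/super-diagonal entries."""
    n = len(diag)
    d = np.empty(n)        # diagonal of D
    l = np.empty(n)        # subdiagonal of unit L (l[0] unused)
    d[0] = diag[0]
    for i in range(1, n):
        l[i] = off[i - 1] / d[i - 1]
        d[i] = diag[i] - l[i] * off[i - 1]
    y = np.empty(n)        # forward substitution: L y = rhs
    y[0] = rhs[0]
    for i in range(1, n):
        y[i] = rhs[i] - l[i] * y[i - 1]
    x = y / d              # diagonal solve: D z = y
    for i in range(n - 2, -1, -1):   # back substitution: L^T x = z
        x[i] -= l[i + 1] * x[i + 1]
    return x
```

The recurrences above are inherently sequential; cyclic reduction restructures the elimination so that the odd-indexed unknowns can be eliminated in parallel, which is what made the combined algorithm effective on the Cyber 205's vector hardware.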
A regularized approach for geodesic-based semisupervised multimanifold learning.
Fan, Mingyu; Zhang, Xiaoqin; Lin, Zhouchen; Zhang, Zhongfei; Bao, Hujun
2014-05-01
Geodesic distance, as an essential measurement for data dissimilarity, has been successfully used in manifold learning. However, most geodesic distance-based manifold learning algorithms have two limitations when applied to classification: 1) class information is rarely used in computing the geodesic distances between data points on manifolds and 2) little attention has been paid to building an explicit dimension reduction mapping for extracting the discriminative information hidden in the geodesic distances. In this paper, we regard geodesic distance as a kind of kernel, which maps data from linearly inseparable space to linear separable distance space. In doing this, a new semisupervised manifold learning algorithm, namely regularized geodesic feature learning algorithm, is proposed. The method consists of three techniques: a semisupervised graph construction method, replacement of original data points with feature vectors which are built by geodesic distances, and a new semisupervised dimension reduction method for feature vectors. Experiments on the MNIST, USPS handwritten digit data sets, MIT CBCL face versus nonface data set, and an intelligent traffic data set show the effectiveness of the proposed algorithm.
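The geodesic-distance kernel underlying such methods is usually estimated Isomap-style: connect each point to its nearest neighbours by Euclidean distance, then take shortest paths through the graph. A minimal unsupervised sketch (not the paper's semisupervised construction) using only numpy and the standard library:

```python
import heapq
import numpy as np

def geodesic_distances(X, k=5):
    """Estimate pairwise geodesic distances: build a symmetrised
    Euclidean k-NN graph, then run Dijkstra from every point."""
    n = len(X)
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    nbrs = np.argsort(d, axis=1)[:, 1:k + 1]   # k nearest, skipping self
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in nbrs[i]:
            adj[i].append((j, d[i, j]))        # add edge in both directions
            adj[j].append((i, d[i, j]))
    G = np.full((n, n), np.inf)
    for s in range(n):                          # Dijkstra from each source
        G[s, s] = 0.0
        heap = [(0.0, s)]
        while heap:
            dist, u = heapq.heappop(heap)
            if dist > G[s, u]:
                continue                        # stale heap entry
            for v, w in adj[u]:
                if dist + w < G[s, v]:
                    G[s, v] = dist + w
                    heapq.heappush(heap, (G[s, v], v))
    return G
```

On a curved manifold these path lengths approximate intrinsic distances; the paper's contribution is to inject class information into the graph construction and to learn an explicit mapping from the resulting distance features.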
NASA Astrophysics Data System (ADS)
Yuldashev, M. N.; Vlasov, A. I.; Novikov, A. N.
2018-05-01
This paper focuses on the development of an energy-efficient algorithm for classification of states of a wireless sensor network using machine learning methods. The proposed algorithm reduces energy consumption by: 1) elimination of monitoring of parameters that do not affect the state of the sensor network, 2) reduction of communication sessions over the network (the data are transmitted only if their values can affect the state of the sensor network). The studies of the proposed algorithm have shown that at classification accuracy close to 100%, the number of communication sessions can be reduced by 80%.
Quantum speedup of the traveling-salesman problem for bounded-degree graphs
NASA Astrophysics Data System (ADS)
Moylett, Dominic J.; Linden, Noah; Montanaro, Ashley
2017-03-01
The traveling-salesman problem is one of the most famous problems in graph theory. However, little is currently known about the extent to which quantum computers could speed up algorithms for the problem. In this paper, we prove a quadratic quantum speedup when the degree of each vertex is at most 3 by applying a quantum backtracking algorithm to a classical algorithm by Xiao and Nagamochi. We then use similar techniques to accelerate a classical algorithm for when the degree of each vertex is at most 4, before speeding up higher-degree graphs via reductions to these instances.
Test Generation Algorithm for Fault Detection of Analog Circuits Based on Extreme Learning Machine
Zhou, Jingyu; Tian, Shulin; Yang, Chenglin; Ren, Xuelong
2014-01-01
This paper proposes a novel test generation algorithm based on extreme learning machine (ELM); the algorithm is cost-effective and low-risk for an analog device under test (DUT). This method uses test patterns derived from the test generation algorithm to stimulate the DUT, and then samples output responses of the DUT for fault classification and detection. The novel ELM-based test generation algorithm proposed in this paper contains three main innovations. Firstly, the algorithm saves time by classifying the response space with ELM. Secondly, the algorithm avoids loss of test precision when the number of impulse-response samples is reduced. Thirdly, a new test signal generation process and a test structure for the test generation algorithm are presented, both of which are very simple. Finally, these improvements are confirmed in experiments. PMID:25610458
NASA Astrophysics Data System (ADS)
Zimoń, M. J.; Prosser, R.; Emerson, D. R.; Borg, M. K.; Bray, D. J.; Grinberg, L.; Reese, J. M.
2016-11-01
Filtering of particle-based simulation data can lead to reduced computational costs and enable more efficient information transfer in multi-scale modelling. This paper compares the effectiveness of various signal processing methods to reduce numerical noise and capture the structures of nano-flow systems. In addition, a novel combination of these algorithms is introduced, showing the potential of hybrid strategies to improve further the de-noising performance for time-dependent measurements. The methods were tested on velocity and density fields, obtained from simulations performed with molecular dynamics and dissipative particle dynamics. Comparisons between the algorithms are given in terms of performance, quality of the results and sensitivity to the choice of input parameters. The results provide useful insights on strategies for the analysis of particle-based data and the reduction of computational costs in obtaining ensemble solutions.
SAT Encoding of Unification in EL
NASA Astrophysics Data System (ADS)
Baader, Franz; Morawska, Barbara
Unification in Description Logics has been proposed as a novel inference service that can, for example, be used to detect redundancies in ontologies. In a recent paper, we have shown that unification in EL is NP-complete, and thus of a complexity that is considerably lower than in other Description Logics of comparably restricted expressive power. In this paper, we introduce a new NP-algorithm for solving unification problems in EL, which is based on a reduction to satisfiability in propositional logic (SAT). The advantage of this new algorithm is, on the one hand, that it allows us to employ highly optimized state-of-the-art SAT solvers when implementing an EL-unification algorithm. On the other hand, this reduction provides us with a proof of the fact that EL-unification is in NP that is much simpler than the one given in our previous paper on EL-unification.
Knowledge-Based Scheduling of Arrival Aircraft in the Terminal Area
NASA Technical Reports Server (NTRS)
Krzeczowski, K. J.; Davis, T.; Erzberger, H.; Lev-Ram, Israel; Bergh, Christopher P.
1995-01-01
A knowledge-based method for scheduling arrival aircraft in the terminal area has been implemented and tested in real-time simulation. The scheduling system automatically sequences, assigns landing times, and assigns runways to arrival aircraft by utilizing continuous updates of aircraft radar data and controller inputs. The scheduling algorithm is driven by a knowledge base which was obtained in over two thousand hours of controller-in-the-loop real-time simulation. The knowledge base contains a series of hierarchical 'rules' and decision logic that examines both performance criteria, such as delay reductions, as well as workload reduction criteria, such as conflict avoidance. The objective of the algorithm is to devise an efficient plan to land the aircraft in a manner acceptable to the air traffic controllers. This paper describes the scheduling algorithms, gives examples of their use, and presents data regarding their potential benefits to the air traffic system.
Cut set-based risk and reliability analysis for arbitrarily interconnected networks
Wyss, Gregory D.
2000-01-01
Method for computing all-terminal reliability for arbitrarily interconnected networks such as the United States public switched telephone network. The method includes an efficient search algorithm to generate minimal cut sets for nonhierarchical networks directly from the network connectivity diagram. Efficiency of the search algorithm stems in part from its basis on only link failures. The method also includes a novel quantification scheme that likewise reduces computational effort associated with assessing network reliability based on traditional risk importance measures. Vast reductions in computational effort are realized since combinatorial expansion and subsequent Boolean reduction steps are eliminated through analysis of network segmentations using a technique of assuming node failures to occur on only one side of a break in the network, and repeating the technique for all minimal cut sets generated with the search algorithm. The method functions equally well for planar and non-planar networks.
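For small networks the minimal cut sets the method targets can be enumerated by brute force, which makes the concept concrete even though it scales exponentially (the patent's search algorithm exists precisely to avoid this exhaustive scan). A sketch for two-terminal reliability, with illustrative names:

```python
from itertools import combinations

def minimal_cut_sets(links, src, dst):
    """Enumerate minimal sets of link failures that disconnect src
    from dst; links are undirected (u, v) pairs. Brute force only."""
    def connected(alive):
        seen, stack = {src}, [src]
        while stack:                      # DFS over surviving links
            u = stack.pop()
            for a, b in alive:
                v = b if a == u else a if b == u else None
                if v is not None and v not in seen:
                    seen.add(v)
                    stack.append(v)
        return dst in seen

    cuts = []
    for r in range(1, len(links) + 1):    # smallest cut sets first
        for combo in combinations(links, r):
            alive = [l for l in links if l not in combo]
            if not connected(alive):
                # Keep only minimal cuts: skip supersets of known cuts
                if not any(set(c) <= set(combo) for c in cuts):
                    cuts.append(combo)
    return cuts
```

For the triangle a-b-c, the minimal cut sets between a and c are the two 2-link combinations that sever both paths; removing any single link leaves an alternative route.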
Knowledge-based scheduling of arrival aircraft
NASA Technical Reports Server (NTRS)
Krzeczowski, K.; Davis, T.; Erzberger, H.; Lev-Ram, I.; Bergh, C.
1995-01-01
A knowledge-based method for scheduling arrival aircraft in the terminal area has been implemented and tested in real-time simulation. The scheduling system automatically sequences, assigns landing times, and assigns runways to arrival aircraft by utilizing continuous updates of aircraft radar data and controller inputs. The scheduling algorithm is driven by a knowledge base which was obtained in over two thousand hours of controller-in-the-loop real-time simulation. The knowledge base contains a series of hierarchical 'rules' and decision logic that examines both performance criteria, such as delay reduction, as well as workload reduction criteria, such as conflict avoidance. The objective of the algorithm is to devise an efficient plan to land the aircraft in a manner acceptable to the air traffic controllers. This paper will describe the scheduling algorithms, give examples of their use, and present data regarding their potential benefits to the air traffic system.
Indian Soldiers Need Eye Protection.
Jha, Kirti Nath
2017-02-01
Combat-related eye injuries entail enormous financial, social and psychological cost. Military Combat Eye Protection (MCEP) decreases both the incidence and severity of eye injuries. Experts have recognised the need for MCEP for Indian soldiers. We aim to review the combat-related eye injuries and combat eye protection among the Indian soldiers. Global practices of MCEP are also reviewed. We also aim to offer our recommendations for Indian soldiers. We carried out Medline search for combat-related eye injuries and MCEP and separately searched for eye injuries among Indian soldiers during war and other operations. We present the findings as results. Recommendations are based on the opinions of the experts. Combat-related eye injuries increased from 3% of injured in the 1965 Indo-Pakistan War to 4.8% in 1971 war. During peace-keeping operations in Sri Lanka (1987-89) eye injuries increased to 10.5% of the injured. Statistics on eye injuries during counterinsurgency operations are not available. MCEP have shown reduction in eye injuries, and thus MCEP forms a part of personal equipment of the soldiers in developed countries. Indian soldiers do not have provision of MCEP. Combat-related eye injuries among Indian Army soldiers have been increasing. Data on eye injuries during counterinsurgency operations are not available. Indian soldiers do not have provision of MCEP. Provision of MCEP is therefore desirable. Awareness program among the commanders and the soldiers shall result in attitudinal changes and increased compliance.
Indian Soldiers Need Eye Protection
2017-01-01
Combat-related eye injuries entail enormous financial, social and psychological cost. Military Combat Eye Protection (MCEP) decreases both the incidence and severity of eye injuries. Experts have recognised the need for MCEP for Indian soldiers. We aim to review the combat-related eye injuries and combat eye protection among the Indian soldiers. Global practices of MCEP are also reviewed. We also aim to offer our recommendations for Indian soldiers. We carried out Medline search for combat-related eye injuries and MCEP and separately searched for eye injuries among Indian soldiers during war and other operations. We present the findings as results. Recommendations are based on the opinions of the experts. Combat-related eye injuries increased from 3% of injured in the 1965 Indo-Pakistan War to 4.8% in 1971 war. During peace-keeping operations in Sri Lanka (1987-89) eye injuries increased to 10.5% of the injured. Statistics on eye injuries during counterinsurgency operations are not available. MCEP have shown reduction in eye injuries, and thus MCEP forms a part of personal equipment of the soldiers in developed countries. Indian soldiers do not have provision of MCEP. Combat-related eye injuries among Indian Army soldiers have been increasing. Data on eye injuries during counterinsurgency operations are not available. Indian soldiers do not have provision of MCEP. Provision of MCEP is therefore desirable. Awareness program among the commanders and the soldiers shall result in attitudinal changes and increased compliance. PMID:28384904
2013-03-01
...contemporary politics of Egypt, Pakistan, and Turkey, use a slightly modified version of Stepan's 11 prerogatives. While dropping three of them, they...relations emerging democracies have to overcome. For example, this study may be useful to civilian and military elites in the United States, due to the
2002-05-01
some officers continued to search for better solutions. In 1994, Colonel Galen B. Jackman, the Director of the Infantry School's Combined Arms and...to an analysis of the changing operational environment and the US Army's current trend toward force reduction. Colonel Jackman believed that his...recommended organization would return to the infantry rifle squad the capability to conduct fire and maneuver. Colonel Jackman advocated smaller fire
1974-12-01
military expenditures • Attitude toward increased taxation or reduction in federal budget expenditures • Willingness to experience governmental controls...if allocation, rationing, wage and price controls, and increased taxation are not resorted to, as was the case in the early years of the Vietnam War... • Financing through taxation would allow non-defense spending to continue unaltered.
Counterinsurgency Scorecard: Afghanistan in Early 2013 Relative to Insurgencies Since World War II
2013-01-01
...are always in the pack: tangible support reduction, commitment and motivation, and flexibility and adaptability. • Every insurgency is unique, but...win Guatemala 1960–1996 8 –4 4 COIN win Tibet 1956–1974 7 –3 4 COIN win Sri Lanka 1976–2009 6 –1 5 COIN win Mozambique (Mozambican National
2017-03-01
...rose. The end of the Cold War along with the rise of Philippine nationalism, which culminated in the 1986 People Power Revolution to ouster Marcos...http://www.nytimes.com/1991/12/28/world/philippines-orders-us-to-leave-strategic-navy-base-at-subic-bay.html?pagewanted=all. 4 Frank Cibulka, "The
Winning the War on Drugs in Mexico? Toward an Integrated Approach to the Illegal Drug Trade
2009-12-01
the beginning of the twentieth century when prohibition of the opium trade started. Since then, the social harm of the illegal drug trade in all...its forms has been constantly increasing. Today, the most obvious example of the social harm of the illegal drug trade in Mexico is drug-related crime...reduction approach that has proved ineffective both in Mexico and around the world over the last century because it is not aimed at the social roots of
Historical Review of Astro-Geodetic Observations in Serbia
NASA Astrophysics Data System (ADS)
Ogrizovic, V.; Delcev, S.; Vasilic, V.; Gucevic, J.
2008-10-01
Astro-geodetic determinations of vertical deflections in Serbia began during the first years of the 20th century. The first field works were led by S. Bošković. After the Second World War, the Military Geographic Institute, the Department of Geodesy of the Faculty of Civil Engineering, and the Federal Geodetic Directorate continued the determinations, needed for reductions of terrestrial geodetic measurements and for astro-geodetic geoid determination. In recent years, improvements of the astro-geodetic methods have been carried out in the area of implementing modern measurement equipment and technologies.
Runtime support for parallelizing data mining algorithms
NASA Astrophysics Data System (ADS)
Jin, Ruoming; Agrawal, Gagan
2002-03-01
With recent technological advances, shared memory parallel machines have become more scalable, and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed starting from a common specification of the algorithm.
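As a rough illustration of the full-replication technique named in the abstract, the sketch below uses plain Python threads rather than the authors' shared-memory runtime; the class and function names are hypothetical. Each thread accumulates into a private copy of a reduction object, and the copies are merged at the end, so no locking is needed during the parallel phase.

```python
import threading

class ReductionObject:
    """Accumulator for a k-means-style reduction: per-cluster sums and counts."""
    def __init__(self, k):
        self.sums = [0.0] * k
        self.counts = [0] * k

    def accumulate(self, cluster, value):
        self.sums[cluster] += value
        self.counts[cluster] += 1

    def merge(self, other):
        # Combine another thread's private accumulator into this one.
        for i in range(len(self.sums)):
            self.sums[i] += other.sums[i]
            self.counts[i] += other.counts[i]

def parallel_reduce(data, assign, k, n_threads=4):
    """Full replication: each thread updates a private copy, merged at the end."""
    locals_ = [ReductionObject(k) for _ in range(n_threads)]

    def worker(tid):
        # Strided partition of the input among threads.
        for idx in range(tid, len(data), n_threads):
            x = data[idx]
            locals_[tid].accumulate(assign(x), x)

    threads = [threading.Thread(target=worker, args=(t,)) for t in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    result = ReductionObject(k)
    for r in locals_:
        result.merge(r)
    return result
```

The locking variants the paper lists would instead share one accumulator and trade the merge step for synchronization on each update.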
Abadir, Nadin; Schmidt, Maria; Laube, Guido F; Weitz, Marcus
2017-09-01
The objective of the study was the development of an abridged risk-stratified imaging algorithm for the management of children with unilateral ureteropelvic junction obstruction (UPJO). Data on timing, frequency and duration of diagnostic imaging in children with unilateral UPJO were extracted retrospectively. Based on these findings, an abridged imaging algorithm was developed without changing the intended management by the clinicians and the outcome of the individual patient. The potential reduction of imaging studies was analysed and stratified by risk and management groups. The reduction in imaging studies, seen for ultrasound (US) and functional imaging (FI), was 45% each. On average, this is equivalent to 3 fewer US and 1 fewer FI study for every patient within the study period. The change was more pronounced in the low-risk groups. Progression of UPJO never occurred after 2 years of age and all secondary surgeries were carried out until the age of 3. Although our findings need to be validated by further prospective research, the developed imaging algorithm represents a risk-stratified approach toward fewer imaging studies in children with unilateral UPJO, and a follow-up beyond 3 years of age should be considered only in selected cases at the discretion of the clinician. What is Known: • ultrasound and functional imaging represent an integral part of therapeutic decision-making in children with unilateral ureteropelvic junction obstruction • imaging studies cannot accurately assess which patients are in need of surgical intervention, therefore close, serial imaging is preferred What is New: • a new, risk-stratified imaging algorithm was developed for the first 3 years of life • applying this algorithm could lead to a considerable reduction of imaging studies, and also the associated risks and health-care costs.
Discrete-time model reduction in limited frequency ranges
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Juang, Jer-Nan; Longman, Richard W.
1991-01-01
A mathematical formulation for model reduction of discrete-time systems such that the reduced-order model represents the system in a particular frequency range is discussed. The algorithm transforms the full-order system into balanced coordinates using frequency-weighted discrete controllability and observability grammians. In this form, a criterion is derived to guide truncation of states based on their contribution to the frequency range of interest. Minimization of the criterion is accomplished without need for numerical optimization. Balancing requires the computation of discrete frequency-weighted grammians. Closed-form solutions for the computation of frequency-weighted grammians are developed. Numerical examples are discussed to demonstrate the algorithm.
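The frequency-weighted procedure builds on standard balanced truncation. Below is a minimal sketch of the plain (unweighted) discrete-time case, assuming SciPy's discrete Lyapunov solver; the paper's method would substitute frequency-weighted grammians for the ordinary ones computed here.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def balanced_truncation(A, B, C, r):
    """Unweighted discrete-time balanced truncation to order r."""
    # Grammians: A Wc A' - Wc + B B' = 0 and A' Wo A - Wo + C' C = 0
    Wc = solve_discrete_lyapunov(A, B @ B.T)
    Wo = solve_discrete_lyapunov(A.T, C.T @ C)
    Lc = np.linalg.cholesky(Wc)            # lower-triangular factors
    Lo = np.linalg.cholesky(Wo)
    U, s, Vt = np.linalg.svd(Lo.T @ Lc)    # s = Hankel singular values
    S = np.diag(s[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ S                  # truncated balancing transformation
    Ti = S @ U[:, :r].T @ Lo.T             # its left inverse
    return Ti @ A @ T, Ti @ B, C @ T, s
```

States with small Hankel singular values contribute little over all frequencies; the paper's criterion instead ranks their contribution within the frequency band of interest.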
NASA Astrophysics Data System (ADS)
Gutowski, Marek W.
1992-12-01
Presented is a novel, heuristic algorithm, based on fuzzy set theory, allowing for significant off-line data reduction. Given equidistant data, the algorithm discards some points while retaining others with their original values. The fraction of original data points retained is typically 1/6 of the initial value. The reduced data set preserves all the essential features of the input curve. It is possible to reconstruct the original information to a high degree of precision by means of natural cubic splines, rational cubic splines or even linear interpolation. Main fields of application should be non-linear data fitting (substantial savings in CPU time) and graphics (storage space savings).
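The paper's fuzzy-set criterion is not reproduced here, but the general idea of keeping only points that interpolation cannot restore can be sketched with a simple vertical-deviation thinning rule (an assumption for illustration, not the published algorithm):

```python
import numpy as np

def thin_points(x, y, tol):
    """Keep a subset of points such that linear interpolation through the
    kept points reproduces the originals to within tol (recursive
    split on the worst vertical deviation from the current chord)."""
    keep = np.zeros(len(x), dtype=bool)
    keep[0] = keep[-1] = True

    def split(i, j):
        if j <= i + 1:
            return
        # vertical deviation of interior points from the chord i..j
        interp = y[i] + (y[j] - y[i]) * (x[i + 1:j] - x[i]) / (x[j] - x[i])
        dev = np.abs(y[i + 1:j] - interp)
        m = int(np.argmax(dev))
        if dev[m] > tol:
            k = i + 1 + m
            keep[k] = True
            split(i, k)
            split(k, j)

    split(0, len(x) - 1)
    return keep
```

Because the recursion only stops when every discarded point lies within `tol` of the chord actually used for reconstruction, linear interpolation over the kept points is guaranteed to meet the tolerance.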
Mennecke, Angelika; Svergun, Stanislav; Scholz, Bernhard; Royalty, Kevin; Dörfler, Arnd; Struffert, Tobias
2017-01-01
Metal artefacts can impair accurate diagnosis of haemorrhage using flat detector CT (FD-CT), especially after aneurysm coiling. Within this work we evaluate a prototype metal artefact reduction algorithm by comparison of the artefact-reduced and the non-artefact-reduced FD-CT images to pre-treatment FD-CT and multi-slice CT images. Twenty-five patients with acute aneurysmal subarachnoid haemorrhage (SAH) were selected retrospectively. FD-CT and multi-slice CT before endovascular treatment as well as FD-CT data sets after treatment were available for all patients. The algorithm was applied to post-treatment FD-CT. The effect of the algorithm was evaluated utilizing the pre-post concordance of a modified Fisher score, a subjective image quality assessment, the range of the Hounsfield units within three ROIs, and the pre-post slice-wise Pearson correlation. The pre-post concordance of the modified Fisher score, the subjective image quality, and the pre-post correlation of the ranges of the Hounsfield units were significantly higher for artefact-reduced than for non-artefact-reduced images. Within the metal-affected slices, the pre-post slice-wise Pearson correlation coefficient was higher for artefact-reduced than for non-artefact-reduced images. The overall diagnostic quality of the artefact-reduced images was improved and reached the level of the pre-interventional FD-CT images. The metal-unaffected parts of the image were not modified. • After coiling subarachnoid haemorrhage, metal artefacts seriously reduce FD-CT image quality. • This new metal artefact reduction algorithm is feasible for flat-detector CT. • After coiling, MAR is necessary for diagnostic quality of affected slices. • Slice-wise Pearson correlation is introduced to evaluate improvement of MAR in future studies. • Metal-unaffected parts of image are not modified by this MAR algorithm.
DOT National Transportation Integrated Search
1976-09-01
Software used for the reduction and analysis of the multipath prober, modem evaluation (voice, digital data, and ranging), and antenna evaluation data acquired during the ATS-6 field test program is described. Multipath algorithms include reformattin...
Accelerating k-NN Algorithm with Hybrid MPI and OpenSHMEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Jian; Hamidouche, Khaled; Zheng, Jie
2015-08-05
Machine Learning algorithms are benefiting from the continuous improvement of programming models, including MPI, MapReduce and PGAS. The k-Nearest Neighbors (k-NN) algorithm is a widely used machine learning algorithm, applied to supervised learning tasks such as classification. Several parallel implementations of k-NN have been proposed in the literature and practice. However, on high-performance computing systems with high-speed interconnects, it is important to further accelerate existing designs of the k-NN algorithm by taking advantage of scalable programming models. To improve the performance of k-NN in a large-scale environment with an InfiniBand network, this paper proposes several alternative hybrid MPI+OpenSHMEM designs and performs a systematic evaluation and analysis on typical workloads. The hybrid designs leverage one-sided memory access to better overlap communication with computation than the existing pure MPI design, and propose better schemes for efficient buffer management. The implementation based on the k-NN program from MaTEx with MVAPICH2-X (Unified MPI+PGAS Communication Runtime over InfiniBand) shows up to 9.0% time reduction for training the KDD Cup 2010 workload over 512 cores, and 27.6% time reduction for a small workload with balanced communication and computation. Experiments of running with a varied number of cores show that our design can maintain good scalability.
NASA Technical Reports Server (NTRS)
Savage, M.; Mackulin, M. J.; Coe, H. H.; Coy, J. J.
1991-01-01
Optimization procedures allow one to design a spur gear reduction for maximum life and other end use criteria. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial guess values. The optimization algorithm is described, and the models for gear life and performance are presented. The algorithm is compact and has been programmed for execution on a desk top computer. Two examples are presented to illustrate the method and its application.
Angular-contact ball-bearing internal load estimation algorithm using runtime adaptive relaxation
NASA Astrophysics Data System (ADS)
Medina, H.; Mutu, R.
2017-07-01
An algorithm to estimate internal loads for single-row angular contact ball bearings due to externally applied thrust loads and high-operating speeds is presented. A new runtime adaptive relaxation procedure and blending function is proposed which ensures algorithm stability whilst also reducing the number of iterations needed to reach convergence, leading to an average reduction in computation time in excess of approximately 80%. The model is validated based on a 218 angular contact bearing and shows excellent agreement compared to published results.
The Correlation Fractal Dimension of Complex Networks
NASA Astrophysics Data System (ADS)
Wang, Xingyuan; Liu, Zhenzhen; Wang, Mogei
2013-05-01
The fractality of complex networks is studied by estimating the correlation dimensions of the networks. Compared with previous algorithms for estimating the box dimension, our algorithm achieves a significant reduction in time complexity. Experiments on four benchmark cases, namely the Escherichia coli (E. coli) metabolic network, the Homo sapiens protein interaction network (H. sapiens PIN), the Saccharomyces cerevisiae protein interaction network (S. cerevisiae PIN) and the World Wide Web (WWW), demonstrate the validity of our algorithm.
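A correlation-dimension estimate of the kind described can be sketched by counting node pairs within shortest-path distance r and fitting the slope of log C(r) versus log r. The sketch below is a generic brute-force illustration, not the authors' reduced-complexity algorithm:

```python
from collections import deque
import math

def bfs_distances(adj, src):
    """Hop distances from src via breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def correlation_dimension(adj, radii):
    """Log-log slope of C(r), the fraction of node pairs within
    shortest-path distance r (the correlation integral)."""
    nodes = list(adj)
    n = len(nodes)
    dists = []
    for u in nodes:
        d = bfs_distances(adj, u)
        dists.extend(d[v] for v in nodes if v != u and v in d)
    counts = [sum(1 for d0 in dists if d0 <= r) / (n * (n - 1)) for r in radii]
    # least-squares slope in log-log coordinates
    xs = [math.log(r) for r in radii]
    ys = [math.log(c) for c in counts]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))
```

On a ring graph the correlation integral grows linearly with r, so the estimated dimension is close to 1, as expected for a one-dimensional structure.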
Algorithm comparison for schedule optimization in MR fingerprinting.
Cohen, Ouri; Rosen, Matthew S
2017-09-01
In MR Fingerprinting, the flip angles and repetition times are chosen according to a pseudorandom schedule. In previous work, we have shown that maximizing the discrimination between different tissue types by optimizing the acquisition schedule allows reductions in the number of measurements required. The ideal optimization algorithm for this application remains unknown, however. In this work we examine several different optimization algorithms to determine the one best suited for optimizing MR Fingerprinting acquisition schedules. Copyright © 2017 Elsevier Inc. All rights reserved.
Amone-P’Olak, Kennedy; Otim, Balaam Nyeko; Opio, George; Ovuga, Emilio; Meiser-Stedman, Richard
2014-01-01
Psychotic symptoms have been associated with post-traumatic stress disorder and war experiences. However, the relationships between types of war experiences, the onset and course of psychotic symptoms, and post-war hardships in child soldiers have not been investigated. This study assessed whether various types of war experiences contribute to psychotic symptoms differently and whether post-war hardships mediated the relationship between war experiences and later psychotic symptoms. In an ongoing longitudinal cohort study (the War-Affected Youths Survey), 539 (61% male) former child soldiers were assessed for psychotic symptoms, post-war hardships, and previous war experiences. Regression analyses were used to assess the contribution of different types of war experiences on psychotic symptoms and the mediating role of post-war hardships in the relations between previous war experiences and psychotic symptoms. The findings yielded ‘witnessing violence’, ‘deaths and bereavement’, ‘involvement in hostilities’, and ‘sexual abuse’ as types of war experiences that significantly and independently predict psychotic symptoms. Exposure to war experiences was related to psychotic symptoms through post-war hardships (β = .18, 95% confidence interval = [0.10, 0.25]) accounting for 50% of the variance in their relationship. The direct relation between previous war experiences and psychotic symptoms attenuated but remained significant (β = .18, 95% confidence interval = [0.12, 0.26]). Types of war experiences should be considered when evaluating risks for psychotic symptoms in the course of providing emergency humanitarian services in post-conflict settings. Interventions should consider post-war hardships as key determinants of psychotic symptoms among war-affected youths. PMID:24718435
Davies, M; Lavalle-González, F; Storms, F; Gomis, R
2008-05-01
For many patients with type 2 diabetes, oral antidiabetic agents (OADs) do not provide optimal glycaemic control, necessitating insulin therapy. Fear of hypoglycaemia is a major barrier to initiating insulin therapy. The AT.LANTUS study investigated optimal methods to initiate and maintain insulin glargine (LANTUS, glargine, Sanofi-aventis, Paris, France) therapy using two treatment algorithms. This subgroup analysis investigated the initiation of once-daily glargine therapy in patients suboptimally controlled on multiple OADs. This study was a 24-week, multinational (59 countries), multicenter (611), randomized study. Algorithm 1 was a clinic-driven titration and algorithm 2 was a patient-driven titration. Titration was based on target fasting blood glucose < or =100 mg/dl (< or =5.5 mmol/l). Algorithms were compared for incidence of severe hypoglycaemia [requiring assistance and blood glucose <50 mg/dl (<2.8 mmol/l)] and baseline to end-point change in haemoglobin A(1c) (HbA(1c)). Of the 4961 patients enrolled in the study, 865 were included in this subgroup analysis: 340 received glargine plus 1 OAD and 525 received glargine plus >1 OAD. Incidence of severe hypoglycaemia was <1%. HbA(1c) decreased significantly between baseline and end-point for patients receiving glargine plus 1 OAD (-1.4%, p < 0.001; algorithm 1 -1.3% vs. algorithm 2 -1.5%; p = 0.03) and glargine plus >1 OAD (-1.7%, p < 0.001; algorithm 1 -1.5% vs. algorithm 2 -1.8%; p = 0.001). This study shows that initiation of once-daily glargine with OADs results in significant reduction of HbA(1c) with a low risk of hypoglycaemia. The greater reduction in HbA(1c) was seen in patients randomized to the patient-driven algorithm (algorithm 2) on 1 or >1 OAD.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Dong Sik; Lee, Sanggyun
2013-06-15
Purpose: Grid artifacts are caused when using the antiscatter grid in obtaining digital x-ray images. In this paper, research on grid artifact reduction techniques is conducted especially for the direct detectors, which are based on amorphous selenium. Methods: In order to analyze and reduce the grid artifacts, the authors consider a multiplicative grid image model and propose a homomorphic filtering technique. For minimal damage due to filters, which are used to suppress the grid artifacts, rotated grids with respect to the sampling direction are employed, and min-max optimization problems for searching optimal grid frequencies and angles for given sampling frequencies are established. The authors then propose algorithms for the grid artifact reduction based on the band-stop filters as well as low-pass filters. Results: The proposed algorithms are experimentally tested for digital x-ray images, which are obtained from direct detectors with the rotated grids, and are compared with other algorithms. It is shown that the proposed algorithms can successfully reduce the grid artifacts for direct detectors. Conclusions: By employing the homomorphic filtering technique, the authors can considerably suppress the strong grid artifacts with relatively narrow-bandwidth filters compared to the normal filtering case. Using rotated grids also significantly reduces the ringing artifact. Furthermore, for specific grid frequencies and angles, the authors can use simple homomorphic low-pass filters in the spatial domain, and thus alleviate the grid artifacts with very low implementation complexity.
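The homomorphic band-stop idea can be illustrated on a synthetic multiplicative grid pattern. This is a one-dimensional sketch along the sampling direction; the filter shape and parameters are assumptions for illustration, not the authors' optimized design:

```python
import numpy as np

def homomorphic_bandstop(img, grid_freq, halfwidth=2):
    """Multiplicative model: img = anatomy * grid. The log makes the grid
    additive, a band-stop along the sampling direction removes its
    frequency component, and exp() restores the image."""
    logim = np.log(img)
    F = np.fft.rfft(logim, axis=1)
    freqs = np.fft.rfftfreq(img.shape[1])   # cycles per pixel
    F[:, np.abs(freqs - grid_freq) < halfwidth / img.shape[1]] = 0.0
    return np.exp(np.fft.irfft(F, n=img.shape[1], axis=1))
```

Because the grid enters multiplicatively, subtracting its log-domain component is what lets a narrow band-stop suppress a strong artifact without heavily filtering the underlying image.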
Reduction of artifacts in computer simulation of breast Cooper's ligaments
NASA Astrophysics Data System (ADS)
Pokrajac, David D.; Kuperavage, Adam; Maidment, Andrew D. A.; Bakic, Predrag R.
2016-03-01
Anthropomorphic software breast phantoms have been introduced as a tool for quantitative validation of breast imaging systems. Efficacy of the validation results depends on the realism of phantom images. The recursive partitioning algorithm based upon octree simulation has been demonstrated as versatile and capable of efficiently generating a large number of phantoms to support virtual clinical trials of breast imaging. Previously, we have observed specific artifacts (here labeled "dents") on the boundaries of simulated Cooper's ligaments. In this work, we have demonstrated that these "dents" result from the approximate determination of the closest simulated ligament to an examined subvolume (i.e., octree node) of the phantom. We propose a modification of the algorithm that determines the closest ligament by considering a pre-specified number of neighboring ligaments selected based upon the functions that govern the shape of ligaments simulated in the subvolume. We have qualitatively and quantitatively demonstrated that the modified algorithm can lead to elimination or reduction of dent artifacts in software phantoms. In a proof-of-concept example, we simulated a 450 ml phantom with 333 compartments at 100 micrometer resolution. After the proposed modification, we corrected 148,105 dents, with an average size of 5.27 voxels (5.27 nl). We have also qualitatively analyzed the corresponding improvement in the appearance of simulated mammographic images. The proposed algorithm leads to reduction of linear and star-like artifacts in simulated phantom projections, which can be attributed to dents. Analysis of a larger number of phantoms is ongoing.
Open-path FTIR data reduction algorithm with atmospheric absorption corrections: the NONLIN code
NASA Astrophysics Data System (ADS)
Phillips, William; Russwurm, George M.
1999-02-01
This paper describes the progress made to date in developing, testing, and refining a data reduction computer code, NONLIN, that alleviates many of the difficulties experienced in the analysis of open-path FTIR data. Among the problems that currently affect FTIR open-path data quality are: the inability to obtain a true I0 (background) spectrum, spectral interferences of atmospheric gases such as water vapor and carbon dioxide, and matching the spectral resolution and shift of the reference spectra to a particular field instrument. This algorithm is based on a non-linear fitting scheme and is therefore not constrained by many of the assumptions required for the application of linear methods such as classical least squares (CLS). As a result, a more realistic mathematical model of the spectral absorption measurement process can be employed in the curve fitting process. Applications of the algorithm have proven successful in circumventing open-path data reduction problems. However, recent studies, by one of the authors, of the temperature and pressure effects on atmospheric absorption indicate there exist temperature and water partial pressure effects that should be incorporated into the NONLIN algorithm for accurate quantification of gas concentrations. This paper investigates the sources of these phenomena. As a result of this study a partial pressure correction has been employed in the NONLIN computer code. Two typical field spectra are examined to determine what effect the partial pressure correction has on gas quantification.
Dong, J; Hayakawa, Y; Kober, C
2014-01-01
When metallic prosthetic appliances and dental fillings exist in the oral cavity, the appearance of metal-induced streak artefacts is unavoidable in CT images. The aim of this study was to develop a method for artefact reduction using statistical reconstruction on multidetector row CT images. Adjacent CT images often depict similar anatomical structures. Therefore, reconstruction of images with weak artefacts was attempted using projection data of an artefact-free image in a neighbouring thin slice. Images with moderate and strong artefacts were then processed in sequence by successive iterative restoration, where the projection data were generated from the adjacent reconstructed slice. First, the basic maximum likelihood-expectation maximization algorithm was applied. Next, the ordered subset-expectation maximization algorithm was examined. Alternatively, a small region of interest setting was designated. Finally, a general-purpose graphics processing unit machine was applied in both situations. The algorithms reduced the metal-induced streak artefacts on multidetector row CT images when the sequential processing method was applied. The ordered subset-expectation maximization and small region of interest reduced the processing duration without apparent detriment. A general-purpose graphics processing unit realized high performance. A statistical reconstruction method was applied for streak artefact reduction. The alternative algorithms applied were effective. Both software and hardware tools, such as ordered subset-expectation maximization, a small region of interest and a general-purpose graphics processing unit, achieved fast artefact correction.
Error reduction in EMG signal decomposition
Kline, Joshua C.
2014-01-01
Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization. PMID:25210159
Comparative Analysis of Rank Aggregation Techniques for Metasearch Using Genetic Algorithm
ERIC Educational Resources Information Center
Kaur, Parneet; Singh, Manpreet; Singh Josan, Gurpreet
2017-01-01
Rank Aggregation techniques have found wide applications for metasearch along with other streams such as Sports, Voting System, Stock Markets, and Reduction in Spam. This paper presents the optimization of rank lists for web queries put by the user on different MetaSearch engines. A metaheuristic approach such as Genetic algorithm based rank…
NASA Astrophysics Data System (ADS)
Ćaǧatay Uçgun, Filiz; Esen, Oǧul; Gümral, Hasan
2018-01-01
We present Skinner-Rusk and Hamiltonian formalisms of second order degenerate Clément and Sarıoğlu-Tekin Lagrangians. The Dirac-Bergmann constraint algorithm is employed to obtain Hamiltonian realizations of Lagrangian theories. The Gotay-Nester-Hinds algorithm is used to investigate Skinner-Rusk formalisms of these systems.
NASA Astrophysics Data System (ADS)
Bai, Chen; Han, Dongjuan
2018-04-01
MUSIC is widely used for DOA estimation. The triangular grid is a common array arrangement, but it is more complicated than the rectangular array in the calculation of the steering vector. In this paper, a quaternion algorithm is used to reduce the dimension of the vector and simplify the calculation.
Fusing face-verification algorithms and humans.
O'Toole, Alice J; Abdi, Hervé; Jiang, Fang; Phillips, P Jonathon
2007-10-01
It has been demonstrated recently that state-of-the-art face-recognition algorithms can surpass human accuracy at matching faces over changes in illumination. The ranking of algorithms and humans by accuracy, however, does not provide information about whether algorithms and humans perform the task comparably or whether algorithms and humans can be fused to improve performance. In this paper, we fused humans and algorithms using partial least squares regression (PLSR). In the first experiment, we applied PLSR to face-pair similarity scores generated by seven algorithms participating in the Face Recognition Grand Challenge. The PLSR produced an optimal weighting of the similarity scores, which we tested for generality with a jackknife procedure. Fusing the algorithms' similarity scores using the optimal weights produced a twofold reduction of error rate over the most accurate algorithm. Next, human-subject-generated similarity scores were added to the PLSR analysis. Fusing humans and algorithms increased the performance to near-perfect classification accuracy. These results are discussed in terms of maximizing face-verification accuracy with hybrid systems consisting of multiple algorithms and humans.
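PLSR fusion of per-algorithm similarity scores can be sketched with a minimal NIPALS implementation of univariate PLS. The data in the usage test are synthetic stand-ins for algorithm scores, not the Face Recognition Grand Challenge data:

```python
import numpy as np

def pls1_fit(X, y, k):
    """Univariate PLS regression (NIPALS), k latent components.
    Returns regression weights plus the centering terms for prediction."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    W, P, q = [], [], []
    for _ in range(k):
        w = Xc.T @ yc
        w = w / np.linalg.norm(w)       # weight vector
        t = Xc @ w                      # score vector
        tt = t @ t
        p = Xc.T @ t / tt               # X loading
        W.append(w); P.append(p); q.append(yc @ t / tt)
        Xc = Xc - np.outer(t, p)        # deflate X and y
        yc = yc - q[-1] * t
    W, P = np.array(W).T, np.array(P).T
    B = W @ np.linalg.solve(P.T @ W, np.array(q))
    return B, X.mean(axis=0), y.mean()

def pls1_predict(X, B, x_mean, y_mean):
    return (X - x_mean) @ B + y_mean
```

With as many components as predictors, PLS1 reduces to ordinary least squares; fewer components give the regularized weighting of correlated score columns that makes PLSR attractive for fusion.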
Statistical analysis for validating ACO-KNN algorithm as feature selection in sentiment analysis
NASA Astrophysics Data System (ADS)
Ahmad, Siti Rohaidah; Yusop, Nurhafizah Moziyana Mohd; Bakar, Azuraliza Abu; Yaakub, Mohd Ridzwan
2017-10-01
This research paper aims to propose a hybrid of ant colony optimization (ACO) and k-nearest neighbor (KNN) algorithms as a feature selection method for choosing relevant features from customer review datasets. Information gain (IG), genetic algorithm (GA), and rough set attribute reduction (RSAR) were used as baseline algorithms in a performance comparison with the proposed algorithm. This paper will also discuss the significance test, which was used to evaluate the performance differences between the ACO-KNN, IG-GA, and IG-RSAR algorithms. This study evaluated the performance of the ACO-KNN algorithm using precision, recall, and F-score, which were validated using parametric statistical significance tests. The evaluation process has statistically proven that the ACO-KNN algorithm is significantly improved compared to the baseline algorithms. In addition, the experimental results have proven that ACO-KNN can be used as a feature selection technique in sentiment analysis to obtain a quality, optimal feature subset that can represent the actual data in customer review data.
Matched filter based detection of floating mines in IR spacetime
NASA Astrophysics Data System (ADS)
Borghgraef, Alexander; Lapierre, Fabian; Philips, Wilfried; Acheroy, Marc
2009-09-01
Ship-based automatic detection of small floating objects on an agitated sea surface remains a hard problem. Our main concern is the detection of floating mines, which proved a real threat to shipping in confined waterways during the first Gulf War, but applications include salvaging, search-and-rescue and perimeter or harbour defense. IR video was chosen for its day-and-night imaging capability, and its availability on military vessels. Detection is difficult because a rough sea is seen as a dynamic background of moving objects with size, shape and temperature similar to those of a floating mine. We do find a distinguishing characteristic in the target's periodic motion, which differs from that of the propagating surface waves composing the background. The classical detection and tracking approaches give poor results when applied to this problem. While background detection algorithms assume a quasi-static background, the sea surface is actually very dynamic, causing this category of algorithms to fail. Kalman or particle filter algorithms, on the other hand, which stress temporal coherence, suffer from tracking loss due to occlusions and the high noise level of the image. We propose an innovative approach that uses the periodicity of the object's movement and thus its temporal coherence. The principle is to consider the video data as a spacetime volume similar to a hyperspectral data cube, by replacing the spectral axis with a temporal axis. We can then apply algorithms developed for hyperspectral detection problems to the detection of small floating objects. We treat the detection problem using multilinear algebra, designing a number of finite impulse response (FIR) filters maximizing the target response. The algorithm was applied to test footage of practice mines in the infrared.
Zero-block mode decision algorithm for H.264/AVC.
Lee, Yu-Ming; Lin, Yinyi
2009-03-01
In a previous paper, we proposed a zero-block intermode decision algorithm for H.264 video coding based upon the number of zero-blocks of 4 x 4 DCT coefficients between the current macroblock and the co-located macroblock. The proposed algorithm achieves a significant reduction in computation, but the gain is limited for high bit-rate coding. To improve computation efficiency, in this paper we suggest an enhanced zero-block decision algorithm, which uses an early zero-block detection method to compute the number of zero-blocks instead of direct DCT and quantization (DCT/Q) calculation, and incorporates two adequate decision methods into semi-stationary and nonstationary regions of a video sequence. In addition, the zero-block decision algorithm is also applied to intramode prediction in the P frame. The enhanced zero-block decision algorithm yields an average 27% reduction in total encoding time compared to the original zero-block decision algorithm.
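The notion of counting zero-blocks of quantized 4 x 4 DCT coefficients can be sketched as follows. The floating-point DCT, dead-zone quantizer, and threshold below are simplified assumptions for illustration, not the H.264 integer transform or the paper's exact decision rule:

```python
import numpy as np

# Orthonormal 4x4 DCT-II basis matrix
A = np.array([[(np.sqrt(2) if k else 1) * 0.5 * np.cos(np.pi * (2 * n + 1) * k / 8)
               for n in range(4)] for k in range(4)])

def count_zero_blocks(residual, qstep):
    """Number of 4x4 sub-blocks of a 16x16 residual macroblock whose
    quantized DCT coefficients are all zero."""
    zeros = 0
    for i in range(0, 16, 4):
        for j in range(0, 16, 4):
            coeffs = A @ residual[i:i + 4, j:j + 4] @ A.T   # 2-D 4x4 DCT
            if not np.any(np.round(np.abs(coeffs) / qstep)):
                zeros += 1
    return zeros

def treat_as_stationary(zero_count, threshold=12):
    """Toy decision rule (the threshold is a hypothetical parameter):
    many zero-blocks suggest a semi-stationary region, so the finer
    inter-mode search can be skipped."""
    return zero_count >= threshold
```

The early-detection method in the abstract avoids even this DCT/Q step by predicting which sub-blocks will quantize to zero directly from the residual energy.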
TPSLVM: a dimensionality reduction algorithm based on thin plate splines.
Jiang, Xinwei; Gao, Junbin; Wang, Tianjiang; Shi, Daming
2014-10-01
Dimensionality reduction (DR) has been considered one of the most significant tools for data analysis. One class of DR algorithms is based on latent variable models (LVMs), which can handle the preimage problem easily. In this paper we propose a new LVM-based DR model, named the thin plate spline latent variable model (TPSLVM). Compared to the well-known Gaussian process latent variable model (GPLVM), the proposed TPSLVM is more powerful, especially when the dimensionality of the latent space is low. TPSLVM is also robust to shift and rotation. This paper investigates two extensions of TPSLVM, i.e., the back-constrained TPSLVM (BC-TPSLVM) and TPSLVM with dynamics (TPSLVM-DM), as well as their combination BC-TPSLVM-DM. Experimental results show that TPSLVM and its extensions provide better data visualization and more efficient dimensionality reduction compared to PCA, GPLVM, ISOMAP, etc.
New algorithms for optimal reduction of technical risks
NASA Astrophysics Data System (ADS)
Todinov, M. T.
2013-06-01
The article features exact algorithms for reduction of technical risk by (1) optimal allocation of resources in the case where the total potential loss from several sources of risk is a sum of the potential losses from the individual sources; (2) optimal allocation of resources to achieve a maximum reduction of the risk of system failure; and (3) making an optimal choice among competing risky prospects. The article demonstrates that the number of activities in a risky prospect is a key consideration in selecting the risky prospect. As a result, the maximum expected profit criterion, widely used for making risk decisions, is fundamentally flawed, because it does not consider the impact of the number of risk-reward activities in the risky prospects. A popular view, that if a single risk-reward bet with positive expected profit is unacceptable then a sequence of such identical risk-reward bets is also unacceptable, has been analysed and proved incorrect.
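The role of the number of activities can be seen with simple arithmetic (all numbers here are invented for illustration): the expected profit of n independent, identical risk-reward bets grows like n while the standard deviation grows like sqrt(n), so the mean-to-risk ratio improves as the sequence lengthens.

```python
# Illustrative arithmetic: one bet versus a sequence of n identical bets.
import math

def bet_stats(p_win, gain, loss, n):
    """Expected profit and standard deviation of n independent bets."""
    mean1 = p_win * gain - (1 - p_win) * loss
    var1 = p_win * gain**2 + (1 - p_win) * loss**2 - mean1**2
    return n * mean1, math.sqrt(n * var1)

m1, s1 = bet_stats(0.6, 100, 100, 1)     # one bet: mean 20, sd ~98
m25, s25 = bet_stats(0.6, 100, 100, 25)  # 25 bets: mean 500, sd ~490
print(m1 / s1 < m25 / s25)  # mean-to-risk ratio improves with n → True
```

This is the arithmetic behind the claim that rejecting one positive-expectation bet does not imply rejecting a sequence of them.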
The staircase method: integrals for periodic reductions of integrable lattice equations
NASA Astrophysics Data System (ADS)
van der Kamp, Peter H.; Quispel, G. R. W.
2010-11-01
We show, in full generality, that the staircase method (Papageorgiou et al 1990 Phys. Lett. A 147 106-14, Quispel et al 1991 Physica A 173 243-66) provides integrals for mappings, and correspondences, obtained as traveling wave reductions of (systems of) integrable partial difference equations. We apply the staircase method to a variety of equations, including the Korteweg-de Vries equation, the five-point Bruschi-Calogero-Droghei equation, the quotient-difference (QD) algorithm and the Boussinesq system. We show that, in all these cases, if the staircase method provides r integrals for an n-dimensional mapping, with 2r < n, then one can introduce q <= 2r variables, which reduce the dimension of the mapping from n to q. These dimension-reducing variables are obtained as joint invariants of k-symmetries of the mappings. Our results support the idea that often the staircase method provides sufficiently many integrals for the periodic reductions of integrable lattice equations to be completely integrable. We also study reductions on quad-graphs other than the regular Z^2 lattice, and we prove linear growth of the multi-valuedness of iterates of high-dimensional correspondences obtained as reductions of the QD algorithm.
NASA Astrophysics Data System (ADS)
Dong, S.
2018-05-01
We present a reduction-consistent and thermodynamically consistent formulation and an associated numerical algorithm for simulating the dynamics of an isothermal mixture consisting of N (N ⩾ 2) immiscible incompressible fluids with different physical properties (densities, viscosities, and pair-wise surface tensions). By reduction consistency we refer to the property that if only a set of M (1 ⩽ M ⩽ N - 1) fluids are present in the system then the N-phase governing equations and boundary conditions will exactly reduce to those for the corresponding M-phase system. By thermodynamic consistency we refer to the property that the formulation honors the thermodynamic principles. Our N-phase formulation is developed based on a more general method that allows for the systematic construction of reduction-consistent formulations, and the method suggests the existence of many possible forms of reduction-consistent and thermodynamically consistent N-phase formulations. Extensive numerical experiments have been presented for flow problems involving multiple fluid components and large density ratios and large viscosity ratios, and the simulation results are compared with the physical theories or the available physical solutions. The comparisons demonstrate that our method produces physically accurate results for this class of problems.
Integrand reduction for two-loop scattering amplitudes through multivariate polynomial division
NASA Astrophysics Data System (ADS)
Mastrolia, Pierpaolo; Mirabella, Edoardo; Ossola, Giovanni; Peraro, Tiziano
2013-04-01
We describe the application of a novel approach for the reduction of scattering amplitudes, based on multivariate polynomial division, which we have recently presented. This technique yields the complete integrand decomposition for arbitrary amplitudes, regardless of the number of loops. It allows for the determination of the residue at any multiparticle cut, whose knowledge is a mandatory prerequisite for applying the integrand-reduction procedure. By using the division modulo Gröbner basis, we can derive a simple integrand recurrence relation that generates the multiparticle pole decomposition for integrands of arbitrary multiloop amplitudes. We apply the new reduction algorithm to the two-loop planar and nonplanar diagrams contributing to the five-point scattering amplitudes in N=4 super Yang-Mills and N=8 supergravity in four dimensions, whose numerator functions contain up to rank-two terms in the integration momenta. We determine all polynomial residues parametrizing the cuts of the corresponding topologies and subtopologies. We obtain the integral basis for the decomposition of each diagram from the polynomial form of the residues. Our approach is well suited for a seminumerical implementation, and its general mathematical properties provide an effective algorithm for the generalization of the integrand-reduction method to all orders in perturbation theory.
Computation of nonparametric convex hazard estimators via profile methods.
Jankowski, Hanna K; Wellner, Jon A
2009-05-01
This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: first, the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females.
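The outer step can be sketched in isolation (the profile function below is a toy stand-in for the support-reduction inner maximisation, and a ternary-search variant stands in for the paper's bisection): quasi-concavity guarantees that shrinking the interval toward the better of two interior points converges to the maximiser.

```python
# Sketch: interval search for the maximiser of a quasi-concave profile.
def bisect_max(profile, lo, hi, tol=1e-8):
    """Locate the maximiser of a quasi-concave function by comparing
    values at two interior points and discarding the worse side."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if profile(m1) < profile(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2

# Toy quasi-concave profile with its maximum at antimode = 2.5.
antimode = bisect_max(lambda a: -(a - 2.5) ** 2, 0.0, 10.0)
print(round(antimode, 4))  # → 2.5
```

In the actual algorithm each evaluation of `profile` is itself a full inner maximisation over hazards with that antimode.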
A Log-Scaling Fault Tolerant Agreement Algorithm for a Fault Tolerant MPI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hursey, Joshua J; Naughton, III, Thomas J; Vallee, Geoffroy R
The lack of fault tolerance is becoming a limiting factor for application scalability in HPC systems. MPI does not provide standardized fault tolerance interfaces and semantics. The MPI Forum's Fault Tolerance Working Group is proposing a collective fault tolerant agreement algorithm for the next MPI standard. Such algorithms play a central role in many fault tolerant applications. This paper combines a log-scaling two-phase commit agreement algorithm with a reduction operation to provide the necessary functionality for the new collective without any additional messages. Error handling mechanisms are described that preserve the fault tolerance properties while maintaining overall scalability.
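A toy, single-process illustration (not the paper's two-phase commit protocol): the agreement value can ride on a log-depth tree reduction by AND-ing each rank's local success flag, so no messages are needed beyond the reduction itself.

```python
# Sketch: log-depth AND-reduction of per-rank success flags.
def tree_and_reduce(flags):
    """Pairwise (log-depth) AND-reduction of per-rank success flags."""
    level = list(flags)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(level[i] and level[i + 1])  # combine two children
        if len(level) % 2:
            nxt.append(level[-1])                  # odd rank passes through
        level = nxt
    return level[0]

print(tree_and_reduce([True] * 8))                  # all ranks agree → True
print(tree_and_reduce([True, False] + [True] * 6))  # one failure → False
```

The real algorithm layers two-phase commit and error handling on top of this pattern so that the result stays correct when participants fail mid-reduction.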
Glint-induced false alarm reduction in signature adaptive target detection
NASA Astrophysics Data System (ADS)
Crosby, Frank J.
2002-07-01
The signature adaptive target detection algorithm developed by Crosby and Riley uses target geometry to discern anomalies in local backgrounds. Detection is not restricted to specific target signatures. The robustness of the algorithm is limited by an increased false alarm potential. The base algorithm is extended to eliminate one common source of false alarms in a littoral environment: glint reflected on the surface of water. The spectral and spatial transience of glint prevents straightforward characterization and complicates exclusion. However, the statistical basis of the detection algorithm and its inherent computations allow for glint discernment and the removal of its influence.
Fu, Chi-Yung; Petrich, Loren I.
1997-01-01
An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG; the predefined compression algorithm may instead be based on a wavelet technique. The compressed, reduced image is then transmitted over the limited-bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.
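The decimate-then-interpolate path can be sketched as follows (the compression step is omitted, and the factor-of-two and the toy image are illustrative): the image is subsampled before transmission and upsampled back to its original array size afterwards.

```python
# Sketch: 2x decimation in both dimensions, then nearest-neighbour
# interpolation back to the original size.
def decimate2(img):
    """Keep every second row and column."""
    return [row[::2] for row in img[::2]]

def interpolate2(img):
    """Nearest-neighbour upsampling back to twice the size."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                   # duplicate each row
    return out

img = [[x + 10 * y for x in range(4)] for y in range(4)]
small = decimate2(img)            # 2x2: [[0, 2], [20, 22]]
restored = interpolate2(small)    # back to 4x4
print(len(restored), len(restored[0]))  # → 4 4
```

In the described system, JPEG (or a wavelet codec) would compress `small` before transmission, and an edge-sharpening pass would then restore perceived detail in `restored`.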
The "War Poets": Evolution of a Literary Conscience in World War I.
ERIC Educational Resources Information Center
Galambos, Ellen
1983-01-01
Pre-World War I poetry often used picturesque images which blinded people to the actual horrors of war. The war poets, who experienced the destruction of World War I, led the way in expressing new images of the devastation and death of war, rather than focusing on honor and glory. (IS)
Kangaslampi, Samuli; Punamäki, Raija-Leena; Qouta, Samir; Diab, Marwan; Peltonen, Kirsi
2016-12-01
Cognitive theories point to reduction in dysfunctional posttraumatic cognitions (PTCs) as one mechanism involved in recovery from posttraumatic stress symptoms (PTSS), yet research findings have shown individual differences in the recovery process. We tested the cognitive mediation hypothesis above in a previously published psychosocial group intervention among war-affected children. We also examined heterogeneity in children's PTCs during the intervention. We used a cluster randomized trial of Smith et al.'s (2002) teaching recovery techniques (TRT) intervention among 482 Palestinians 10-13 years of age (n = 242 for intervention group, n = 240 for control group). Children reported PTSS, PTCs, and depressive symptoms at baseline, midpoint, postintervention, and at 6-month follow-up. Path analysis results showed that TRT was not effective in reducing dysfunctional PTCs, and the reductions did not mediate intervention effects on PTSS. Using latent class growth analysis, we chose a model with three differing PTC trajectories in the intervention group: high, decreasing; moderate, downward trending; and severe, stable. Higher PTSS and depressive symptoms at baseline were associated with membership in the severe, stable trajectory. The intervention did not produce the kind of beneficial cognitive change needed in the cognitive mediation conceptualization. Nevertheless, cognitive changes differed substantially across children during the intervention, and were associated with their preintervention mental health status. These findings call for more detailed examination of the process of cognitive mediation. Copyright © 2016 International Society for Traumatic Stress Studies.
Spectral CT metal artifact reduction with an optimization-based reconstruction algorithm
NASA Astrophysics Data System (ADS)
Gilat Schmidt, Taly; Barber, Rina F.; Sidky, Emil Y.
2017-03-01
Metal objects cause artifacts in computed tomography (CT) images. This work investigated the feasibility of a spectral CT method to reduce metal artifacts. Spectral CT acquisition combined with optimization-based reconstruction is proposed to reduce artifacts by modeling the physical effects that cause metal artifacts and by providing the flexibility to selectively remove corrupted spectral measurements in the spectral-sinogram space. The proposed Constrained 'One-Step' Spectral CT Image Reconstruction (cOSSCIR) algorithm directly estimates the basis material maps while enforcing convex constraints. The incorporation of constraints on the reconstructed basis material maps is expected to mitigate undersampling effects that occur when corrupted data is excluded from reconstruction. The feasibility of the cOSSCIR algorithm to reduce metal artifacts was investigated through simulations of a pelvis phantom. The cOSSCIR algorithm was investigated with and without the use of a third basis material representing metal. The effects of excluding data corrupted by metal were also investigated. The results demonstrated that the proposed cOSSCIR algorithm reduced metal artifacts and improved CT number accuracy. For example, CT number error in a bright shading artifact region was reduced from 403 HU in the reference filtered backprojection reconstruction to 33 HU using the proposed algorithm in simulation. In the dark shading regions, the error was reduced from 1141 HU to 25 HU. Of the investigated approaches, decomposing the data into three basis material maps and excluding the corrupted data demonstrated the greatest reduction in metal artifacts.
The lifelong struggle of Finnish World War II veterans.
Nivala, Sirkka; Sarvimäki, Anneli
2015-01-01
In many countries veterans from World War II are growing old. Research has shown that war experiences continue to impact those who have been involved in war for a long time. The present study targets old injured war veterans from World War II in Finland. The aim of this study was to produce knowledge of the impact of war experiences and injuries on the lifespan of Finnish war veterans. The method used was grounded theory. Data were collected by interviewing 20 aged war veterans in their homes. The analysis resulted in four categories, with also subcategories: (1) lost childhood and youth; (2) war traumas impacting life; (3) starting life from scratch; and (4) finding one's own place. A substantive theory of war veterans' lifelong struggle for freedom throughout the lifespan was outlined. The war overshadowed the whole lifespan of the veterans, but in old age they finally felt free. Since war experiences vary depending on historical context, a formal theory would require additional research.
Long-term outcomes of war-related death of family members in Kosovar civilian war survivors.
Morina, Nexhmedin; Reschke, Konrad; Hofmann, Stefan G
2011-04-01
Exposure to war-related experiences can comprise a broad variety of experiences, and the very nature of certain war-related events has generally been neglected. To examine the long-term outcomes of war-related death of family members, the authors investigated the prevalence rates of major depressive episode (MDE), anxiety disorders, and quality of life among civilian war survivors with or without war-related death of first-degree family members 9 years after the war in Kosovo. Compared to participants without war-related death of family members, those who had experienced such loss had significantly higher prevalence rates of MDE, posttraumatic stress disorder, and generalized anxiety disorder, and reported a lower quality of life 9 years after the war. These results indicate that bereaved civilian survivors of war experience significant mental health problems many years after the war.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This report contains papers on the following topics: NREN Security Issues: Policies and Technologies; Layer Wars: Protect the Internet with Network Layer Security; Electronic Commission Management; Workflow 2000 - Electronic Document Authorization in Practice; Security Issues of a UNIX PEM Implementation; Implementing Privacy Enhanced Mail on VMS; Distributed Public Key Certificate Management; Protecting the Integrity of Privacy-enhanced Electronic Mail; Practical Authorization in Large Heterogeneous Distributed Systems; Security Issues in the Truffles File System; Issues surrounding the use of Cryptographic Algorithms and Smart Card Applications; Smart Card Augmentation of Kerberos; and An Overview of the Advanced Smart Card Access Control System. Selected papers were processed separately for inclusion in the Energy Science and Technology Database.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, J; Followill, D; Howell, R
2015-06-15
Purpose: To investigate two strategies for reducing dose calculation errors near metal implants: use of CT metal artifact reduction methods and implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) method. Methods: Radiochromic film was used to measure the dose upstream and downstream of titanium and Cerrobend implants. To assess the dosimetric impact of metal artifact reduction methods, dose calculations were performed using baseline, uncorrected images and metal artifact reduction methods: Philips O-MAR, GE's monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI imaging with metal artifact reduction software applied (MARs). To assess the impact of metal kernels, titanium and silver kernels were implemented into a commercial collapsed cone C/S algorithm. Results: The CT artifact reduction methods were more successful for titanium than Cerrobend. Interestingly, for beams traversing the metal implant, we found that errors in the dimensions of the metal in the CT images were more important for dose calculation accuracy than reduction of imaging artifacts. The MARs algorithm caused a distortion in the shape of the titanium implant that substantially worsened the calculation accuracy. In comparison to water kernel dose calculations, metal kernels resulted in better modeling of the increased backscatter dose at the upstream interface but decreased accuracy directly downstream of the metal. We also found that the success of metal kernels was dependent on dose grid size, with smaller calculation voxels giving better accuracy. Conclusion: Our study yielded mixed results, with neither the metal artifact reduction methods nor the metal kernels being globally effective at improving dose calculation accuracy. However, some successes were observed. The MARs algorithm decreased errors downstream of Cerrobend by a factor of two, and metal kernels resulted in more accurate backscatter dose upstream of metals.
Thus, these two strategies do have the potential to improve accuracy for patients with metal implants in certain scenarios. This work was supported by Public Health Service grants CA 180803 and CA 10953 awarded by the National Cancer Institute, United States Department of Health and Human Services, and in part by Mobius Medical Systems.
War Finance: Economic and Historic Lessons
ERIC Educational Resources Information Center
Boldt, David J.; Kassis, Mary Mathewes
2004-01-01
In this article, the authors provide a historical review of how the U.S. government has funded its participation in major wars during the past 150 years. They focus attention on five conflicts--the Civil War, World War I, World War II, the Korean War and the Vietnam War. Those conflicts were funded in different ways, with each funding method…
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony
1990-01-01
The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
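The marked-graph execution model underlying ATAMM can be illustrated with a toy simulation (the graph and markings below are invented, not taken from ATAMM): a transition fires when every input place holds a token, consuming one token from each input and producing one on each output, which is what makes periodic execution of decision-free algorithms predictable.

```python
# Sketch: one synchronous firing step of a marked graph.
def fire_enabled(marking, transitions):
    """marking: dict place -> tokens; transitions: list of (inputs, outputs).
    Fires every transition that is enabled at the start of the step."""
    enabled = [t for t in transitions if all(marking[p] > 0 for p in t[0])]
    for ins, outs in enabled:
        for p in ins:
            marking[p] -= 1   # consume one token from each input place
        for p in outs:
            marking[p] += 1   # produce one token on each output place
    return marking

# Two-node pipeline: a -> b -> a (the token circulates; the total is conserved)
marking = {"a": 1, "b": 0}
transitions = [(["a"], ["b"]), (["b"], ["a"])]
for _ in range(3):
    marking = fire_enabled(marking, transitions)
print(sum(marking.values()))  # token count is invariant → 1
```

Token conservation on each cycle is the marked-graph property that lets performance bounds be computed ahead of time for a given resource count.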
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.
1990-01-01
Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
Gui, Jiang; Moore, Jason H.; Williams, Scott M.; Andrews, Peter; Hillege, Hans L.; van der Harst, Pim; Navis, Gerjan; Van Gilst, Wiek H.; Asselbergs, Folkert W.; Gilbert-Diamond, Diane
2013-01-01
We present an extension of the two-class multifactor dimensionality reduction (MDR) algorithm that enables detection and characterization of epistatic SNP-SNP interactions in the context of a quantitative trait. The proposed Quantitative MDR (QMDR) method handles continuous data by modifying MDR’s constructive induction algorithm to use a T-test. QMDR replaces the balanced accuracy metric with a T-test statistic as the score to determine the best interaction model. We used a simulation to identify the empirical distribution of QMDR’s testing score. We then applied QMDR to genetic data from the ongoing prospective Prevention of Renal and Vascular End-Stage Disease (PREVEND) study. PMID:23805232
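The QMDR scoring step can be sketched in isolation (the trait values are invented, and MDR's constructive-induction step is not shown): a Welch T-statistic compares the quantitative trait between the "high-risk" and "low-risk" genotype groupings, replacing the balanced accuracy used in two-class MDR.

```python
# Sketch: T-statistic as the interaction-model score in QMDR.
import math

def t_statistic(high, low):
    """Welch's t comparing trait values in two genotype groupings."""
    def mean_var(xs):
        m = sum(xs) / len(xs)
        v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance
        return m, v
    m1, v1 = mean_var(high)
    m2, v2 = mean_var(low)
    return (m1 - m2) / math.sqrt(v1 / len(high) + v2 / len(low))

high_risk = [5.1, 5.4, 4.9, 5.6]   # trait values, high-risk genotype cells
low_risk = [3.0, 3.2, 2.8, 3.1]    # trait values, low-risk genotype cells
print(t_statistic(high_risk, low_risk) > 2)  # clearly separated groups → True
```

In QMDR this score is computed for each candidate SNP-SNP interaction model, and the model with the largest statistic is selected.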
Identification of Flights for Cost-Efficient Climate Impact Reduction
NASA Technical Reports Server (NTRS)
Chen, Neil Y.; Kirschen, Philippe G.; Sridhar, Banavar; Ng, Hok K.
2014-01-01
The aircraft-induced climate impact has drawn attention in recent years. Aviation operations affect the environment mainly through the release of carbon dioxide and nitrogen oxides, and through the formation of contrails. Recent research has shown that altering trajectories can reduce aviation environmental cost by reducing Absolute Global Temperature Change Potential, a climate assessment metric that adapts a linear system for modeling the global temperature response to aviation emissions and contrails. However, these methods increase fuel consumption, which leads to higher fuel costs imposed on airlines. The goal of this work is to identify flights for which the environmental cost of climate impact reduction outweighs the increase in operational cost on an individual aircraft basis. Environmental cost is quantified using the monetary social cost of carbon. The increase in operational cost considers the cost of additional fuel usage only. For this paper, an algorithm has been developed that modifies the trajectories of flights to evaluate the effect of environmental cost and operational cost of flights in the United States National Airspace System. The algorithm identifies flights for which the environmental cost of climate impact can be reduced and modifies their trajectories to achieve maximum environmental net benefit, which is the difference between the reduction in environmental cost and the additional operational cost. The results show that, on a selected day, 16% of the flights among eight major airlines, or 2,043 flights, can achieve environmental net benefit using weather forecast data, resulting in a net benefit of around $500,000. The results also suggest that long-haul flights are better candidates for cost-efficient climate impact reduction than short-haul flights. The algorithm will help to identify the characteristics of flights that are capable of applying a cost-efficient climate impact reduction strategy.
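The flight-selection criterion reduces to simple arithmetic (all numbers below are invented for illustration): a flight is a rerouting candidate when the monetised climate benefit exceeds the extra fuel cost of the modified trajectory.

```python
# Sketch: environmental net benefit = climate cost reduction - added fuel cost.
def net_benefit(climate_cost_saved, extra_fuel_kg, fuel_price_per_kg):
    """Positive result means the reroute is a cost-efficient candidate."""
    return climate_cost_saved - extra_fuel_kg * fuel_price_per_kg

print(net_benefit(900.0, 500.0, 1.0))  # → 400.0  (reroute pays off)
print(net_benefit(300.0, 500.0, 1.0))  # → -200.0 (keep original route)
```

Applied per flight, this is the quantity the algorithm maximises when deciding which trajectories to modify.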
Filtered refocusing: a volumetric reconstruction algorithm for plenoptic-PIV
NASA Astrophysics Data System (ADS)
Fahringer, Timothy W.; Thurow, Brian S.
2016-09-01
A new algorithm for reconstruction of 3D particle fields from plenoptic image data is presented. The algorithm is based on the technique of computational refocusing with the addition of a post-reconstruction filter to remove the out-of-focus particles. This new algorithm is tested in terms of reconstruction quality on synthetic particle fields as well as a synthetically generated 3D Gaussian ring vortex. Preliminary results indicate that the new algorithm performs as well as the MART algorithm (used in previous work) in terms of reconstructed particle position accuracy, but produces more elongated particles. The major advantage of the new algorithm is the dramatic reduction in the computational cost required to reconstruct a volume. It is shown that the new algorithm takes 1/9th the time to reconstruct the same volume as MART while using minimal resources. Experimental results are presented in the form of the wake behind a cylinder at a Reynolds number of 185.
The contour-buildup algorithm to calculate the analytical molecular surface.
Totrov, M; Abagyan, R
1996-01-01
A new algorithm is presented to calculate the analytical molecular surface defined as a smooth envelope traced out by the surface of a probe sphere rolled over the molecule. The core of the algorithm is the sequential buildup of multi-arc contours on the van der Waals spheres. This algorithm yields a substantial reduction in both the memory and time requirements of surface calculations. Further, the contour-buildup principle is intrinsically "local", which makes calculations of partial molecular surfaces even more efficient. Additionally, the algorithm is equally applicable not only to convex patches but also to concave triangular patches, which may have complex multiple intersections. The algorithm permits the rigorous calculation of the full analytical molecular surface for a 100-residue protein in about 2 seconds on an SGI Indigo with an R4400 processor at 150 MHz, with performance scaling almost linearly with protein size. The contour-buildup algorithm is faster than the original Connolly algorithm by an order of magnitude.
NASA Astrophysics Data System (ADS)
Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frédéric
2018-02-01
Respiratory motion reduces both the qualitative and quantitative accuracy of PET images in oncology. This impact is more significant for quantitative applications based on kinetic modeling, where dynamic acquisitions are associated with limited statistics due to the necessity of enhanced temporal resolution. The aim of this study is to address these drawbacks, by combining a respiratory motion correction approach with temporal regularization in a unique reconstruction algorithm for dynamic PET imaging. Elastic transformation parameters for the motion correction are estimated from the non-attenuation-corrected PET images. The derived displacement matrices are subsequently used in a list-mode based OSEM reconstruction algorithm integrating a temporal regularization between the 3D dynamic PET frames, based on temporal basis functions. These functions are simultaneously estimated at each iteration, along with their relative coefficients for each image voxel. Quantitative evaluation has been performed using dynamic FDG PET/CT acquisitions of lung cancer patients acquired on a GE DRX system. The performance of the proposed method is compared with that of a standard multi-frame OSEM reconstruction algorithm. The proposed method achieved substantial improvements in terms of noise reduction while accounting for loss of contrast due to respiratory motion. Results on simulated data showed that the proposed 4D algorithms led to bias reduction values up to 40% in both tumor and blood regions for similar standard deviation levels, in comparison with a standard 3D reconstruction. Patlak parameter estimations on reconstructed images with the proposed reconstruction methods resulted in 30% and 40% bias reduction in the tumor and lung region respectively for the Patlak slope, and a 30% bias reduction for the intercept in the tumor region (a similar Patlak intercept was achieved in the lung area). 
Incorporation of the respiratory motion correction using an elastic model along with a temporal regularization in the reconstruction process of the PET dynamic series led to substantial quantitative improvements and motion artifact reduction. Future work will include the integration of a linear FDG kinetic model, in order to directly reconstruct parametric images.
Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frédéric
2018-02-13
Respiratory motion reduces both the qualitative and quantitative accuracy of PET images in oncology. This impact is more significant for quantitative applications based on kinetic modeling, where dynamic acquisitions are associated with limited statistics due to the necessity of enhanced temporal resolution. The aim of this study is to address these drawbacks, by combining a respiratory motion correction approach with temporal regularization in a unique reconstruction algorithm for dynamic PET imaging. Elastic transformation parameters for the motion correction are estimated from the non-attenuation-corrected PET images. The derived displacement matrices are subsequently used in a list-mode based OSEM reconstruction algorithm integrating a temporal regularization between the 3D dynamic PET frames, based on temporal basis functions. These functions are simultaneously estimated at each iteration, along with their relative coefficients for each image voxel. Quantitative evaluation has been performed using dynamic FDG PET/CT acquisitions of lung cancer patients acquired on a GE DRX system. The performance of the proposed method is compared with that of a standard multi-frame OSEM reconstruction algorithm. The proposed method achieved substantial improvements in terms of noise reduction while accounting for loss of contrast due to respiratory motion. Results on simulated data showed that the proposed 4D algorithms led to bias reduction values up to 40% in both tumor and blood regions for similar standard deviation levels, in comparison with a standard 3D reconstruction. Patlak parameter estimations on reconstructed images with the proposed reconstruction methods resulted in 30% and 40% bias reduction in the tumor and lung region respectively for the Patlak slope, and a 30% bias reduction for the intercept in the tumor region (a similar Patlak intercept was achieved in the lung area). 
Incorporation of the respiratory motion correction using an elastic model along with a temporal regularization in the reconstruction process of the PET dynamic series led to substantial quantitative improvements and motion artifact reduction. Future work will include the integration of a linear FDG kinetic model, in order to directly reconstruct parametric images.
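The Patlak slope and intercept figures quoted above come from standard graphical analysis. As a hedged illustration (a generic Patlak fit with an assumed helper name `patlak_fit`, not the authors' reconstruction code), the influx constant Ki and intercept V can be recovered by a linear fit of normalized tissue activity against the normalized integrated plasma input:

```python
import numpy as np

def patlak_fit(t, cp, ct, t_star=0.0):
    """Patlak graphical analysis: fit Ct/Cp = Ki * (int_0^t Cp dt)/Cp + V.

    t: frame mid-times; cp: plasma input; ct: tissue activity curve.
    Returns (slope Ki, intercept V) using frames with t >= t_star.
    """
    # trapezoidal cumulative integral of the plasma input
    cum = np.concatenate(([0.0], np.cumsum(0.5*(cp[1:] + cp[:-1])*np.diff(t))))
    mask = (t >= t_star) & (cp > 0)
    x = cum[mask]/cp[mask]
    y = ct[mask]/cp[mask]
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

# toy check: synthesize an irreversible-uptake TAC with known Ki and V
t = np.linspace(0.0, 60.0, 121)
cp = np.exp(-t/20.0) + 0.1
cum = np.concatenate(([0.0], np.cumsum(0.5*(cp[1:] + cp[:-1])*np.diff(t))))
ct = 0.05*cum + 0.3*cp
ki, v = patlak_fit(t, cp, ct)
```

On this noise-free synthetic curve the fit recovers the generating Ki = 0.05 and V = 0.3 exactly; on reconstructed frames the estimates inherit whatever bias and noise the reconstruction leaves behind, which is what the comparison above quantifies.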
36 CFR 1229.12 - What are the requirements during a state of war or threatened war?
Code of Federal Regulations, 2010 CFR
2010-07-01
... during a state of war or threatened war? 1229.12 Section 1229.12 Parks, Forests, and Public Property... § 1229.12 What are the requirements during a state of war or threatened war? (a) Destruction of records... war between the United States and any other nation or when hostile action appears imminent, the head...
77 FR 43117 - Meeting of the Cold War Advisory Committee for the Cold War Theme Study
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-23
... the Cold War Advisory Committee for the Cold War Theme Study AGENCY: National Park Service, Interior... Committee Act, 5 U.S.C. Appendix, that the Cold War Advisory Committee for the Cold War Theme Study will... National Park Service (NPS) concerning the Cold War Theme Study. DATES: The teleconference meeting will be...
Regier, Michael D; Moodie, Erica E M
2016-05-01
We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when the standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there are missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.
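A minimal, hedged illustration of the "small, self-contained EM" building block (a textbook example, not the authors' estimator): EM for the mean of a univariate normal when some entries are missing completely at random. The E-step imputes each missing value with the current mean; the M-step re-averages.

```python
def em_mean(data, mu0=0.0, n_iter=100):
    """EM estimate of a normal mean when some entries are None (missing)."""
    obs = [x for x in data if x is not None]
    n_mis = sum(1 for x in data if x is None)
    mu = mu0
    for _ in range(n_iter):
        # E-step: expected value of each missing entry is the current mean
        expected_total = sum(obs) + n_mis*mu
        # M-step: maximize the expected complete-data log-likelihood
        mu = expected_total/len(data)
    return mu

mu_hat = em_mean([1.0, 2.0, None, 3.0, None])
```

Under MCAR this converges to the complete-case mean (2.0 here), as it should; the value of the decomposition shows up in richer models where each sub-EM stays this simple.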
Liang, Lihua; Yuan, Jia; Zhang, Songtao; Zhao, Peng
2018-01-01
This work presents an optimal linear quadratic regulator (LQR) based on a genetic algorithm (GA) to solve the two-degrees-of-freedom (2-DoF) motion control problem in head seas for wave-piercing catamarans (WPC). The proposed GA-based LQR control strategy selects the optimal weighting matrices (Q and R). Achieving good seakeeping performance for a WPC is challenging because it is a multi-input multi-output (MIMO) system with uncertain coefficients. Besides the kinematic constraints of the WPC, external conditions must be considered, such as the sea disturbance and the control of the actuators (a T-foil and two flaps). Moreover, this paper describes the MATLAB and LabVIEW software platforms used to simulate the motion reduction of the WPC. Finally, a real-time (RT) NI CompactRIO embedded controller is selected to test the effectiveness of the actuators based on the proposed techniques. In conclusion, simulation and experimental results prove the correctness of the proposed algorithm: heave and pitch reductions exceed 18% at various high speeds and in rough sea conditions. The results also verify the feasibility of the NI CompactRIO embedded controller.
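A hedged toy of the weight-selection idea (a scalar plant stands in for the WPC dynamics; every number here is an assumption): a small elitist genetic search over the LQR state weight q, with the gain obtained by fixed-point iteration of the discrete Riccati equation.

```python
import random

def lqr_gain(a, b, q, r=1.0, iters=500):
    """Scalar discrete-time LQR gain via fixed-point Riccati iteration."""
    p = q
    for _ in range(iters):
        k = (b*p*a)/(r + b*p*b)
        p = q + a*p*(a - b*k)
    return (b*p*a)/(r + b*p*b)

def fitness(q, a=0.95, b=0.1):
    """Closed-loop cost: residual motion plus a small actuator-effort penalty."""
    k = lqr_gain(a, b, q)
    x, cost = 1.0, 0.0
    for _ in range(200):
        u = -k*x
        cost += x*x + 0.05*u*u
        x = a*x + b*u
    return cost

random.seed(0)
pop = [random.uniform(0.1, 50.0) for _ in range(20)]
for _ in range(30):                          # generations
    pop.sort(key=fitness)
    parents = pop[:10]                       # elitist selection
    children = [max(0.01, random.choice(parents) + random.gauss(0.0, 1.0))
                for _ in range(10)]          # Gaussian mutation
    pop = parents + children
best_q = min(pop, key=fitness)
```

Elitism guarantees the search never loses its best candidate, so the returned weight is at least as good as any point in the initial population; the full problem replaces the scalar Riccati solve with its matrix counterpart and the simulated cost with the measured heave/pitch response.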
2014-01-01
Background Support vector regression (SVR) and Gaussian process regression (GPR) were used for the analysis of electroanalytical experimental data to estimate diffusion coefficients. Results For simulated cyclic voltammograms based on the EC, Eqr, and EqrC mechanisms these regression algorithms in combination with nonlinear kernel/covariance functions yielded diffusion coefficients with higher accuracy as compared to the standard approach of calculating diffusion coefficients relying on the Nicholson-Shain equation. The level of accuracy achieved by SVR and GPR is virtually independent of the rate constants governing the respective reaction steps. Further, the reduction of high-dimensional voltammetric signals by manual selection of typical voltammetric peak features decreased the performance of both regression algorithms compared to a reduction by downsampling or principal component analysis. After training on simulated data sets, diffusion coefficients were estimated by the regression algorithms for experimental data comprising voltammetric signals for three organometallic complexes. Conclusions Estimated diffusion coefficients closely matched the values determined by the parameter fitting method, but reduced the required computational time considerably for one of the reaction mechanisms. The automated processing of voltammograms according to the regression algorithms yields better results than the conventional analysis of peak-related data. PMID:24987463
Xu, Dong; Yan, Shuicheng; Tao, Dacheng; Lin, Stephen; Zhang, Hong-Jiang
2007-11-01
Dimensionality reduction algorithms, which aim to select a small set of efficient and discriminant features, have attracted great attention for human gait recognition and content-based image retrieval (CBIR). In this paper, we present extensions of our recently proposed marginal Fisher analysis (MFA) to address these problems. For human gait recognition, we first present a direct application of MFA, then inspired by recent advances in matrix and tensor-based dimensionality reduction algorithms, we present matrix-based MFA for directly handling 2-D input in the form of gray-level averaged images. For CBIR, we deal with the relevance feedback problem by extending MFA to marginal biased analysis, in which within-class compactness is characterized only by the distances between each positive sample and its neighboring positive samples. In addition, we present a new technique to acquire a direct optimal solution for MFA without resorting to objective function modification as done in many previous algorithms. We conduct comprehensive experiments on the USF HumanID gait database and the Corel image retrieval database. Experimental results demonstrate that MFA and its extensions outperform related algorithms in both applications.
Liang, Lihua; Zhang, Songtao; Zhao, Peng
2018-01-01
This work presents an optimal linear quadratic regulator (LQR) based on a genetic algorithm (GA) to solve the two-degrees-of-freedom (2-DoF) motion control problem in head seas for wave-piercing catamarans (WPC). The proposed GA-based LQR control strategy selects the optimal weighting matrices (Q and R). Achieving good seakeeping performance for a WPC is challenging because it is a multi-input multi-output (MIMO) system with uncertain coefficients. Besides the kinematic constraints of the WPC, external conditions must be considered, such as the sea disturbance and the control of the actuators (a T-foil and two flaps). Moreover, this paper describes the MATLAB and LabVIEW software platforms used to simulate the motion reduction of the WPC. Finally, a real-time (RT) NI CompactRIO embedded controller is selected to test the effectiveness of the actuators based on the proposed techniques. In conclusion, simulation and experimental results prove the correctness of the proposed algorithm: heave and pitch reductions exceed 18% at various high speeds and in rough sea conditions. The results also verify the feasibility of the NI CompactRIO embedded controller. PMID:29709008
Improved Cost-Base Design of Water Distribution Networks using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Moradzadeh Azar, Foad; Abghari, Hirad; Taghi Alami, Mohammad; Weijs, Steven
2010-05-01
Population growth and the progressive extension of urbanization in different parts of Iran cause an increasing demand for primary needs. Water, this vital liquid, is the most important natural need for human life, and providing it requires the design and construction of water distribution networks, which impose enormous costs on the country's budget. Any reduction in these costs allows more people in society to be served at least cost. Municipal councils therefore need to maximize benefits or minimize expenditures, and to achieve this, the engineering design depends on cost optimization techniques. This paper presents optimization models based on a genetic algorithm (GA) to find the minimum design cost of Mahabad City's (North West Iran) water distribution network. By designing two models and comparing the resulting costs, the abilities of the GA were determined: the GA-based model could find optimum pipe diameters that reduce the design cost of the network. Results show that water distribution network design using the genetic algorithm can lead to a reduction of at least 7% in project costs in comparison to the classic model. Keywords: Genetic Algorithm, Optimum Design of Water Distribution Network, Mahabad City, Iran.
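A hedged toy of the approach (three pipes in series, a small diameter catalog, Hazen-Williams head loss; all numbers are illustrative assumptions, not the Mahabad network): a GA with an elitist population minimizes pipe cost, with a large penalty for designs whose total head loss exceeds the allowance.

```python
import random

LENGTH = 100.0                      # m, per pipe
FLOW = 0.05                         # m^3/s, design flow
CHW = 130.0                         # Hazen-Williams roughness coefficient
DIAMS = [0.3, 0.4, 0.5]             # m, available catalog diameters
UNIT_COST = [100.0, 160.0, 250.0]   # cost per metre for each diameter
H_MAX = 0.3                         # m, total head-loss allowance

def head_loss(d):
    """Hazen-Williams head loss for one pipe of diameter d."""
    return 10.67*LENGTH*FLOW**1.852/(CHW**1.852*d**4.87)

def cost(genes):
    return sum(UNIT_COST[g]*LENGTH for g in genes)

def fitness(genes):
    loss = sum(head_loss(DIAMS[g]) for g in genes)
    penalty = 1e12*max(0.0, loss - H_MAX)   # infeasible designs dominated
    return cost(genes) + penalty

random.seed(2)
# seed the population with the all-largest (safely feasible) design
pop = [[2, 2, 2]] + [[random.randrange(3) for _ in range(3)] for _ in range(19)]
for _ in range(40):
    pop.sort(key=fitness)
    parents = pop[:10]                      # elitist selection
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        child = [random.choice(p) for p in zip(a, b)]         # uniform crossover
        if random.random() < 0.3:
            child[random.randrange(3)] = random.randrange(3)  # mutation
        children.append(child)
    pop = parents + children
best = min(pop, key=fitness)
```

Because the all-largest design is feasible and elitism preserves the best individual, the final design is guaranteed feasible and no more expensive than over-sizing every pipe; real networks add looped hydraulics and a solver such as EPANET in place of the series head-loss sum.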
A model reduction approach to numerical inversion for a parabolic partial differential equation
NASA Astrophysics Data System (ADS)
Borcea, Liliana; Druskin, Vladimir; Mamonov, Alexander V.; Zaslavsky, Mikhail
2014-12-01
We propose a novel numerical inversion algorithm for the coefficients of parabolic partial differential equations, based on model reduction. The study is motivated by the application of controlled source electromagnetic exploration, where the unknown is the subsurface electrical resistivity and the data are time resolved surface measurements of the magnetic field. The algorithm presented in this paper considers inversion in one and two dimensions. The reduced model is obtained with rational interpolation in the frequency (Laplace) domain and a rational Krylov subspace projection method. It amounts to a nonlinear mapping from the function space of the unknown resistivity to the small dimensional space of the parameters of the reduced model. We use this mapping as a nonlinear preconditioner for the Gauss-Newton iterative solution of the inverse problem. The advantage of the inversion algorithm is twofold. First, the nonlinear preconditioner resolves most of the nonlinearity of the problem. Thus the iterations are less likely to get stuck in local minima and the convergence is fast. Second, the inversion is computationally efficient because it avoids repeated accurate simulations of the time-domain response. We study the stability of the inversion algorithm for various rational Krylov subspaces, and assess its performance with numerical experiments.
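As a hedged, minimal stand-in for the Gauss-Newton outer loop (a one-parameter exponential decay model rather than a resistivity map; the model-reduction preconditioner itself is not sketched):

```python
import math

def gauss_newton_decay(t, y, c0, n_iter=20):
    """Fit y ~ exp(-c*t) by Gauss-Newton on the residual r = y - exp(-c*t)."""
    c = c0
    for _ in range(n_iter):
        m = [math.exp(-c*ti) for ti in t]        # model prediction
        r = [yi - mi for yi, mi in zip(y, m)]    # residuals
        j = [-ti*mi for ti, mi in zip(t, m)]     # Jacobian d(model)/dc
        # normal-equations step: c += (J^T r)/(J^T J)
        c += sum(ji*ri for ji, ri in zip(j, r))/sum(ji*ji for ji in j)
    return c

t = [0.25*i for i in range(13)]            # samples on 0 .. 3
y = [math.exp(-0.7*ti) for ti in t]        # noise-free data, true c = 0.7
c_hat = gauss_newton_decay(t, y, c0=0.2)
```

On this zero-residual problem the iteration converges rapidly from a poor initial guess; the paper's contribution is precisely to make the high-dimensional analogue behave this benignly by preconditioning with the reduced model.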
Generation Algorithm of Discrete Line in Multi-Dimensional Grids
NASA Astrophysics Data System (ADS)
Du, L.; Ben, J.; Li, Y.; Wang, R.
2017-09-01
A Discrete Global Grid System (DGGS) is a kind of digital multi-resolution earth reference model; structurally, it is conducive to the integration and mining of geospatial big data. Vector data are one of the important types of spatial data, and only by discretization can they be processed and analyzed in a grid system. Based on a set of constraint conditions, this paper puts forward a strict definition of discrete lines, building a mathematical model of the discrete lines by a base-vector combination method. We transform the mesh discrete line problem in n-dimensional grids into the problem of an optimal deviated path in n-1 dimensions using a hyperplane, thereby realizing a dimension reduction in the expression of mesh discrete lines. On this basis, we designed a simple and efficient algorithm for dimension reduction and generation of the discrete lines. The experimental results show that our algorithm can be applied not only in the two-dimensional rectangular grid, but also in the two-dimensional hexagonal grid and the three-dimensional cubic grid. Moreover, when applied in the two-dimensional rectangular grid, our algorithm obtains a discrete line that is more similar to the line in Euclidean space.
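For the two-dimensional rectangular grid, discrete line generation reduces to the classical Bresenham construction; a hedged sketch of that special case (the multi-dimensional, hexagonal, and cubic variants of the paper are not reproduced):

```python
def grid_line(x0, y0, x1, y1):
    """Cells of an 8-connected discrete line (classic Bresenham),
    the rectangular-grid special case of discrete line generation."""
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 >= x0 else -1
    sy = 1 if y1 >= y0 else -1
    err = dx - dy
    x, y = x0, y0
    while True:
        cells.append((x, y))
        if (x, y) == (x1, y1):
            break
        e2 = 2*err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return cells
```

For example, `grid_line(0, 0, 4, 2)` walks the cells (0,0), (1,0), (2,1), (3,1), (4,2), hugging the ideal Euclidean segment; the paper's hyperplane construction generalizes this deviation-minimizing walk to other grids and higher dimensions.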
Gönen, Mehmet
2014-01-01
Coupled training of dimensionality reduction and classification is proposed previously to improve the prediction performance for single-label problems. Following this line of research, in this paper, we first introduce a novel Bayesian method that combines linear dimensionality reduction with linear binary classification for supervised multilabel learning and present a deterministic variational approximation algorithm to learn the proposed probabilistic model. We then extend the proposed method to find intrinsic dimensionality of the projected subspace using automatic relevance determination and to handle semi-supervised learning using a low-density assumption. We perform supervised learning experiments on four benchmark multilabel learning data sets by comparing our method with baseline linear dimensionality reduction algorithms. These experiments show that the proposed approach achieves good performance values in terms of hamming loss, average AUC, macro F1, and micro F1 on held-out test data. The low-dimensional embeddings obtained by our method are also very useful for exploratory data analysis. We also show the effectiveness of our approach in finding intrinsic subspace dimensionality and semi-supervised learning tasks. PMID:24532862
Gönen, Mehmet
2014-03-01
Coupled training of dimensionality reduction and classification is proposed previously to improve the prediction performance for single-label problems. Following this line of research, in this paper, we first introduce a novel Bayesian method that combines linear dimensionality reduction with linear binary classification for supervised multilabel learning and present a deterministic variational approximation algorithm to learn the proposed probabilistic model. We then extend the proposed method to find intrinsic dimensionality of the projected subspace using automatic relevance determination and to handle semi-supervised learning using a low-density assumption. We perform supervised learning experiments on four benchmark multilabel learning data sets by comparing our method with baseline linear dimensionality reduction algorithms. These experiments show that the proposed approach achieves good performance values in terms of hamming loss, average AUC, macro F1, and micro F1 on held-out test data. The low-dimensional embeddings obtained by our method are also very useful for exploratory data analysis. We also show the effectiveness of our approach in finding intrinsic subspace dimensionality and semi-supervised learning tasks.
[Neurological changes related to malnutrition during the spanish civil war (1936-1939)].
Culebras, Jesús M
2014-04-01
In this lecture, given at the International Conferences on Neuroscience, in Quito, May 31st-June 1st of 2013, the topic of famine situations during the Spanish Civil War, 1936-1939, was addressed. Madrid, the capital of Spain, was under food, water and milk rationing during that period. This situation led to conditions that showed the relationships between the nervous system and nutrition. The Madrilenian population was submitted to a real experiment of hyponutrition, similar to the one that may be reproduced at the laboratory. At the end of the war, the National Direction on Health and the Institute of Medical Investigations, with the collaboration of the Rockefeller Foundation, carried out a series of clinical and food consumption surveys among the Madrilenian population. There were three medical situations that were of particular relevance during the Civil War and after it: the pellagra epidemics, the onset of lathyrism, and the so-called Vallecas syndrome. The occurrence of pellagra cases was paramount because it allowed reconsidering all the unspecific symptoms observed from an already known vitamin deficiency. Pellagra became the most prevalent deficit-related disease, and the one most clearly related to nutrition. Lathyrism is a chronic intoxication produced by the accumulation of neurotoxins. It is due to common intake of chickling peas (Lathyrus sativus). Chickling peas are toxic only if they represent more than 30% of the daily calories consumed for a prolonged period greater than two to three months. Lathyrism would reoccur in the Spanish population after the war, in 1941 and 1942, the so-called "famine years", when due to the scarcity of foods chickling pea flour was again consumed in high amounts. Deficiency-related neuropathies observed in Madrid during the Civil War led to new and original clinical descriptions. 
In children from schools of the Vallecas neighborhood, a deficiency syndrome, likely related to vitamin B complex deficiency, was described, which manifested by muscle cramps and weakness, and was termed the Vallecas syndrome. Poor fat content in the diet and a light decrease in calcium levels, which were already very low, were observed in the group with cramps. Both the administration of tablets containing an adequate amount of calcium and phosphorus and the daily intake of 4-6 milligrams of thiamine, achieved a considerable reduction in the frequency and severity of the cramps, or their complete resolution. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
DC Bus Regulation with a Flywheel Energy Storage System
NASA Technical Reports Server (NTRS)
Kenny, Barbara H.; Kascak, Peter E.
2003-01-01
This paper describes the DC bus regulation control algorithm for the NASA flywheel energy storage system during charge, charge reduction and discharge modes of operation. The algorithm was experimentally verified with results given in a previous paper. This paper presents the necessary models for simulation with detailed block diagrams of the controller algorithm. It is shown that the flywheel system and the controller can be modeled in three levels of detail depending on the type of analysis required. The three models are explained and then compared using simulation results.
Parallel Algorithms and Patterns
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robey, Robert W.
2016-06-16
This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
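As a concrete instance of the prefix-scan pattern mentioned above, here is a sequential sketch of the Blelloch work-efficient exclusive scan (power-of-two length assumed for brevity); within each level, every pairwise operation touches independent elements, which is exactly what makes the pattern parallelizable:

```python
def exclusive_scan(a):
    """Blelloch-style exclusive prefix sum. Each inner loop's iterations are
    independent and could run concurrently on a parallel machine."""
    n = len(a)                  # assumed to be a power of two
    t = list(a)
    d = 1
    while d < n:                # up-sweep (reduce) phase
        for i in range(2*d - 1, n, 2*d):
            t[i] += t[i - d]
        d *= 2
    t[n - 1] = 0                # identity seeds the down-sweep
    d //= 2
    while d >= 1:               # down-sweep phase
        for i in range(2*d - 1, n, 2*d):
            t[i - d], t[i] = t[i], t[i] + t[i - d]
        d //= 2
    return t
```

For example, `exclusive_scan([1, 2, 3, 4, 5, 6, 7, 8])` yields `[0, 1, 3, 6, 10, 15, 21, 28]` in O(n) total work across O(log n) parallel steps.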
How Much War Should Be Included in a Course on World War II?
ERIC Educational Resources Information Center
Schilling, Donald G.
1993-01-01
Contends that end of Cold War increases need for students to understand causes and aftermath of World War II. Recommends spending less time on military aspects of the war and more time on the economic, social, and cultural impact of total war. Provides a selected list of resources to be used in a college level course on the war. (CFR)
Desai, Prajakta; Desai, Aniruddha
2017-01-01
Traffic congestion continues to be a persistent problem throughout the world. As vehicle-to-vehicle communication develops, there is an opportunity of using cooperation among close proximity vehicles to tackle the congestion problem. The intuition is that if vehicles could cooperate opportunistically when they come close enough to each other, they could, in effect, spread themselves out among alternative routes so that vehicles do not all jam up on the same roads. Our previous work proposed a decentralized multiagent based vehicular congestion management algorithm entitled Congestion Avoidance and Route Allocation using Virtual Agent Negotiation (CARAVAN), wherein the vehicles acting as intelligent agents perform cooperative route allocation using inter-vehicular communication. This paper focuses on evaluating the practical applicability of this approach by testing its robustness and performance (in terms of travel time reduction), across variations in: (a) environmental parameters such as road network topology and configuration; (b) algorithmic parameters such as vehicle agent preferences and route cost/preference multipliers; and (c) agent-related parameters such as equipped/non-equipped vehicles and compliant/non-compliant agents. Overall, the results demonstrate the adaptability and robustness of the decentralized cooperative vehicles approach to providing global travel time reduction using simple local coordination strategies. PMID:28792513
Desai, Prajakta; Loke, Seng W; Desai, Aniruddha
2017-01-01
Traffic congestion continues to be a persistent problem throughout the world. As vehicle-to-vehicle communication develops, there is an opportunity of using cooperation among close proximity vehicles to tackle the congestion problem. The intuition is that if vehicles could cooperate opportunistically when they come close enough to each other, they could, in effect, spread themselves out among alternative routes so that vehicles do not all jam up on the same roads. Our previous work proposed a decentralized multiagent based vehicular congestion management algorithm entitled Congestion Avoidance and Route Allocation using Virtual Agent Negotiation (CARAVAN), wherein the vehicles acting as intelligent agents perform cooperative route allocation using inter-vehicular communication. This paper focuses on evaluating the practical applicability of this approach by testing its robustness and performance (in terms of travel time reduction), across variations in: (a) environmental parameters such as road network topology and configuration; (b) algorithmic parameters such as vehicle agent preferences and route cost/preference multipliers; and (c) agent-related parameters such as equipped/non-equipped vehicles and compliant/non-compliant agents. Overall, the results demonstrate the adaptability and robustness of the decentralized cooperative vehicles approach to providing global travel time reduction using simple local coordination strategies.
Enhanced algorithms for stochastic programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishna, Alamuru S.
1993-09-01
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming, by varying the choice of approximation functions used in this method. We have concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduced the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved first, both to provide a starting point for the stochastic solution and to speed up the algorithm by making use of the information obtained from the expected value solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.
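A hedged sketch of the variance-reduction idea (toy functions, not the dissertation's recourse models): approximate an "expensive" function f by a cheap piecewise-linear g, estimate E[f - g] with few expensive samples, and add back E[g] computed cheaply from many samples of g (a control-variate arrangement in the same spirit):

```python
import math
import random

def f(x):
    """Stand-in for an expensive recourse function."""
    return math.exp(0.5*x)

def g(x):
    """Cheap piecewise-linear approximation of f, fitted crudely by hand."""
    return 1.0 + 0.5*x + 0.25*max(x, 0.0)

def plain_mc(n, rng):
    """Direct Monte Carlo on the expensive f."""
    return sum(f(rng.gauss(0.0, 1.0)) for _ in range(n))/n

def control_variate_mc(n, rng, m=200000):
    # cheap, accurate estimate of E[g] from many samples of the cheap g
    eg = sum(g(rng.gauss(0.0, 1.0)) for _ in range(m))/m
    # few expensive evaluations of the small-variance difference f - g
    diff = sum(f(x) - g(x) for x in (rng.gauss(0.0, 1.0) for _ in range(n)))/n
    return eg + diff

rng = random.Random(1)
exact = math.exp(0.125)          # E[exp(X/2)] for X ~ N(0, 1)
est_plain = plain_mc(20000, rng)
est_cv = control_variate_mc(2000, rng)
```

Because f - g has much smaller variance than f, the second estimator matches the accuracy of the first with an order of magnitude fewer expensive evaluations, which is the effect the dissertation reports.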
Using optical flow for the detection of floating mines in IR image sequences
NASA Astrophysics Data System (ADS)
Borghgraef, Alexander; Acheroy, Marc
2006-09-01
In the first Gulf War, unmoored floating mines proved to be a real hazard for shipping traffic. An automated system capable of detecting these and other free-floating small objects, using readily available sensors such as infra-red cameras, would be a valuable mine-warfare asset, and could double as a collision avoidance mechanism and a search-and-rescue aid. The noisy background provided by the sea surface, and occlusion by waves, make it difficult to detect small floating objects using only algorithms based upon the intensity, size or shape of the target. This leads us to look at the image sequence for temporal detection characteristics. The target's apparent motion is such a characteristic, given the contrast between the bobbing motion of the floating object and the strong horizontal component present in the propagation of the wavefronts. We have applied the Proesmans optical flow algorithm to IR video footage of practice mines in order to extract this motion characteristic; a threshold on the vertical motion component is then imposed to detect the floating targets.
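A hedged toy of the detection step only (per-column vertical shift found by brute-force 1-D matching between two frames, then thresholded; the Proesmans flow estimator itself is not reproduced):

```python
def vertical_shifts(frame0, frame1, max_shift=3):
    """Per-column integer vertical displacement between two gray frames."""
    rows, cols = len(frame0), len(frame0[0])
    shifts = []
    for c in range(cols):
        col0 = [frame0[r][c] for r in range(rows)]
        best, best_err = 0, float("inf")
        # prefer the smallest |shift| on ties, so featureless columns report 0
        for s in sorted(range(-max_shift, max_shift + 1), key=abs):
            err = 0.0
            for r in range(rows):
                r1 = min(max(r + s, 0), rows - 1)   # clamp at frame edges
                err += (col0[r] - frame1[r1][c])**2
            if err < best_err:
                best, best_err = s, err
        shifts.append(best)
    return shifts

# toy frames: a bright blob in column 1 bobs down by two pixels
frame0 = [[0.0]*4 for _ in range(8)]
frame1 = [[0.0]*4 for _ in range(8)]
frame0[2][1] = 1.0
frame1[4][1] = 1.0
shifts = vertical_shifts(frame0, frame1)
detections = [c for c, s in enumerate(shifts) if abs(s) >= 2]
```

The bobbing column is flagged while the static background is not; a dense flow field like Proesmans' plays the same role with sub-pixel accuracy and spatial regularization.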
Passman, Rod S; Rogers, John D; Sarkar, Shantanu; Reiland, Jerry; Reisfeld, Erin; Koehler, Jodi; Mittal, Suneet
2017-07-01
Undersensing of premature ventricular beats and low-amplitude R waves are primary causes for inappropriate bradycardia and pause detections in insertable cardiac monitors (ICMs). The purpose of this study was to develop and validate an enhanced algorithm to reduce inappropriately detected bradycardia and pause episodes. Independent data sets to develop and validate the enhanced algorithm were derived from a database of ICM-detected bradycardia and pause episodes in de-identified patients monitored for unexplained syncope. The original algorithm uses an auto-adjusting sensitivity threshold for R-wave sensing to detect tachycardia and avoid T-wave oversensing. In the enhanced algorithm, a second sensing threshold is used with a long blanking and fixed lower sensitivity threshold, looking for evidence of undersensed signals. Data reported includes percent change in appropriate and inappropriate bradycardia and pause detections as well as changes in episode detection sensitivity and positive predictive value with the enhanced algorithm. The validation data set, from 663 consecutive patients, consisted of 4904 (161 patients) bradycardia and 2582 (133 patients) pause episodes, of which 2976 (61%) and 996 (39%) were appropriately detected bradycardia and pause episodes. The enhanced algorithm reduced inappropriate bradycardia and pause episodes by 95% and 47%, respectively, with 1.7% and 0.6% reduction in appropriate episodes, respectively. The average episode positive predictive value improved by 62% (P < .001) for bradycardia detection and by 26% (P < .001) for pause detection, with an average relative sensitivity of 95% (P < .001) and 99% (P = .5), respectively. The enhanced dual sense algorithm for bradycardia and pause detection in ICMs substantially reduced inappropriate episode detection with a minimal reduction in true episode detection. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
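The dual-sense idea can be caricatured in a few lines (assumed toy logic with impulse-like beats, not the device firmware): detect beats with the primary threshold, then re-examine any long R-R gap with the lower fixed threshold for evidence of an undersensed beat before declaring a pause:

```python
def confirm_pauses(signal, primary=0.5, secondary=0.2, min_gap=40):
    """Flag R-R gaps >= min_gap samples only when no low-amplitude beat
    (above the secondary threshold) hides inside the gap."""
    beats = [i for i, v in enumerate(signal) if v >= primary]
    pauses = []
    for a, b in zip(beats, beats[1:]):
        if b - a >= min_gap:
            # secondary (lower, fixed) threshold: evidence of undersensing?
            undersensed = any(signal[i] >= secondary for i in range(a + 1, b))
            if not undersensed:
                pauses.append((a, b))
    return pauses

clean = [0.0]*100
clean[10] = clean[60] = 1.0        # two sensed beats, 50-sample gap
missed = list(clean)
missed[35] = 0.3                   # low-amplitude beat the primary misses
```

On the clean trace the gap is reported as a pause; on the second trace the secondary threshold finds the undersensed beat and suppresses the false detection, mirroring the reported drop in inappropriate episodes with little loss of true ones.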
Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui
2014-01-01
This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. 
The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm is feasible and effective. PMID:25207870
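An illustrative single-atom matching pursuit with an attenuation-coefficient stopping rule (generic greedy pursuit over a toy dictionary; names and thresholds here are assumptions, not the paper's CD-SaMP implementation):

```python
import math

def matching_pursuit(signal, atoms, atten_stop=0.05, max_iter=10):
    """Greedy single-atom matching pursuit; stop when one iteration shrinks
    the residual norm by a relative factor smaller than atten_stop."""
    n = len(signal)
    residual = list(signal)
    picked = []
    prev_norm = math.sqrt(sum(r*r for r in residual))
    for _ in range(max_iter):
        if prev_norm == 0.0:
            break
        # single-atom greedy step: atom best correlated with the residual
        scores = [sum(a[i]*residual[i] for i in range(n)) for a in atoms]
        k = max(range(len(atoms)), key=lambda j: abs(scores[j]))
        coef = scores[k]/sum(a*a for a in atoms[k])
        for i in range(n):
            residual[i] -= coef*atoms[k][i]
        picked.append((k, coef))
        norm = math.sqrt(sum(r*r for r in residual))
        if (prev_norm - norm)/prev_norm < atten_stop:
            break               # attenuation too small: likely fitting noise
        prev_norm = norm
    return picked, residual

atoms = [[1.0, 0.0, 0.0, 0.0],
         [0.0, 1.0, 0.0, 0.0],
         [0.0, 0.0, 1.0, 0.0]]
picked, residual = matching_pursuit([2.0, 0.5, 0.0, 0.0], atoms)
```

On this orthogonal toy dictionary the pursuit recovers both components and stops with a zero residual; the paper's contribution lies in enriching the dictionary with modulation atoms matched to gear-fault signatures and in tuning the attenuation threshold adaptively.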
Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui
2014-09-09
This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. 
The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm is feasible and effective.
Extremum seeking-based optimization of high voltage converter modulator rise-time
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scheinker, Alexander; Bland, Michael; Krstic, Miroslav
2013-02-01
We digitally implement an extremum seeking (ES) algorithm, which optimizes the rise time of the output voltage of a high voltage converter modulator (HVCM) at the Los Alamos Neutron Science Center (LANSCE) HVCM test stand by iteratively and simultaneously tuning the first 8 switching edges of each of the three phase drive waveforms (24 variables total). We achieve a 50 μs rise time, a reduction by half compared to the 100 μs achieved at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory. Considering that HVCMs typically operate with an output voltage of 100 kV at a 60 Hz repetition rate, the 50 μs rise time reduction will result in very significant energy savings. The ES algorithm proved successful despite the noisy measurements and cost calculations, confirming the theoretical result that the algorithm is not affected by noise whose frequency components are independent of the perturbing frequencies.
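A minimal discrete extremum-seeking loop on a scalar cost, in the spirit of the HVCM rise-time tuner (one parameter rather than 24; the perturbation amplitude, frequency, and gain below are assumptions): perturb the parameter sinusoidally, demodulate the measured cost with the same sinusoid, and descend along the resulting gradient estimate.

```python
import math

def extremum_seek(cost, theta0, a=0.2, w=5.0, gain=0.8, dt=0.1, steps=600):
    """Single-parameter extremum seeking toward a minimum of cost(theta)."""
    theta = theta0
    for k in range(steps):
        s = math.sin(w*k*dt)
        j = cost(theta + a*s)        # measured (possibly noisy) cost
        theta -= gain*dt*j*s         # demodulation ~ averaged gradient step
    return theta

# toy cost with its minimum at theta = 2 (stand-in for the rise-time objective)
theta_hat = extremum_seek(lambda th: (th - 2.0)**2, theta0=0.0)
```

Averaging over the perturbation period, the update behaves like gradient descent with effective gain proportional to gain*a/2, which is why noise uncorrelated with the perturbation frequency averages out; the multi-parameter version simply assigns each variable its own perturbation frequency.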
Ovtchinnikov, Evgueni E.; Xanthis, Leonidas S.
2000-01-01
We present a methodology for the efficient numerical solution of eigenvalue problems of full three-dimensional elasticity for thin elastic structures, such as shells, plates and rods of arbitrary geometry, discretized by the finite element method. Such problems are solved by iterative methods, which, however, are known to suffer from slow convergence or even convergence failure, when the thickness is small. In this paper we show an effective way of resolving this difficulty by invoking a special preconditioning technique associated with the effective dimensional reduction algorithm (EDRA). As an example, we present an algorithm for computing the minimal eigenvalue of a thin elastic plate and we show both theoretically and numerically that it is robust with respect to both the thickness and discretization parameters, i.e. the convergence does not deteriorate with diminishing thickness or mesh refinement. This robustness is sine qua non for the efficient computation of large-scale eigenvalue problems for thin elastic structures. PMID:10655469
Assessment of metal artifact reduction methods in pelvic CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdoli, Mehrsima; Mehranian, Abolfazl; Ailianou, Angeliki
2016-04-15
Purpose: Metal artifact reduction (MAR) produces images with improved quality potentially leading to confident and reliable clinical diagnosis and therapy planning. In this work, the authors evaluate the performance of five MAR techniques for the assessment of computed tomography images of patients with hip prostheses. Methods: Five MAR algorithms were evaluated using simulation and clinical studies. The algorithms included one-dimensional linear interpolation (LI) of the corrupted projection bins in the sinogram, two-dimensional interpolation (2D), a normalized metal artifact reduction (NMAR) technique, a metal deletion technique, and a maximum a posteriori completion (MAPC) approach. The algorithms were applied to ten simulated datasets as well as 30 clinical studies of patients with metallic hip implants. Qualitative evaluations were performed by two blinded experienced radiologists who ranked overall artifact severity and pelvic organ recognition for each algorithm by assigning scores from zero to five (zero indicating totally obscured organs with no structures identifiable and five indicating recognition with high confidence). Results: Simulation studies revealed that 2D, NMAR, and MAPC techniques performed almost equally well in all regions. LI falls behind the other approaches in terms of reducing dark streaking artifacts as well as preserving unaffected regions (p < 0.05). Visual assessment of clinical datasets revealed the superiority of NMAR and MAPC in the evaluated pelvic organs and in terms of overall image quality. Conclusions: Overall, all methods, except LI, performed equally well in artifact-free regions. Considering both clinical and simulation studies, 2D, NMAR, and MAPC seem to outperform the other techniques.
Ardley, Nicholas D; Lau, Ken K; Buchan, Kevin
2013-12-01
Cervical spine injuries occur in 4-8 % of adults with head trauma. Dual acquisition technique has been traditionally used for the CT scanning of brain and cervical spine. The purpose of this study was to determine the efficacy of radiation dose reduction by using a single acquisition technique that incorporated both anatomical regions with a dedicated neck detection algorithm. Thirty trauma patients for brain and cervical spine CT were included and were scanned with the single acquisition technique. The radiation doses from the single CT acquisition technique with the neck detection algorithm, which allowed appropriate independent dose administration relevant to brain and cervical spine regions, were recorded. Comparison was made both to the doses calculated from the simulation of the traditional dual acquisitions with matching parameters, and to the doses of retrospective dual acquisition legacy technique with the same sample size. The mean simulated dose for the traditional dual acquisition technique was 3.99 mSv, comparable to the average dose of 4.2 mSv from 30 previous patients who had CT of brain and cervical spine as dual acquisitions. The mean dose from the single acquisition technique was 3.35 mSv, resulting in a 16 % overall dose reduction. The images from the single acquisition technique were of excellent diagnostic quality. The new single acquisition CT technique incorporating the neck detection algorithm for brain and cervical spine significantly reduces the overall radiation dose by eliminating the unavoidable overlapping range between 2 anatomical regions which occurs with the traditional dual acquisition technique.
Hearing through the noise: Biologically inspired noise reduction
NASA Astrophysics Data System (ADS)
Lee, Tyler Paul
Vocal communication in the natural world demands that a listener perform a remarkably complicated task in real-time. Vocalizations mix with all other sounds in the environment as they travel to the listener, arriving as a jumbled low-dimensional signal. A listener must then use this signal to extract the structure corresponding to individual sound sources. How this computation is implemented in the brain remains poorly understood, yet an accurate description of such mechanisms would impact a variety of medical and technological applications of sound processing. In this thesis, I describe initial work on how neurons in the secondary auditory cortex of the Zebra Finch extract song from naturalistic background noise. I then build on our understanding of the function of these neurons by creating an algorithm that extracts speech from natural background noise using spectrotemporal modulations. The algorithm, implemented as an artificial neural network, can be flexibly applied to any class of signal or noise and performs better than an optimal frequency-based noise reduction algorithm for a variety of background noises and signal-to-noise ratios. One potential drawback to using spectrotemporal modulations for noise reduction, though, is that analyzing the modulations present in an ongoing sound requires a latency set by the slowest temporal modulation computed. The algorithm avoids this problem by reducing noise predictively, taking advantage of the large amount of temporal structure present in natural sounds. This predictive denoising has ties to recent work suggesting that the auditory system uses attention to focus on predicted regions of spectrotemporal space when performing auditory scene analysis.
Two Procedures to Flag Radio Frequency Interference in the UV Plane
NASA Astrophysics Data System (ADS)
Sekhar, Srikrishna; Athreya, Ramana
2018-07-01
We present two algorithms to identify and flag radio frequency interference (RFI) in radio interferometric imaging data. The first algorithm utilizes the redundancy of visibilities inside a UV cell in the visibility plane to identify corrupted data, while varying the detection threshold in accordance with the observed reduction in noise with radial UV distance. In the second algorithm, we propose a scheme to detect faint RFI in the visibility time-channel (TC) plane of baselines. The efficacy of identifying RFI in the residual visibilities is reduced by the presence of ripples due to inaccurate subtraction of the strongest sources. This can be due to several reasons including primary beam asymmetries and other direction-dependent calibration errors. We eliminated these ripples by clipping the corresponding peaks in the associated Fourier plane. RFI was detected in the ripple-free TC plane but was flagged in the original visibilities. Application of these two algorithms to five different 150 MHz data sets from the GMRT resulted in a reduction in image noise of 20%–50% throughout the field along with a reduction in systematics and a corresponding increase in the number of detected sources. However, in comparing the mean flux densities before and after flagging RFI, we find a differential change with the fainter sources (25σ < S < 100 mJy) showing a change of ‑6% to +1% relative to the stronger sources (S > 100 mJy). We are unable to explain this effect, but it could be related to the CLEAN bias known for interferometers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korpics, Mark; Surucu, Murat; Mescioglu, Ibrahim
Purpose and Objectives: To quantify, through an observer study, the reduction in metal artifacts on cone beam computed tomographic (CBCT) images using a projection-interpolation algorithm, on images containing metal artifacts from dental fillings and implants in patients treated for head and neck (H&N) cancer. Methods and Materials: An interpolation-substitution algorithm was applied to H&N CBCT images containing metal artifacts from dental fillings and implants. Image quality with respect to metal artifacts was evaluated subjectively and objectively. First, 6 independent radiation oncologists were asked to rank randomly sorted blinded images (before and after metal artifact reduction) using a 5-point rating scale (1 = severe artifacts; 5 = no artifacts). Second, the standard deviation of different regions of interest (ROI) within each image was calculated and compared with the mean rating scores. Results: The interpolation-substitution technique successfully reduced metal artifacts in 70% of the cases. From a total of 60 images from 15 H&N cancer patients undergoing image guided radiation therapy, the mean rating score on the uncorrected images was 2.3 ± 1.1, versus 3.3 ± 1.0 for the corrected images. The mean difference in ranking score between uncorrected and corrected images was 1.0 (95% confidence interval: 0.9-1.2, P<.05). The standard deviation of each ROI significantly decreased after artifact reduction (P<.01). Moreover, a negative correlation between the mean rating score for each image and the standard deviation of the oral cavity and bilateral cheeks was observed. Conclusion: The interpolation-substitution algorithm is efficient and effective for reducing metal artifacts caused by dental fillings and implants on CBCT images, as demonstrated by the statistically significant increase in observer image quality ranking and by the decrease in ROI standard deviation between uncorrected and corrected images.
Weiß, Jakob; Schabel, Christoph; Bongers, Malte; Raupach, Rainer; Clasen, Stephan; Notohamiprodjo, Mike; Nikolaou, Konstantin; Bamberg, Fabian
2017-03-01
Background: Metal artifacts often impair diagnostic accuracy in computed tomography (CT) imaging. Therefore, effective metal artifact reduction algorithms that are integrated into the clinical workflow are crucial for obtaining higher diagnostic image quality in patients with metallic hardware. Purpose: To assess the clinical performance of a novel iterative metal artifact reduction (iMAR) algorithm for CT in patients with dental fillings. Material and Methods: Thirty consecutive patients scheduled for CT imaging and with dental fillings were included in the analysis. All patients underwent CT imaging using a second generation dual-source CT scanner (120 kV single-energy; 100/Sn140 kV in dual-energy, 219 mAs, gantry rotation time 0.28 s, collimation 0.6 mm) as part of their clinical work-up. Post-processing included a standard kernel (B49) and an iterative MAR algorithm. Image quality and diagnostic value were assessed qualitatively (Likert scale) and quantitatively (HU ± SD) by two reviewers independently. Results: All 30 patients were included in the analysis, with comparable reconstruction times for iMAR and standard reconstruction (17 s ± 0.5 vs. 19 s ± 0.5; P > 0.05). Visual image quality was significantly higher for iMAR than for standard reconstruction (3.8 ± 0.5 vs. 2.6 ± 0.5; P < 0.0001) and showed improved evaluation of adjacent anatomical structures. Similarly, HU-based measurements of the degree of artifacts were significantly lower in the iMAR reconstructions than in the standard reconstruction (0.9 ± 1.6 vs. -20 ± 47; P < 0.05). Conclusion: The tested iterative, raw-data based MAR reconstruction algorithm allows a significant reduction of metal artifacts and improved evaluation of adjacent anatomical structures in the head and neck area in patients with dental hardware.
Surface horizontal logging for uranium and its decay products at a Superfund site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gadeken, L.L.; Madigan, W.P.
1995-12-31
The United States Department of Energy (DOE) is now responsible for the environmental restoration and management of a number of sites where nuclear activities occurred during the Cold War. The DOE sponsored an Expedited Site Characterization performed by Ames Laboratory at the St. Louis (Missouri) Airport Site (SLAPS) during August-September 1994. Uranium processing occurred at SLAPS during the Cold War and there is now significant residual radioactive contamination. Surveys associated the highest radioactivity levels at SLAPS with the "barium cake" (AJ-4) waste areas. This paper reports on continuous gamma ray spectroscopy measurements to identify the emitting isotopes and to quantify the amount of radioactivity present for each. An oilfield wireline gamma ray spectrometry sonde (the Compensated Spectral Natural Gamma instrument) was adapted to perform horizontal measurements with the detector section 3 ft above the soil surface. The CSNG detector is a 2-in.-diameter by 12-in.-long sodium iodide crystal. The spectrometry data are processed by a weighted-least-squares algorithm that incorporates whole-spectrum responses for the radioisotopes of interest. The radioactivities are reported in pCi/g units for each isotope, and a depth-of-emission estimate is obtained by Compton-downscattering spectral shape analysis.
A Spectral Algorithm for Envelope Reduction of Sparse Matrices
NASA Technical Reports Server (NTRS)
Barnard, Stephen T.; Pothen, Alex; Simon, Horst D.
1993-01-01
The problem of reordering a sparse symmetric matrix to reduce its envelope size is considered. A new spectral algorithm for computing an envelope-reducing reordering is obtained by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. This Laplacian eigenvector solves a continuous relaxation of a discrete problem related to envelope minimization called the minimum 2-sum problem. The permutation vector computed by the spectral algorithm is a closest permutation vector to the specified Laplacian eigenvector. Numerical results show that the new reordering algorithm usually computes smaller envelope sizes than those obtained from the current standard algorithms such as Gibbs-Poole-Stockmeyer (GPS) or SPARSPAK reverse Cuthill-McKee (RCM), in some cases reducing the envelope by more than a factor of two.
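The core of the spectral reordering step described above can be sketched in a few lines (the dense eigensolver and the helper name are illustrative choices; production codes use sparse eigensolvers):

```python
import numpy as np

def spectral_order(A):
    """Return a vertex reordering of adjacency matrix A obtained by
    sorting the entries of the Fiedler vector (the eigenvector for the
    second-smallest eigenvalue of the graph Laplacian L = D - A)."""
    d = A.sum(axis=1)
    L = np.diag(d) - A
    vals, vecs = np.linalg.eigh(L)   # eigenvalues returned in ascending order
    fiedler = vecs[:, 1]             # eigenvector of the 2nd-smallest eigenvalue
    return np.argsort(fiedler)       # sort vertices by their Fiedler entry
```

On a shuffled path graph this recovers the path ordering (up to reversal), which is the minimal-envelope ordering for that graph.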
The high performance parallel algorithm for Unified Gas-Kinetic Scheme
NASA Astrophysics Data System (ADS)
Li, Shiyi; Li, Qibing; Fu, Song; Xu, Jinxiu
2016-11-01
A high performance parallel algorithm for UGKS is developed to simulate three-dimensional internal and external flows on arbitrary grid systems. The physical domain and velocity domain are divided into different blocks and distributed according to a two-dimensional Cartesian topology, with intra-communicators in the physical domain for data exchange and other intra-communicators in the velocity domain for the sum reduction of moment integrals. Numerical results for three-dimensional cavity flow and flow past a sphere agree well with results from existing studies and validate the applicability of the algorithm. The scalability of the algorithm is tested on both small (1-16) and large (729-5832) processor counts. The measured speed-up ratio is nearly linear and the efficiency is thus around 1, demonstrating the good scalability of the present algorithm.
A parameter estimation algorithm for spatial sine testing - Theory and evaluation
NASA Technical Reports Server (NTRS)
Rost, R. W.; Deblauwe, F.
1992-01-01
This paper presents the theory and an evaluation of a spatial sine testing parameter estimation algorithm that uses directly the measured forced mode of vibration and the measured force vector. The parameter estimation algorithm uses an ARMA model and a recursive QR algorithm is applied for data reduction. In this first evaluation, the algorithm has been applied to a frequency response matrix (which is a particular set of forced mode of vibration) using a sliding frequency window. The objective of the sliding frequency window is to execute the analysis simultaneously with the data acquisition. Since the pole values and the modal density are obtained from this analysis during the acquisition, the analysis information can be used to help determine the forcing vectors during the experimental data acquisition.
Scalable and fault tolerant orthogonalization based on randomized distributed data aggregation
Gansterer, Wilfried N.; Niederbrucker, Gerhard; Straková, Hana; Schulze Grotthoff, Stefan
2013-01-01
The construction of distributed algorithms for matrix computations built on top of distributed data aggregation algorithms with randomized communication schedules is investigated. For this purpose, a new aggregation algorithm for summing or averaging distributed values, the push-flow algorithm, is developed, which achieves superior resilience properties with respect to failures compared to existing aggregation methods. It is illustrated that on a hypercube topology it asymptotically requires the same number of iterations as the optimal all-to-all reduction operation and that it scales well with the number of nodes. Orthogonalization is studied as a prototypical matrix computation task. A new fault tolerant distributed orthogonalization method rdmGS, which can produce accurate results even in the presence of node failures, is built on top of distributed data aggregation algorithms. PMID:24748902
"Idiots, infants, and the insane": mental illness and legal incompetence
Szasz, T
2005-01-01
Prior to the second world war, most persons confined in insane asylums were regarded as legally incompetent and had guardians appointed for them. Today, most persons confined in mental hospitals (or treated involuntarily, committed to outpatient treatment) are, in law, competent; nevertheless, in fact, they are treated as if they were incompetent. Should the goal of mental health policy be providing better psychiatric services to more and more people, or the reduction and ultimate elimination of the number of persons in the population treated as mentally ill? PMID:15681670
Divorce: Using Psychologists' Skills for Transformation and Conflict Reduction.
Zimmerman, Jeffrey
2016-05-01
The litigious divorce process often leaves children with parents who are at "war" and have little ability to coparent effectively. This article discusses some of the Alternative Dispute Resolution (ADR) processes designed to lessen conflict both before and after divorce. It also addresses the important work of psychologists serving in the roles of child therapists and reunification clinicians doing the difficult work of helping to heal fractured child-parent relationships. Ethical challenges are addressed and future directions for applied research are suggested. © 2016 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Liu, Yan; Deng, Honggui; Ren, Shuang; Tang, Chengying; Qian, Xuewen
2018-01-01
We propose an efficient partial transmit sequence technique based on a genetic algorithm and a peak-value optimization algorithm (GAPOA) to reduce the high peak-to-average power ratio (PAPR) in visible light communication systems based on orthogonal frequency division multiplexing (VLC-OFDM). After analysing the pros and cons of the hill-climbing algorithm, we propose the POA, with its excellent local search ability, to further process the signals whose PAPR is still over the threshold after being processed by the genetic algorithm (GA). To verify the effectiveness of the proposed technique and algorithm, we evaluate the PAPR performance and the bit error rate (BER) performance and compare them with the partial transmit sequence (PTS) technique based on GA (GA-PTS), the PTS technique based on genetic and hill-climbing algorithms (GH-PTS), and PTS based on the shuffled frog leaping algorithm and hill-climbing algorithm (SFLAHC-PTS). The results show that our technique and algorithm have not only better PAPR performance but also lower computational complexity and BER than the GA-PTS, GH-PTS, and SFLAHC-PTS techniques.
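As background for the PTS family of techniques compared above, here is a brute-force PTS sketch over a small phase alphabet (the GA/POA search in the paper replaces this exhaustive enumeration; the block construction and phase set are illustrative assumptions):

```python
import cmath
import math
from itertools import product

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband block, in dB."""
    p = [abs(v) ** 2 for v in x]
    return 10 * math.log10(max(p) / (sum(p) / len(p)))

def pts_exhaustive(blocks, phases=(1, -1, 1j, -1j)):
    """Toy partial transmit sequence search: rotate each sub-block signal
    by a candidate phase factor and keep the combination with the lowest
    PAPR. Exhaustive over phases**len(blocks) combinations."""
    best, best_papr = None, float("inf")
    for combo in product(phases, repeat=len(blocks)):
        x = [sum(c * b[i] for c, b in zip(combo, blocks))
             for i in range(len(blocks[0]))]
        p = papr_db(x)
        if p < best_papr:
            best, best_papr = x, p
    return best, best_papr
```

Since the all-ones phase combination is in the search space, the result can never be worse than transmitting the blocks unrotated; the cost is the exponential search that GA-based schemes approximate.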
Fu, C.Y.; Petrich, L.I.
1997-12-30
An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.
Why don’t you use Evolutionary Algorithms in Big Data?
NASA Astrophysics Data System (ADS)
Stanovov, Vladimir; Brester, Christina; Kolehmainen, Mikko; Semenkina, Olga
2017-02-01
In this paper we raise the question of using evolutionary algorithms in the area of Big Data processing. We show that evolutionary algorithms provide evident advantages due to their high scalability and flexibility, their ability to solve global optimization problems and optimize several criteria at the same time for feature selection, instance selection and other data reduction problems. In particular, we consider the usage of evolutionary algorithms with all kinds of machine learning tools, such as neural networks and fuzzy systems. All our examples prove that Evolutionary Machine Learning is becoming more and more important in data analysis and we expect to see the further development of this field especially in respect to Big Data.
NASA Astrophysics Data System (ADS)
Khaimovich, A. I.; Khaimovich, I. N.
2018-01-01
The article provides calculation algorithms for blank design and die-forming tooling fit used to produce compressor blades for aircraft engines. The design system proposed in the article generates drafts of trimming and reducing dies automatically, leading to a significant reduction in production preparation time. A detailed analysis of the structural features of the blade elements was carried out; the adopted constraints and technological solutions made it possible to form generalized algorithms for shaping the die parting face over the entire contour of the engraving for different die-forging configurations. The authors developed algorithms and programs to calculate the three-dimensional point locations describing the configuration of the die cavity.
The influence of the war on perinatal and maternal mortality in Bosnia and Herzegovina.
Fatusić, Z; Kurjak, A; Grgić, G; Tulumović, A
2005-10-01
To investigate the influence of the war on perinatal and maternal mortality during the war conflict in Bosnia and Herzegovina. In a retrospective study we analysed perinatal and maternal mortality in the pre-war period (1988-1991), the war period (1992-1995) and the post-war period (1996-2003). We also analysed the number of deliveries, the perinatal and maternal mortality rates and their causes. Over the analysed period there were 3337-6912 deliveries per year, with fewer deliveries in the war period. During the war period and immediately after the war, the perinatal mortality rate increased to 20.9-26.3‰ (average 24.28‰). After the war the rate decreased to 8.01‰ in 2003 (p < 0.05). Maternal mortality before the war was 39/100,000 deliveries; during the war it increased to 65/100,000, and after the war it decreased to 12/100,000 deliveries (p < 0.05). The increase in maternal mortality during the war was caused by an increased number of uterine ruptures, sepsis and bleeding due to shell injuries of pregnant women. During war, a decreased number of deliveries and increased rates of perinatal and maternal mortality and preterm delivery can be expected, owing to inadequate nutrition, stress factors (life in refugee centers, bombing, deaths of relatives, an uncertain future...) and the breakdown of the perinatal care system (lack of medical staff, the impossibility of collecting valid health records, particularly perinatal information, and the destruction of medical buildings).
Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert M.
2013-01-01
A new regression model search algorithm was developed that may be applied to both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The algorithm is a simplified version of a more complex algorithm that was originally developed for the NASA Ames Balance Calibration Laboratory. The new algorithm performs regression model term reduction to prevent overfitting of data. It has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a regression model search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression model. Therefore, the simplified algorithm is not intended to replace the original algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new search algorithm.
Online Normalization Algorithm for Engine Turbofan Monitoring
2014-10-02
Lacaille, Jérôme (Snecma, 77550 Moissy-Cramayel, France); Bellas, Anastasios
To understand the behavior of a turbofan engine, one first needs to deal with the variety of data acquisition contexts. Each time a set of measurements is ... it auto-adapts itself with piecewise linear models. Turbofan engine abnormality diagnosis uses three steps: reduction of ...
A wavelet and least square filter based spatial-spectral denoising approach of hyperspectral imagery
NASA Astrophysics Data System (ADS)
Li, Ting; Chen, Xiao-Mei; Chen, Gang; Xue, Bo; Ni, Guo-Qiang
2009-11-01
Noise reduction is a crucial step in hyperspectral imagery pre-processing. Owing to sensor characteristics, the noise of hyperspectral imagery appears in both the spatial and the spectral domain. However, most prevailing denoising techniques process the imagery in only one specific domain and thus do not exploit the multi-domain nature of hyperspectral imagery. In this paper, a new spatial-spectral noise reduction algorithm is proposed, based on wavelet analysis and least squares filtering techniques. First, in the spatial domain, a new stationary wavelet shrinking algorithm with an improved threshold function is used to adjust the noise level band by band. This new algorithm uses BayesShrink for threshold estimation and amends the traditional soft-threshold function by adding shape tuning parameters. Compared with the soft or hard threshold function, the improved one, which is first-order differentiable and has a smooth transitional region between noise and signal, preserves more image edge detail and weakens pseudo-Gibbs artifacts. Then, in the spectral domain, a cubic Savitzky-Golay filter based on the least squares method is used to remove spectral noise and any artificial noise that may have been introduced during the spatial denoising. With the filter window width appropriately selected according to prior knowledge, this algorithm smooths the spectral curve effectively. The performance of the new algorithm is tested on a set of Hyperion imagery acquired in 2007. The results show that the new spatial-spectral denoising algorithm provides a more significant signal-to-noise-ratio improvement than traditional spatial-only or spectral-only methods, while better preserving local spectral absorption features.
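For reference, the standard BayesShrink threshold estimate and the plain soft-threshold rule that the improved threshold function builds on can be sketched as follows (the paper's smooth, shape-tuned threshold function is not reproduced here):

```python
import math
import statistics

def bayes_shrink_threshold(detail_coeffs):
    """BayesShrink: T = sigma_n^2 / sigma_x, with the noise level sigma_n
    estimated from the median absolute detail coefficient (MAD / 0.6745)."""
    sigma_n = statistics.median([abs(c) for c in detail_coeffs]) / 0.6745
    # Signal variance: total variance minus noise variance, floored at ~0.
    var_x = max(statistics.pvariance(detail_coeffs) - sigma_n ** 2, 1e-12)
    return sigma_n ** 2 / math.sqrt(var_x)

def soft_threshold(x, t):
    """Classic soft-threshold: shrink toward zero, exactly zero in [-t, t]."""
    return math.copysign(max(abs(x) - t, 0.0), x)
```

The soft threshold is continuous but not differentiable at |x| = t; the paper's modification adds shape parameters to smooth that transition region.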
Awan, Muaaz Gul; Saeed, Fahad
2017-08-01
Modern high resolution mass spectrometry instruments can generate millions of spectra in a single systems biology experiment. Each spectrum consists of thousands of peaks, but only a small number of peaks actively contribute to the deduction of peptides. Therefore, pre-processing of MS data to detect noisy and non-useful peaks is an active area of research. Most sequential noise-reducing algorithms are impractical to use as a pre-processing step due to high time complexity. In this paper, we present a GPU-based dimensionality-reduction algorithm, called G-MSR, for MS2 spectra. Our proposed algorithm uses novel data structures which optimize the memory and computational operations inside the GPU. These novel data structures include Binary Spectra and Quantized Indexed Spectra (QIS). The former helps in communicating essential information between CPU and GPU using a minimum amount of data, while the latter enables us to store and process a complex 3-D data structure in a 1-D array while maintaining the integrity of the MS data. Our proposed algorithm also takes into account the limited memory of GPUs and switches between in-core and out-of-core modes based upon the size of the input data. G-MSR achieves a peak speed-up of 386x over its sequential counterpart and is shown to process over a million spectra in just 32 seconds. The code for this algorithm is available as GPL open-source on GitHub: https://github.com/pcdslab/G-MSR.
Karam, Elie G; Fayyad, John; Nasser Karam, Aimee; Cordahi Tabet, Caroline; Melhem, Nadine; Mneimneh, Zeina; Dimassi, Hani
2008-01-01
The purpose of this study was to examine the effectiveness and specificity of a classroom-based psychosocial intervention after war. All students (n=2500) of six villages in Southern Lebanon designated as most heavily exposed to war received a classroom-based intervention delivered by teachers, consisting of cognitive-behavioural and stress inoculation training strategies. A random sample of treated students (n=101) and a matched control group (n=93) were assessed one month post-war and one year later. Mental disorders and psychosocial stressors were assessed using the Diagnostic Interview for Children and Adolescents - Revised with children and parents. War exposure was measured using the War Events Questionnaire. The prevalence of major depressive disorder (MDD), separation anxiety disorder (SAD) and post-traumatic stress disorder (PTSD) was examined pre-war, one month post-war (pre-intervention), and one year post-war. Specificity of treatment was determined by rating teachers' therapy diaries. The rates of disorders peaked one month post-war and decreased over one year. There was no significant effect of the intervention on the rates of MDD, SAD or PTSD. Post-war MDD, SAD and PTSD were associated with pre-war SAD and PTSD, family violence parameters, financial problems and witnessing war events. These findings have significant policy and public health implications, given current practices of delivering universal interventions immediately post-war.
Objective Measures of Listening Effort: Effects of Background Noise and Noise Reduction
ERIC Educational Resources Information Center
Sarampalis, Anastasios; Kalluri, Sridhar; Edwards, Brent; Hafter, Ervin
2009-01-01
Purpose: This work is aimed at addressing a seeming contradiction related to the use of noise-reduction (NR) algorithms in hearing aids. The problem is that although some listeners claim a subjective improvement from NR, it has not been shown to improve speech intelligibility, often even making it worse. Method: To address this, the hypothesis…
The Development of Mobile Application to Introduce Historical Monuments in Manado
NASA Astrophysics Data System (ADS)
Rupilu, Moshe Markhasi; Suyoto; Santoso, Albertus Joko
2018-02-01
Learning the historical value of a monument is important because it preserves cultural and historical values, as well as expanding our personal insight. In Indonesia, particularly in Manado, North Sulawesi, there are many monuments. The monuments are erected for history, religion, culture and past war; however, this background is not written in detail on the monuments themselves. To get information on a specific monument, a manual search was required, i.e. asking related people or sources. Based on this problem, an application was needed that utilizes the LBS (Location Based Service) method and algorithmic methods specifically designed for mobile devices such as smartphones, so that information on every monument in Manado can be displayed in detail using GPS coordinates. The application was developed with the KNN method, the K-means algorithm and collaborative filtering to recommend monument information to tourists. Tourists get recommended options filtered by distance. This method was also used to look for the monument closest to the user: the KNN algorithm determines the closest location by comparing the longitude and latitude of the monuments a tourist wants to visit. With this application, tourists who want to find information on monuments in Manado can do so easily and quickly, because monument information is recommended directly to the user without requiring a manual selection. Moreover, tourists can see recommended monument information and search several monuments in Manado in real time.
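The distance step the abstract describes (comparing user and monument latitude/longitude to find the k closest) can be sketched with the great-circle distance. The monument coordinates and k value below are made up for illustration; the paper's actual app adds K-means and collaborative filtering on top of this.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))  # Earth radius ~6371 km

def k_nearest_monuments(user_lat, user_lon, monuments, k=3):
    """monuments: list of (name, lat, lon); returns the k closest to the user."""
    return sorted(monuments,
                  key=lambda m: haversine_km(user_lat, user_lon, m[1], m[2]))[:k]
```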
The Effect of War on Children.
ERIC Educational Resources Information Center
Goldson, Edward
1996-01-01
This paper discusses the effects of modern war on children in the 20th century, focusing on direct and indirect effects of World War II, Vietnam War, war in Afghanistan, conflicts in Africa and in Central America, and Persian Gulf War. The paper notes the devastating effects on children of disruption of education and other public services in…
Resurrecting Limited War Theory
2008-05-01
Subject terms: Limited War, Political Objectives, Total War. ...conflict between other nations may require the United States to act indirectly with an appreciation of the principles and guidelines for limited war ...in war, echoing Clausewitz's principle of political primacy. Like Clausewitz, he was also a student of...
Women and War, Children and War: Stretching the Bonds of Caregiving.
ERIC Educational Resources Information Center
McNamee, Abigail S.
Many things stretch the bonds between caregiver and child, such as war, stress, and trauma. This paper reviews the literature on children who are in direct contact with war or indirect contact with war through television or others' conversations. It also describes the effects of war on children and their families, and children's psychological…
ERIC Educational Resources Information Center
Cooper, B. Lee
1992-01-01
Explores the differing lyrical perceptions of war and military activity depicted in popular songs during World War I, World War II, the Vietnam War, and the Persian Gulf War. The role of music in reinforcing patriotism is discussed, as well as the antiwar sentiment of the Vietnam era. (31 references) (LRW)
Climatic Effects of Regional Nuclear War
NASA Technical Reports Server (NTRS)
Oman, Luke D.
2011-01-01
We use a modern climate model and new estimates of smoke generated by fires in contemporary cities to calculate the response of the climate system to a regional nuclear war between emerging third world nuclear powers using 100 Hiroshima-size bombs (less than 0.03% of the explosive yield of the current global nuclear arsenal) on cities in the subtropics. We find significant cooling and reductions of precipitation lasting years, which would impact the global food supply. The climate changes are large and long-lasting because the fuel loadings in modern cities are quite high and the subtropical solar insolation heats the resulting smoke cloud and lofts it into the high stratosphere, where removal mechanisms are slow. While the climate changes are less dramatic than found in previous "nuclear winter" simulations of a massive nuclear exchange between the superpowers, because less smoke is emitted, the changes seem to be more persistent because of improvements in representing aerosol processes and microphysical/dynamical interactions, including radiative heating effects, in newer global climate system models. The assumptions and calculations that go into these conclusions will be described.
Common sense and nuclear peace
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zacharias, J.R.; Gordon, M.; Davis, S.R.
The authors note that arms control, arms limitation, and arms reduction, though moving in the right direction, are not sufficient. Possible escalation of a small-scale war between nuclear superpowers or of an accident into a major conflagration is our greatest worry, they think, even though there is no justification for ever resorting to nuclear weaponry. We must find new ways to prevent and resolve conflict among nations other than military solutions. Further, the stakes are too high to leave the settling of international disputes in the hands of the world's military establishments, or even its anti-military establishments. To reduce and then eliminate the risk of nuclear war will require a profound change in attitude, both here and in the Soviet Union. The authors feel we must both learn to bargain and barter about the things that really matter. We must realize that we cannot compete on both economic and military turfs simultaneously, because they require different attitudes and policies. Good treaties based on commonality of interest make good neighbors, and can lead to a less-belligerent world. 17 references.
Nagarajan, Mahesh B.; Huber, Markus B.; Schlossbauer, Thomas; Leinsinger, Gerda; Krol, Andrzej; Wismüller, Axel
2014-01-01
Objective While dimension reduction has been previously explored in computer aided diagnosis (CADx) as an alternative to feature selection, previous implementations of its integration into CADx do not ensure strict separation between training and test data required for the machine learning task. This compromises the integrity of the independent test set, which serves as the basis for evaluating classifier performance. Methods and Materials We propose, implement and evaluate an improved CADx methodology where strict separation is maintained. This is achieved by subjecting the training data alone to dimension reduction; the test data is subsequently processed with out-of-sample extension methods. Our approach is demonstrated in the research context of classifying small diagnostically challenging lesions annotated on dynamic breast magnetic resonance imaging (MRI) studies. The lesions were dynamically characterized through topological feature vectors derived from Minkowski functionals. These feature vectors were then subject to dimension reduction with different linear and non-linear algorithms applied in conjunction with out-of-sample extension techniques. This was followed by classification through supervised learning with support vector regression. Area under the receiver-operating characteristic curve (AUC) was evaluated as the metric of classifier performance. Results Of the feature vectors investigated, the best performance was observed with Minkowski functional ’perimeter’ while comparable performance was observed with ’area’. Of the dimension reduction algorithms tested with ’perimeter’, the best performance was observed with Sammon’s mapping (0.84 ± 0.10) while comparable performance was achieved with exploratory observation machine (0.82 ± 0.09) and principal component analysis (0.80 ± 0.10). 
Conclusions The results reported in this study with the proposed CADx methodology present a significant improvement over previous results reported with such small lesions on dynamic breast MRI. In particular, non-linear algorithms for dimension reduction exhibited better classification performance than linear approaches, when integrated into our CADx methodology. We also note that while dimension reduction techniques may not necessarily provide an improvement in classification performance over feature selection, they do allow for a higher degree of feature compaction. PMID:24355697
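The train/test-separation principle the study enforces can be illustrated for the linear case: fit the dimension reduction on training data only, then map test points with the learned parameters (the linear analogue of the out-of-sample extension used for the non-linear methods). This is a generic sketch, not the study's code; all names are ours.

```python
import numpy as np

def fit_pca(X_train, n_components):
    """Learn a PCA projection from training data only (strict separation)."""
    mu = X_train.mean(axis=0)
    _, _, vt = np.linalg.svd(X_train - mu, full_matrices=False)
    return mu, vt[:n_components]              # parameters depend on train only

def out_of_sample_transform(X, mu, components):
    """Project new (test) data with the already-learned parameters."""
    return (X - mu) @ components.T            # test data never influences the fit

# Tiny synthetic demonstration of the workflow.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(80, 10))
X_test = rng.normal(size=(20, 10))
mu, comps = fit_pca(X_train, n_components=3)
Z_test = out_of_sample_transform(X_test, mu, comps)
```

Because the mean and components come from the training split alone, the held-out set stays a valid basis for estimating classifier AUC.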
Issues for Future Nuclear Arms Control
NASA Astrophysics Data System (ADS)
Davis, Jay
2011-04-01
Ratification of the New START treaty may open the door to a path of progressive negotiations that could lead to systematic reduction of the numbers of deployed and reserve nuclear weapons. Those negotiations will require more than merely resolving technical, operational and policy questions. Their success will also demand adding successively larger numbers of partners and the building of trust among parties who have not been involved in such agreements before. At some point, questions of conventional arms limitations and larger confidence building steps will inevitably arise. Jay Davis, who last year chaired an APS/POPA study of technology issues for future nuclear arms control agreements, will outline the path, opportunities, and obstacles that lie ahead. Davis was an UNSCOM inspector in Iraq after the First Gulf War and the first director of the Defense Threat Reduction Agency.
NASA Astrophysics Data System (ADS)
Narimani, M.; Sadeghieh Ahari, S.; Rajabi, S.
This research aims to determine the efficacy of, and compare, two therapeutic methods: Eye Movement Desensitization and Reprocessing (EMDR) and Cognitive Behavioral Therapy (CBT), for reduction of anxiety and depression in Iranian combatants afflicted with post-traumatic stress disorder (PTSD) after the imposed war. The statistical population of the current study comprises combatants afflicted with PTSD who were hospitalized in Isar Hospital of Ardabil province or lived in Ardabil. These persons were selected through simple random sampling and were randomly assigned to three groups. The study used an experimental method with a multi-group test-retest design. Instruments included the Hospital Anxiety and Depression Scale. This survey showed that application of EMDR and CBT caused a significant reduction of anxiety and depression.
Kim, Hyoungrae; Jang, Cheongyun; Yadav, Dharmendra K; Kim, Mi-Hyun
2017-03-23
The accuracy of any 3D-QSAR, pharmacophore or 3D-similarity-based chemometric target-fishing model is highly dependent on a reasonable sample of active conformations. A number of diverse conformational sampling algorithms exist that exhaustively generate enough conformers; model-building methods, however, rely on an explicit number of common conformers. In this work, we have attempted to build clustering algorithms that automatically find a reasonable number of representative conformer ensembles from an asymmetric dissimilarity matrix generated with the OpenEye toolkit. RMSD was the key descriptor (variable): each column of the N × N matrix was treated as one of N variables describing the relationship (network) between one conformer (in a row) and the other N conformers. This approach was used to evaluate the performance of well-known clustering algorithms by comparing how well they generate representative conformer ensembles, and to test them over different matrix transformation functions with respect to stability. In the network, the representative conformer group could be resampled by four kinds of algorithms with implicit parameters. The directed dissimilarity matrix becomes the only input to the clustering algorithms. The Dunn index, Davies-Bouldin index, eta-squared values and omega-squared values were used to evaluate the clustering algorithms with respect to compactness and explanatory power. The evaluation also includes the reduction (abstraction) rate of the data, the correlation between the sizes of the population and the samples, the computational complexity and the memory usage. Every algorithm could find representative conformers automatically without any user intervention, and they reduced the data to 14-19% of the original values within 1.13 s per sample at the most. The clustering methods are simple and practical, as they are fast and do not require any explicit parameters.
RCDTC presented the maximum Dunn and omega-squared values of the four algorithms in addition to consistent reduction rate between the population size and the sample size. The performance of the clustering algorithms was consistent over different transformation functions. Moreover, the clustering method can also be applied to molecular dynamics sampling simulation results.
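As a sketch of one of the evaluation measures named above, the Dunn index can be computed directly from a precomputed dissimilarity matrix. For simplicity the sketch assumes a symmetric matrix (the paper's matrix is asymmetric, so this is an illustrative simplification, not the paper's exact computation).

```python
import numpy as np

def dunn_index(D, labels):
    """Dunn index from a symmetric dissimilarity matrix D and cluster labels.

    Higher is better: smallest between-cluster distance divided by the
    largest within-cluster diameter.
    """
    labels = np.asarray(labels)
    ids = np.unique(labels)
    min_between = np.inf   # closest pair of points in different clusters
    max_within = 0.0       # largest intra-cluster diameter
    for i, ci in enumerate(ids):
        mi = labels == ci
        max_within = max(max_within, D[np.ix_(mi, mi)].max())
        for cj in ids[i + 1:]:
            mj = labels == cj
            min_between = min(min_between, D[np.ix_(mi, mj)].min())
    return min_between / max_within
```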
Conley, Marguerite; Le Fevre, Lauren; Haywood, Cilla; Proietto, Joseph
2018-02-01
The 5:2 diet (two non-consecutive days of 2460 kJ (600 calories) and 5 days of ad libitum eating per week) is becoming increasingly popular. This pilot study aimed to determine whether the 5:2 diet can achieve ≥5% weight loss and greater improvements in weight and biochemical markers than a standard energy-restricted diet (SERD) in obese male war veterans. A total of 24 participants were randomised to consume either the 5:2 diet or a SERD (2050 kJ (500 calorie) reduction per day) for 6 months. Weight, waist circumference (WC), fasting blood glucose, blood lipids, blood pressure and dietary intake were measured at baseline, 3 and 6 months by a blinded investigator. After 6 months, participants in both groups significantly reduced body weight (P < 0.001), WC (P < 0.001) and systolic blood pressure (P = 0.001). Mean weight loss was 5.3 ± 3.0 kg (5.5 ± 3.2%) for the 5:2 group and 5.5 ± 4.3 kg (5.4 ± 4.2%) for the SERD group. Mean WC reduction was 8.0 ± 4.5 cm for the 5:2 group and 6.4 ± 5.8 cm for the SERD group. There was no significant difference in the amount of weight loss or WC reduction between diet groups. There was no significant change in diastolic blood pressure, fasting blood glucose or blood lipids in either dietary group. Results suggest that the 5:2 diet is a successful but not superior weight loss approach in male war veterans when compared to a SERD. Future research is needed to determine the long-term effectiveness of the 5:2 diet and its effectiveness in other population groups. © 2017 Dietitians Association of Australia.
All-Cause Mortality Among US Veterans of the Persian Gulf War: 13-Year Follow-up.
Barth, Shannon K; Kang, Han K; Bullman, Tim
2016-11-01
We determined cause-specific mortality prevalence and risks of Gulf War deployed and nondeployed veterans to determine if deployed veterans were at greater risk than nondeployed veterans for death overall or because of certain diseases or conditions up to 13 years after conflict subsided. Follow-up began when the veteran left the Gulf War theater or May 1, 1991, and ended on the date of death or December 31, 2004. We studied 621 901 veterans who served in the 1990-1991 Persian Gulf War and 746 247 veterans who served but were not deployed during the Gulf War. We used Cox proportional hazard models to calculate rate ratios adjusted for age at entry to follow-up, length of follow-up, race, sex, branch of service, and military unit. We compared the mortality of (1) Gulf War veterans with non-Gulf War veterans and (2) Gulf War army veterans potentially exposed to nerve agents at Khamisiyah in March 1991 with those not exposed. We compared standardized mortality ratios of deployed and nondeployed Gulf War veterans with the US population. Male Gulf War veterans had a lower risk of mortality than male non-Gulf War veterans (adjusted rate ratio [aRR] = 0.97; 95% confidence interval [CI], 0.95-0.99), and female Gulf War veterans had a higher risk of mortality than female non-Gulf War veterans (aRR = 1.15; 95% CI, 1.03-1.28). Khamisiyah-exposed Gulf War army veterans had >3 times the risk of mortality from cirrhosis of the liver than nonexposed army Gulf War veterans (aRR = 3.73; 95% CI, 1.64-8.48). Compared with the US population, female Gulf War veterans had a 60% higher risk of suicide and male Gulf War veterans had a lower risk of suicide (standardized mortality ratio = 0.84; 95% CI, 0.80-0.88). The vital status and mortality risk of Gulf War and non-Gulf War veterans should continue to be investigated.
Vindevogel, Sofie; Schryver, Maarten de; Broekaert, Eric; Derluyn, Ilse
2013-11-01
Armed conflict imposes huge hardship on young people living in war zones. This study assessed former child soldiers' experience and perception of stress in common war events during the armed conflict in northern Uganda and compared it with that of their non-recruited counterparts. The aims were to investigate whether child soldiers experienced more severe exposure to war events, and to explore how war might affect youths differently depending on the co-occurrence of these events. The study was undertaken in four northern Ugandan districts in 22 secondary schools with a sample size of 981 youths, about half of whom had been child soldiers. The participants completed a questionnaire on socio-demographic characteristics and stressful war events which was analyzed using descriptive statistics, a probabilistic index and correlation network analysis. Former child soldiers had significantly greater experience of war events than their non-recruited counterparts. The violence of war is more central in their experience and perception of stress, whereas the scarcity of resources and poor living conditions are most central for non-recruited participants. The extent to which a war event, such as separation from the family, is perceived as stressful depends on the experience and perception of other stressful war events, such as confrontation with war violence for former child soldiers and life in an Internally Displaced Persons' camp for non-recruited participants. The network approach permitted demonstration of the many ways in which war-affected youths encounter and appraise stressful war events. War events might function as moderators or mediators of the effect that other war events exert on the lives and well-being of young people living in war zones. This demands comprehensive and individualized assessment.
Soysa, Champika K; Azar, Sandra T
2016-01-01
Posttraumatic stress disorder (PTSD) in response to active war is understudied among Sinhalese children in Sri Lanka. We investigated PTSD symptom severity in children using child (n = 60) and mother (n = 60) reports; child-reported war exposure and coping; as well as self-reported maternal PTSD symptom severity. The study addressed active war in 2 rural locations (acute and chronic community war exposure). Child-reports were significantly greater than mother-reports of child PTSD symptom severity. Furthermore, children's war exposure, child-reported and mother-reported child PTSD symptom severity, and maternal PTSD symptom severity were significantly greater in the acute versus chronic community war exposure location, but children's approach and avoidance coping did not significantly differ, indicating a potential ceiling effect. Children's war exposure significantly, positively predicted child-reported child PTSD symptom severity, controlling for age, gender, and maternal PTSD symptom severity, but only maternal PTSD symptom severity significantly, positively predicted mother-reported child PTSD symptom severity. Avoidance coping (in both acute and chronic war) significantly positively mediated the children's war exposure-child-reported child PTSD symptom severity relation, but not mother-reports of the same. Approach coping (in chronic but not acute war) significantly, positively mediated the children's war exposure-child-reported and mother-reported child PTSD symptom severity relations. We advanced the literature on long-term active war by confirming the value of children's self-reports, establishing that both approach and avoidance coping positively mediated the war exposure-PTSD symptom severity relation, and that the mediation effect of approach coping was situationally moderated by acute versus chronic community war exposure among Sri Lankan children. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Ciany, Charles M.; Zurawski, William; Kerfoot, Ian
2001-10-01
The performance of Computer Aided Detection/Computer Aided Classification (CAD/CAC) fusion algorithms on side-scan sonar images was evaluated using data taken at the Navy's Fleet Battle Exercise-Hotel held in Panama City, Florida, in August 2000. A 2-of-3 binary fusion algorithm is shown to provide robust performance. The algorithm accepts the classification decisions and associated contact locations from three different CAD/CAC algorithms, clusters the contacts based on Euclidean distance, and then declares a valid target when a clustered contact is declared by at least 2 of the 3 individual algorithms. This simple binary fusion provided a 96 percent probability of correct classification at a false alarm rate of 0.14 false alarms per image per side. The performance represented a 3.8:1 reduction in false alarms over the best performing single CAD/CAC algorithm, with no loss in probability of correct classification.
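The 2-of-3 rule described above can be sketched directly: cluster contacts by Euclidean distance, then declare a target when a cluster contains detections from at least two distinct algorithms. The clustering radius and data layout below are illustrative assumptions, not values from the paper.

```python
import math

def fuse_contacts(detections, radius=5.0, min_votes=2):
    """2-of-3 style binary fusion.

    detections: list of (algorithm_id, x, y) contact reports.
    Returns fused target positions (cluster centroids) backed by at least
    `min_votes` distinct algorithms.
    """
    clusters = []
    for det in detections:
        for cluster in clusters:           # greedy single-linkage clustering
            if any(math.dist(det[1:], d[1:]) <= radius for d in cluster):
                cluster.append(det)
                break
        else:
            clusters.append([det])
    targets = []
    for cluster in clusters:
        if len({d[0] for d in cluster}) >= min_votes:   # distinct algorithms
            xs = [d[1] for d in cluster]
            ys = [d[2] for d in cluster]
            targets.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return targets
```

A contact seen by only one algorithm never becomes a target, which is exactly how the fusion suppresses single-algorithm false alarms.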
NASA Technical Reports Server (NTRS)
Stephens, J. B.
1976-01-01
The National Aeronautics and Space Administration/Marshall Space Flight Center multilayer diffusion algorithms have been specialized for the prediction of the surface impact for the dispersive transport of the exhaust effluents from the launch of a Delta-Thor vehicle. This specialization permits these transport predictions to be made at the launch range in real time so that the effluent monitoring teams can optimize their monitoring grids. Basically, the data reduction routine requires only the meteorology profiles for the thermodynamics and kinematics of the atmosphere as an input. These profiles are graphed along with the resulting exhaust cloud rise history, the centerline concentrations and dosages, and the hydrogen chloride isopleths.
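The abstract does not give the multilayer model's equations; as background, dispersion models of this family build on the classical Gaussian plume relation, sketched below. This is the textbook formula with ground reflection, not the NASA/MSFC algorithm itself, and all parameter values in the test are illustrative.

```python
import math

def plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration (mass per volume).

    Q: source strength, u: wind speed, y: crosswind offset, z: height,
    H: effective release height, sigma_y/sigma_z: crosswind/vertical
    dispersion coefficients at the downwind distance of interest.
    """
    lateral = math.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (math.exp(-(z - H) ** 2 / (2 * sigma_z ** 2))
                + math.exp(-(z + H) ** 2 / (2 * sigma_z ** 2)))  # image source
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical
```

The multilayer approach extends this idea by stacking layers with distinct meteorology, which is why the routine above needs only thermodynamic and kinematic profiles as input.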
Automatic welding detection by an intelligent tool pipe inspection
NASA Astrophysics Data System (ADS)
Arizmendi, C. J.; Garcia, W. L.; Quintero, M. A.
2015-07-01
This work provides a model based on machine learning techniques for weld recognition, using signals obtained through an in-line inspection tool called a “smart pig” in oil and gas pipelines. The model uses a signal noise reduction phase by means of pre-processing algorithms and attribute-selection techniques. The noise reduction techniques were selected after a literature review and testing with survey data. Subsequently, the model was trained using recognition and classification algorithms, specifically artificial neural networks and support vector machines. Finally, the trained model was validated with different data sets and the performance was measured with cross validation and ROC analysis. The results show that it is possible to identify welds automatically with an efficiency between 90 and 98 percent.
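The pre-processing phase can be sketched with a simple pipeline: smooth the raw signal, then flag samples whose amplitude stands out from the baseline as weld candidates. This is an illustrative stand-in for the paper's noise-reduction stage (the actual classifiers are neural networks and SVMs); the window size and threshold factor are assumptions.

```python
import numpy as np

def denoise(signal, window=5):
    """Moving-average smoother: a minimal noise-reduction pre-processing step."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def candidate_welds(signal, window=5, k=3.0):
    """Return sample indices whose smoothed amplitude exceeds mean + k*std."""
    s = denoise(signal, window)
    return np.flatnonzero(s > s.mean() + k * s.std())
```

In the real pipeline these candidate windows would then be featurized and passed to the trained classifier.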
Using High Resolution Design Spaces for Aerodynamic Shape Optimization Under Uncertainty
NASA Technical Reports Server (NTRS)
Li, Wu; Padula, Sharon
2004-01-01
This paper explains why high resolution design spaces encourage traditional airfoil optimization algorithms to generate noisy shape modifications, which lead to inaccurate linear predictions of aerodynamic coefficients and potential failure of descent methods. By using auxiliary drag constraints for a simultaneous drag reduction at all design points and the least shape distortion to achieve the targeted drag reduction, an improved algorithm generates relatively smooth optimal airfoils with no severe off-design performance degradation over a range of flight conditions, in high resolution design spaces parameterized by cubic B-spline functions. Simulation results using FUN2D in Euler flows are included to show the capability of the robust aerodynamic shape optimization method over a range of flight conditions.
NASA Astrophysics Data System (ADS)
Yang, Huanhuan; Gunzburger, Max
2017-06-01
Simulation-based optimization of acoustic liner design in a turbofan engine nacelle for noise reduction purposes can dramatically reduce the cost and time needed for experimental designs. Because uncertainties are inevitable in the design process, a stochastic optimization algorithm is posed based on the conditional value-at-risk measure so that an ideal acoustic liner impedance is determined that is robust in the presence of uncertainties. A parallel reduced-order modeling framework is developed that dramatically improves the computational efficiency of the stochastic optimization solver for a realistic nacelle geometry. The reduced stochastic optimization solver takes less than 500 seconds to execute. In addition, well-posedness and finite element error analyses of the state system and optimization problem are provided.
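The risk measure the optimization is posed against, conditional value-at-risk (CVaR), is the expected loss in the worst (1 - alpha) tail of outcomes. Below is one common empirical estimator from samples; it is background for the measure, not the paper's stochastic optimization solver.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Empirical conditional value-at-risk: mean loss in the worst (1-alpha) tail."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)      # value-at-risk at level alpha
    return losses[losses >= var].mean()   # average of the tail beyond VaR
```

Optimizing liner impedance against CVaR rather than the mean makes the design robust: it penalizes the bad-case noise outcomes under uncertainty, not just the average one.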
[The war at home: "war amenorrhea" in the First World War].
Stukenbrock, Karin
2008-01-01
In 1917, the Göttingen gynaecologist Dietrich published a short article about a phenomenon which he called "war amenorrhea" ("Kriegsamenorrhoe"). The article attracted the attention of his colleagues. While the affected women did not pay much attention to their amenorrhea, the physicians considered the phenomenon a new disease which was mainly caused by the war. This new disease gave the gynaecologists the opportunity to present their specialty as a discipline with high relevance for medicine in times of war. Nevertheless, there was no consensus about the importance, the incidence, the diagnostic criteria, the causes and the appropriate therapy of "war amenorrhea". Although the gynaecologists failed to define a uniform clinical syndrome, they maintained the construction of "war amenorrhea" after the war and subsumed it under well-known types of amenorrhea. We can conclude that under the conditions of war a new disease emerged which was not sharply defined.
Ohura, Takehiko; Sanada, Hiromi; Mino, Yoshio
2004-01-01
In recent years, the concept of cost-effectiveness, including medical delivery and health service fee systems, has become widespread in Japanese health care. In the field of pressure ulcer management, the recent introduction of penalty subtraction in the care fee system emphasizes the need for prevention and cost-effective care of pressure ulcers. Previous cost-effectiveness research on pressure ulcer management tended to focus only on "hardware" costs such as those for pharmaceuticals and medical supplies, while neglecting other cost aspects, particularly the cost of labor. Thus, cost-effectiveness in pressure ulcer care has not yet been fully established. To provide true cost-effectiveness data, a comparative prospective study was initiated in patients with stage II and III pressure ulcers. Considering the potential impact of the pressure reduction mattress on clinical outcome, the same type of pressure reduction mattress was utilized in all cases in the study. The cost analysis method used was Activity-Based Costing, which measures material and labor cost aspects on a daily basis. A reduction in the Pressure Sore Status Tool (PSST) score was used to measure clinical effectiveness. Patients were divided into three groups based on the treatment method and on the use of a consistent algorithm of wound care: 1. MC/A group, modern dressings with a treatment algorithm (control cohort). 2. TC/A group, traditional care (ointment and gauze) with a treatment algorithm. 3. TC/NA group, traditional care (ointment and gauze) without a treatment algorithm. The results revealed that MC/A is more cost-effective than both TC/A and TC/NA. This suggests that appropriate utilization of modern dressing materials and a pressure ulcer care algorithm would contribute to reduced health care costs, improved clinical results, and, ultimately, greater cost-effectiveness.
NASA Astrophysics Data System (ADS)
Yousefian Jazi, Nima
Spatial filtering and directional discrimination have been shown to be an effective pre-processing approach for noise reduction in microphone array systems. In dual-microphone hearing aids, fixed and adaptive beamforming techniques are the most common solutions for enhancing the desired speech and rejecting unwanted signals captured by the microphones. In fact, beamformers are widely utilized in systems where the spatial properties of the target source (usually in front of the listener) are assumed to be known. In this dissertation, some dual-microphone coherence-based speech enhancement techniques applicable to hearing aids are proposed. All proposed algorithms operate in the frequency domain and (like traditional beamforming techniques) are purely based on the spatial properties of the desired speech source; they do not require any knowledge of noise statistics for calculating the noise reduction filter. This benefit gives our algorithms the ability to address adverse noise conditions, such as situations where an interfering talker speaks simultaneously with the target speaker. In such cases, adaptive beamformers lose their effectiveness in suppressing interference, since the noise channel (reference) cannot be built and updated accordingly. This difference is the main advantage of the proposed techniques over traditional adaptive beamformers. Furthermore, since the suggested algorithms are independent of noise estimation, they offer significant improvement in scenarios where the power level of the interfering sources is much greater than that of the target speech. The dissertation also shows that the premise behind the proposed algorithms can be extended and employed in binaural hearing aids. The main purpose of the investigated techniques is to enhance the intelligibility level of speech, measured through subjective listening tests with normal hearing and cochlear implant listeners.
However, the improvement in quality of the output speech achieved by the algorithms are also presented to show that the proposed methods can be potential candidates for future use in commercial hearing aids and cochlear implant devices.
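The coherence statistic these enhancers build on can be sketched briefly. Below is a minimal estimate of the magnitude-squared coherence (MSC) between two microphone signals from non-overlapping FFT frames; coherence-based gains favor high-MSC bins (directional target) over low-MSC bins (diffuse or competing noise) without any noise-statistics estimate. This is a generic sketch, not the dissertation's specific gain rule; the frame length and averaging scheme are illustrative assumptions.

```python
import numpy as np

def msc(x, y, nfft=256):
    """Per-bin magnitude-squared coherence from non-overlapping FFT frames.
    Values near 1 indicate a coherent (directional) source at that bin;
    values near 0 indicate incoherent/diffuse content."""
    frames = len(x) // nfft
    X = np.fft.rfft(x[:frames * nfft].reshape(frames, nfft), axis=1)
    Y = np.fft.rfft(y[:frames * nfft].reshape(frames, nfft), axis=1)
    Sxy = (X * np.conj(Y)).mean(axis=0)        # cross-spectrum estimate
    Sxx = (np.abs(X) ** 2).mean(axis=0)        # auto-spectra estimates
    Syy = (np.abs(Y) ** 2).mean(axis=0)
    return np.abs(Sxy) ** 2 / (Sxx * Syy + 1e-12)
```

A typical use is to map MSC through a soft gain function per frequency bin before resynthesis.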
Caldarelli, G; Troiano, G; Rosadini, D; Nante, N
2017-01-01
The available laboratory tests for the differential diagnosis of prostate cancer are the total PSA, the free PSA, and the free/total PSA ratio. In Italy, most doctors tend to request both total and free PSA for their patients even in cases where the total PSA does not justify the further request of free PSA, with a consequent growth in costs for the National Health System. The aim of our study was to predict the savings in euros (due to reagents) and the reduction in free PSA tests from applying the "PSA Reflex" algorithm. We calculated the number of total PSA and free PSA exams performed in 2014 in the Hospital of Grosseto and, simulating the application of the "PSA Reflex" algorithm in the same year, we calculated the decrease in the number of free PSA requests and predicted the euro savings in reagents obtained from this reduction. In 2014, 25,955 total PSA tests were performed in the Hospital of Grosseto: 3,631 (14%) were greater than 10 ng/mL; 7,686 (29.6%) between 2 and 10 ng/mL; and 14,638 (56.4%) lower than 2 ng/mL. A total of 16,904 free PSA tests were performed. Simulating the use of the "PSA Reflex" algorithm, free PSA tests would be performed only in cases with total PSA values between 2 and 10 ng/mL, with a saving of 54.5% of free PSA exams and of 8,971 euros for reagents alone. Our study showed that the "PSA Reflex" algorithm is a valid alternative leading to a reduction of costs. The estimated intralaboratory savings due to reagents seem modest; however, they are followed by additional savings in the other diagnostic processes for prostate cancer.
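The reflex rule described in the abstract is simple enough to sketch directly: free PSA is added only when total PSA falls in the 2-10 ng/mL gray zone. The counts below are the Grosseto 2014 figures from the abstract; the function name and the exact boundary handling (inclusive limits) are illustrative assumptions.

```python
def needs_free_psa(total_psa_ng_ml: float) -> bool:
    """PSA Reflex rule: request free PSA only when total PSA lies
    in the diagnostic gray zone of 2-10 ng/mL."""
    return 2.0 <= total_psa_ng_ml <= 10.0

# Figures from the 2014 Grosseto data reported in the abstract:
performed_free_psa = 16904   # free PSA tests actually run
reflex_free_psa = 7686       # totals falling in the 2-10 ng/mL gray zone
saving = 1 - reflex_free_psa / performed_free_psa   # ~0.545, the 54.5% saving
```

The computed fraction reproduces the 54.5% reduction in free PSA exams reported in the abstract.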
Favazza, Christopher P; Ferrero, Andrea; Yu, Lifeng; Leng, Shuai; McMillan, Kyle L; McCollough, Cynthia H
2017-07-01
The use of iterative reconstruction (IR) algorithms in CT generally decreases image noise and enables dose reduction. However, the amount of dose reduction possible using IR without sacrificing diagnostic performance is difficult to assess with conventional image quality metrics. Through this investigation, achievable dose reduction using a commercially available IR algorithm without loss of low contrast spatial resolution was determined with a channelized Hotelling observer (CHO) model and used to optimize a clinical abdomen/pelvis exam protocol. A phantom containing 21 low contrast disks-three different contrast levels and seven different diameters-was imaged at different dose levels. Images were created with filtered backprojection (FBP) and IR. The CHO was tasked with detecting the low contrast disks. CHO performance indicated dose could be reduced by 22% to 25% without compromising low contrast detectability (as compared to full-dose FBP images) whereas 50% or more dose reduction significantly reduced detection performance. Importantly, default settings for the scanner and protocol investigated reduced dose by upward of 75%. Subsequently, CHO-based protocol changes to the default protocol yielded images of higher quality and doses more consistent with values from a larger, dose-optimized scanner fleet. CHO assessment provided objective data to successfully optimize a clinical CT acquisition protocol.
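The CHO used in this study can be outlined compactly: images are projected onto a small set of channels, a Hotelling template is formed from the channelized class statistics, and a detectability index summarizes low-contrast detection performance. The sketch below is a generic, minimal CHO, assuming unit-variance noise and a user-supplied channel matrix; the authors' channel choice and internal-noise handling are not specified here.

```python
import numpy as np

def cho_dprime(signal_imgs, noise_imgs, channels):
    """Minimal channelized Hotelling observer (CHO) sketch.
    signal_imgs, noise_imgs: (n_samples, n_pixels) flattened images;
    channels: (n_pixels, n_channels) channel matrix (e.g. Gabor bank)."""
    vs = signal_imgs @ channels                # channelized responses
    vn = noise_imgs @ channels
    ms, mn = vs.mean(axis=0), vn.mean(axis=0)
    # pooled intra-class covariance of the channel outputs
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    w = np.linalg.solve(S, ms - mn)            # Hotelling template
    ts, tn = vs @ w, vn @ w                    # template responses
    return (ts.mean() - tn.mean()) / np.sqrt(0.5 * (ts.var() + tn.var()))
```

Comparing d' across dose levels and reconstruction algorithms, as in the study, then reduces to repeating this computation on image sets from each condition.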
First War Syndrome: Military Culture, Professionalization, and Counterinsurgency Doctrine
2010-02-01
in Ivan Musicant, The Banana Wars: A History of U.S. Intervention in Latin America from the Spanish-A merican War to the Invasion of Panama, (New...the " banana wars," many Marines grew very comfortable with these conflicts and Corps’ role as de facto imperial police force. 2 289 Millet, pp. 278-280...focus on the " banana wars" under the leadership of World War I veterans like John Lejeune. See also Bickel, pg. 54-55. 157 The war predictably disrupted
Bouncing Back From War Trauma: Resiliency in Global War on Terror’s Wounded Warriors
2015-02-11
Casualties of the Global War on Terror and Their Future Impact on Health Care and Society: A Looming Public Health Crisis .” Military Medicine 179 (April...Casualties of the Global War on Terror and Their Future Impact on Health Care and Society: A Looming Public Health Crisis .” Military Medicine 179...AIR WAR COLLEGE AIR UNIVERSITY BOUNCING BACK FROM WAR TRAUMA: RESILIENCY IN GLOBAL WAR ON TERROR’S WOUNDED WARRIORS by Katherine H
Lande, R Gregory
2008-06-01
This article explores America's historical experience with medical disability compensation programs during the Revolutionary War and the Civil War. Contemporary newspaper reports, complemented by book and journal articles, provide an understanding of the medical disability compensation programs offered during the Revolutionary War and the Civil War. Military planners, politicians, and service members struggled to develop a fair and balanced medical disability compensation program during the Revolutionary War and the Civil War. Based on America's extensive experience with the Civil War Invalid Corps, an alternative for motivated military personnel could be developed.
Air Force, Cyberpower, Targeting: Airpower Lessons for an Air Force Cyberpower Targeting Theory
2013-06-01
apply in future war. Following World War I, Airmen at the Air Corps Tactical School (ACTS) developed an “Industrial Web Theory” for targeting to...throughout its use. The targeting theory was employed with mixed results from World War II through the Vietnam War. In the late 20th century, Colonel...A review of the Inter-War period, World War II, Korean War, and Desert Storm intends to evaluate airpower targeting theories in order to develop
Schiavo, M; Bagnara, M C; Pomposelli, E; Altrinetti, V; Calamia, I; Camerieri, L; Giusti, M; Pesce, G; Reitano, C; Bagnasco, M; Caputo, M
2013-09-01
Radioiodine is a common option for treatment of hyperfunctioning thyroid nodules. Due to the expected selective radioiodine uptake by adenoma, relatively high "fixed" activities are often used. Alternatively, the activity is individually calculated upon the prescription of a fixed value of target absorbed dose. We evaluated the use of an algorithm for personalized radioiodine activity calculation, which allows as a rule the administration of lower radioiodine activities. Seventy-five patients with single hyperfunctioning thyroid nodule eligible for 131I treatment were studied. The activities of 131I to be administered were estimated by the method described by Traino et al. and developed for Graves' disease, assuming selective and homogeneous 131I uptake by adenoma. The method takes into account 131I uptake and its effective half-life, target (adenoma) volume and its expected volume reduction during treatment. A comparison with the activities calculated by other dosimetric protocols, and the "fixed" activity method was performed. 131I uptake was measured by external counting, thyroid nodule volume by ultrasonography, thyroid hormones and TSH by ELISA. Remission of hyperthyroidism was observed in all but one patient; volume reduction of adenoma was closely similar to that assumed by our model. Effective half-life was highly variable in different patients, and critically affected dose calculation. The administered activities were clearly lower with respect to "fixed" activities and other protocols' prescription. The proposed algorithm proved to be effective also for single hyperfunctioning thyroid nodule treatment and allowed a significant reduction of administered 131I activities, without loss of clinical efficacy.
Cohen-Mazor, Meital; Mathur, Prabodh; Stanley, James R.L.; Mendelsohn, Farrell O.; Lee, Henry; Baird, Rose; Zani, Brett G.; Markham, Peter M.; Rocha-Singh, Krishna
2014-01-01
Objective: To evaluate the safety and effectiveness of different bipolar radiofrequency system algorithms in interrupting the renal sympathetic nerves and reducing renal norepinephrine in a healthy porcine model. Methods: A porcine model (N = 46) was used to investigate renal norepinephrine levels and changes to renal artery tissues and nerves following percutaneous renal denervation with radiofrequency bipolar electrodes mounted on a balloon catheter. Parameters of the radiofrequency system (i.e. electrode length and energy delivery algorithm), and the effects of single and longitudinal treatments along the artery were studied with a 7-day model in which swine received unilateral radiofrequency treatments. Additional sets of animals were used to examine norepinephrine and histological changes 28 days following bilateral percutaneous radiofrequency treatment or surgical denervation; untreated swine were used for comparison of renal norepinephrine levels. Results: Seven days postprocedure, norepinephrine concentrations decreased proportionally to electrode length, with 81, 60 and 38% reductions (vs. contralateral control) using 16, 4 and 2-mm electrodes, respectively. Applying a temperature-control algorithm with the 4-mm electrodes increased efficacy, with a mean 89.5% norepinephrine reduction following a 30-s treatment at 68°C. Applying this treatment along the entire artery length affected more nerves vs. a single treatment, resulting in superior norepinephrine reduction 28 days following bilateral treatment. Conclusion: Percutaneous renal artery application of bipolar radiofrequency energy demonstrated safety and resulted in a significant renal norepinephrine content reduction and renal nerve injury compared with untreated controls in porcine models. PMID:24875181
Persuasive History: A Critical Comparison of Television's "Victory at Sea" and "The World at War."
ERIC Educational Resources Information Center
Mattheisen, Donald J.
1992-01-01
Discusses the television series "Victory at Sea" and "The World at War" and their use in teaching about World War II. Contrasts that war's glorious portrayal in "Victory at Sea" with the more ambiguous presentation of "The World at War." Suggests that students can learn a great deal about war and film itself…
Middle Adolescents' Views of War and American Military Involvement in the Persian Gulf.
ERIC Educational Resources Information Center
Schroeder, Daniel F.; And Others
1993-01-01
Surveyed 189 eleventh graders to assess attitudes toward war in general and attitudes toward Persian Gulf War. Reactions to war were associated with gender, race, and school setting. Students in study were slightly less supportive of war, in general, but more supportive of Gulf War than were past subjects on conflicts of their day (Vietnam and…
Combined Action Platoons in Vietnam
2012-04-27
Action Platoons; The US Marines’ Other War (New York: Praeger Publishers 1989) 2 Ibid. 3 Al Hemingway , Our War Was Different: Marine Combined...USMC Archives: Vietnam War Collection 1954-75 Box 7 folder 25 coll/3808 38 Al Hemingway , Our War Was...Platoons; The US Marines’ Other War (New York: Praeger Publishers 1989), 37 40 Al Hemingway , Our War Was Different: Marine Combined Action Platoons
Palosaari, Esa; Punamäki, Raija-Leena; Qouta, Samir; Diab, Marwan
2013-11-01
We tested the hypothesis that intergenerational effects of parents' war trauma on offspring's attachment and mental health are mediated by psychological maltreatment. Two hundred and forty children and their parents were sampled from a war-prone area, Gaza, Palestine. The parents reported the number and type of traumatic experiences of war they had had during their lifetime before the child's birth and during a current war when the child was 10-12 years old. The children reported their war traumas, experiences of psychological maltreatment, attachment security, and symptoms of posttraumatic stress (PTSS), depression, and aggression. The direct and indirect intergenerational effects of war trauma were tested in structural equation models. The hypotheses were confirmed for father's past war exposure, and disconfirmed for mother's war exposure. The father's past war trauma had a negative association with attachment security and positive association with the child's mental health problems mediated by increased psychological maltreatment. In contrast, the mother's past war trauma had a negative association with the child's depression via decreased psychological maltreatment. The mother's current war trauma had a negative association with the child's depression and aggression via decreased psychological maltreatment. Among fathers, past war exposure should be considered as a risk factor for psychological maltreatment of children and the associated attachment insecurity and mental health problems. Among mothers, war exposure as such could be given less clinical attention than PTSS in the prevention of psychological maltreatment of children. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Schmidt, R. J.; Dodds, R. H., Jr.
1985-01-01
The dynamic analysis of complex structural systems using the finite element method and multilevel substructured models is presented. The fixed-interface method is selected for substructure reduction because of its efficiency, accuracy, and adaptability to restart and reanalysis. This method is extended to reduction of substructures which are themselves composed of reduced substructures. The implementation and performance of the method in a general purpose software system is emphasized. Solution algorithms consistent with the chosen data structures are presented. It is demonstrated that successful finite element software requires the use of software executives to supplement the algorithmic language. The complexity of the implementation of restart and reanalysis procedures illustrates the need for executive systems to support the noncomputational aspects of the software. It is shown that significant computational efficiencies can be achieved through proper use of substructuring and reduction techniques without sacrificing solution accuracy. The restart and reanalysis capabilities and the flexible procedures for multilevel substructured modeling give economical yet accurate analyses of complex structural systems.
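The fixed-interface reduction named in the abstract is commonly known as the Craig-Bampton method: interior degrees of freedom are represented by a few clamped-boundary normal modes plus static constraint modes, while boundary DOFs are kept physically. The sketch below is a minimal dense-matrix version, assuming K and M are ordered interior-first; production substructuring codes work with sparse, multilevel data structures instead.

```python
import numpy as np

def craig_bampton(K, M, ni, n_modes):
    """Fixed-interface (Craig-Bampton) reduction sketch. K, M are ordered
    with the ni interior DOFs first and the boundary DOFs last; the basis
    keeps n_modes fixed-interface normal modes plus one static constraint
    mode per boundary DOF."""
    Kii, Kib, Mii = K[:ni, :ni], K[:ni, ni:], M[:ni, :ni]
    Psi = -np.linalg.solve(Kii, Kib)            # constraint modes
    # fixed-interface normal modes: K_ii * phi = w^2 * M_ii * phi
    w2, Phi = np.linalg.eig(np.linalg.solve(Mii, Kii))
    Phi = np.real(Phi[:, np.argsort(np.real(w2))[:n_modes]])
    nb = K.shape[0] - ni
    T = np.block([[Phi, Psi],
                  [np.zeros((nb, n_modes)), np.eye(nb)]])
    return T.T @ K @ T, T.T @ M @ T

# small demo: fixed-free chain of 6 unit masses/springs, last DOF = boundary
n = 6
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
K[-1, -1] = 1.0
M = np.eye(n)
Kr, Mr = craig_bampton(K, M, ni=5, n_modes=3)   # 6 DOFs -> 4 DOFs
```

Even with only three interior modes kept, the reduced model reproduces the chain's fundamental frequency closely, which is the accuracy property the paper exploits.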
Nonlocal Total-Variation-Based Speckle Filtering for Ultrasound Images.
Wen, Tiexiang; Gu, Jia; Li, Ling; Qin, Wenjian; Wang, Lei; Xie, Yaoqin
2016-07-01
Ultrasound is one of the most important medical imaging modalities owing to its real-time and portable imaging advantages. However, contrast resolution and important details are degraded by speckle in ultrasound images. Many speckle filtering methods have been developed, but they suffer from several limitations and find it difficult to reach a balance between speckle reduction and edge preservation. In this paper, an adaptation of the nonlocal total variation (NLTV) filter is proposed for speckle reduction in ultrasound images. The speckle is modeled via a signal-dependent noise distribution for log-compressed ultrasound images. Instead of the Euclidean distance, the statistical Pearson distance is introduced in this study for the similarity calculation between image patches via the Bayesian framework. The fast Split-Bregman algorithm is used to solve the adapted NLTV despeckling functional. Experimental results on synthetic and clinical ultrasound images, and comparisons with some classical and recent algorithms, demonstrate improvements in both speckle noise reduction and tissue boundary preservation for ultrasound images. © The Author(s) 2015.
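The key substitution described in the abstract is the patch distance: a Pearson-type distance replaces the Euclidean one so that the similarity measure respects signal-dependent noise. The abstract does not spell out its exact form; the chi-square-style distance below is one common choice in the speckle literature and is offered only as an illustrative assumption, with the function name, patch radius, and smoothing parameter all hypothetical.

```python
import numpy as np

def nl_weight_pearson(img, p, q, half=1, h=0.5):
    """Nonlocal similarity weight between the patches centered at p and q,
    using a Pearson (chi-square-like) distance in place of the Euclidean
    distance, so that bright (high-variance) regions are not penalized
    merely for their signal-dependent noise."""
    def patch(center):
        i, j = center
        return img[i - half:i + half + 1, j - half:j + half + 1].ravel()
    a, b = patch(p), patch(q)
    d = np.sum((a - b) ** 2 / (a + b + 1e-12))   # Pearson-type distance
    return float(np.exp(-d / (h * h)))            # similarity in (0, 1]
```

In the NLTV setting, weights of this kind define the nonlocal gradient in the despeckling functional that the Split-Bregman iterations then minimize.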
Integrated Model Reduction and Control of Aircraft with Flexible Wings
NASA Technical Reports Server (NTRS)
Swei, Sean Shan-Min; Zhu, Guoming G.; Nguyen, Nhan T.
2013-01-01
This paper presents an integrated approach to the modeling and control of aircraft with flexible wings. The coupled aircraft rigid body dynamics with a high-order elastic wing model can be represented in a finite-dimensional state-space form. Given a set of desired output covariances, a model reduction process is performed by using the weighted Modal Cost Analysis (MCA). A dynamic output feedback controller, which is designed based on the reduced-order model, is developed by utilizing the output covariance constraint (OCC) algorithm, and the resulting OCC design weighting matrix is used for the next iteration of the weighted cost analysis. This controller is then validated on the full-order evaluation model to ensure that the aircraft's handling qualities are met and the fluttering motion of the wings is suppressed. An iterative algorithm is developed in the CONDUIT environment to realize the integration of model reduction and controller design. The proposed integrated approach is applied to the NASA Generic Transport Model (GTM) for demonstration.
Spectral Regression Discriminant Analysis for Hyperspectral Image Classification
NASA Astrophysics Data System (ADS)
Pan, Y.; Wu, J.; Huang, H.; Liu, J.
2012-08-01
Dimensionality reduction algorithms, which aim to select a small set of efficient and discriminant features, have attracted great attention for hyperspectral image classification. Manifold learning methods such as Locally Linear Embedding, Isomap, and Laplacian Eigenmap are popular for dimensionality reduction. However, a disadvantage of many manifold learning methods is that their computations usually involve eigen-decomposition of dense matrices, which is expensive in both time and memory. In this paper, we introduce a new dimensionality reduction method, called Spectral Regression Discriminant Analysis (SRDA). SRDA casts the problem of learning an embedding function into a regression framework, which avoids eigen-decomposition of dense matrices. Also, with the regression-based framework, different kinds of regularizers can be naturally incorporated into our algorithm, which makes it more flexible. It can make efficient use of data points to discover the intrinsic discriminant structure in the data. Experimental results on the Washington DC Mall and AVIRIS Indian Pines hyperspectral data sets demonstrate the effectiveness of the proposed method.
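The regression reformulation at the heart of SRDA can be sketched in a few lines: class-indicator response vectors (orthogonalized against the constant vector) stand in for the eigenvectors of the LDA eigenproblem, and a regularized least-squares solve replaces the dense eigen-decomposition. This is a minimal sketch following the general SRDA idea (Cai et al.), not the paper's exact implementation; the ridge parameter and response construction details are illustrative assumptions.

```python
import numpy as np

def srda(X, labels, alpha=0.01):
    """Spectral Regression Discriminant Analysis sketch.
    Step 1: build c-1 orthogonalized class-indicator response vectors.
    Step 2: ridge-regress X onto them, avoiding any dense eigenproblem."""
    classes = np.unique(labels)
    n = X.shape[0]
    Y = np.stack([(labels == c).astype(float) for c in classes], axis=1)
    # orthogonalize indicators against the all-ones vector via QR
    Q = np.linalg.qr(np.column_stack([np.ones(n), Y]))[0]
    Y = Q[:, 1:classes.size]                 # (n, c-1) response vectors
    A = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ Y)       # projection matrix (d, c-1)
```

For c classes the learned projection has c-1 columns, matching the dimensionality of classical LDA.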
An efficient algorithm using matrix methods to solve wind tunnel force-balance equations
NASA Technical Reports Server (NTRS)
Smith, D. L.
1972-01-01
An iterative procedure applying matrix methods to accomplish an efficient algorithm for automatic computer reduction of wind-tunnel force-balance data has been developed. Balance equations are expressed in a matrix form that is convenient for storing balance sensitivities and interaction coefficient values for online or offline batch data reduction. The convergence of the iterative values to a unique solution of this system of equations is investigated, and it is shown that for balances which satisfy the criteria discussed, this type of solution does occur. Methods for making sensitivity adjustments and initial load effect considerations in wind-tunnel applications are also discussed, and the logic for determining the convergence accuracy limits for the iterative solution is given. This more efficient data reduction program is compared with the technique presently in use at the NASA Langley Research Center, and computational times on the order of one-third or less are demonstrated by use of this new program.
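The iterative scheme described above can be sketched as a fixed-point loop: solve the linear sensitivity system first, then repeatedly correct the readings for the interaction terms until the loads converge. The sketch assumes a second-order interaction model (readings = S·f + C·q(f) with q the pairwise load products); actual balance calibrations vary in the interaction terms they carry, so the structure of q and the matrix shapes here are illustrative assumptions.

```python
import numpy as np

def solve_balance(readings, S, C, tol=1e-12, max_iter=200):
    """Iteratively invert balance equations of the assumed form
    r = S f + C q(f): S holds the primary sensitivities, C the
    interaction coefficients, q(f) the pairwise load products."""
    n = len(readings)
    def q(f):
        return np.array([f[i] * f[j] for i in range(n) for j in range(i, n)])
    f = np.linalg.solve(S, readings)            # first-order estimate
    for _ in range(max_iter):
        f_new = np.linalg.solve(S, readings - C @ q(f))
        if np.max(np.abs(f_new - f)) < tol:
            break
        f = f_new
    return f_new
```

Because the interaction terms are small relative to the primary sensitivities, the iteration is a contraction and converges to the unique load solution, which is the convergence property the article investigates.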
Data reduction using cubic rational B-splines
NASA Technical Reports Server (NTRS)
Chou, Jin J.; Piegl, Les A.
1992-01-01
A geometric method is proposed for fitting rational cubic B-spline curves to data that represent smooth curves, including intersection or silhouette lines. The algorithm is based on the convex hull and the variation diminishing properties of Bezier/B-spline curves. The algorithm has the following structure: it tries to fit one Bezier segment to the entire data set and, if that is impossible, it subdivides the data set and reconsiders the subset. After accepting the subset, the algorithm tries to find the longest run of points within a tolerance and then approximates this set with a cubic Bezier segment. The algorithm applies this procedure repeatedly to the rest of the data points until all points are fitted. It is concluded that the algorithm delivers fitting curves which approximate the data with high accuracy, even in cases with large tolerances.
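The segment-by-segment structure described above can be sketched with a simple least-squares Bezier fit and a greedy longest-run search. This is only an illustrative skeleton of the idea, assuming chord-length parameterization and an unconstrained least-squares fit; the paper's actual method uses convex-hull and variation-diminishing tests rather than a residual check, and enforces continuity between segments.

```python
import numpy as np

def fit_bezier(pts):
    """Least-squares cubic Bezier fit with chord-length parameters.
    Returns the 4 control points and the maximum deviation."""
    d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    t = d / d[-1]
    B = np.stack([(1 - t) ** 3, 3 * t * (1 - t) ** 2,
                  3 * t ** 2 * (1 - t), t ** 3], axis=1)
    ctrl, *_ = np.linalg.lstsq(B, pts, rcond=None)
    err = np.max(np.linalg.norm(B @ ctrl - pts, axis=1))
    return ctrl, err

def greedy_segments(pts, tol):
    """Greedy sketch of the paper's structure: fit the longest run of
    points one cubic Bezier approximates within tol, then continue
    from the end of that run."""
    segs, i = [], 0
    while i < len(pts) - 1:
        for j in range(len(pts), i + 1, -1):
            ctrl, err = fit_bezier(pts[i:j])
            if err <= tol or j - i <= 2:
                break
        segs.append(ctrl)
        i = j - 1
    return segs
```

On smooth data a single segment often suffices, which is the data-reduction payoff: many sample points collapse to four control points per segment.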
Infrastructure system restoration planning using evolutionary algorithms
Corns, Steven; Long, Suzanna K.; Shoberg, Thomas G.
2016-01-01
This paper presents an evolutionary algorithm to address restoration issues for supply chain interdependent critical infrastructure. Rapid restoration of infrastructure after a large-scale disaster is necessary to sustaining a nation's economy and security, but such long-term restoration has not been investigated as thoroughly as initial rescue and recovery efforts. A model of the Greater Saint Louis Missouri area was created and a disaster scenario simulated. An evolutionary algorithm is used to determine the order in which the bridges should be repaired based on indirect costs. Solutions were evaluated based on the reduction of indirect costs and the restoration of transportation capacity. When compared to a greedy algorithm, the evolutionary algorithm solution reduced indirect costs by approximately 12.4% by restoring automotive travel routes for workers and re-establishing the flow of commodities across the three rivers in the Saint Louis area.
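The search problem above, choosing a repair order that minimizes accumulated indirect costs, can be illustrated with a toy elitist evolutionary algorithm over permutations. The cost model below (each unrepaired bridge accrues a daily indirect cost until its turn, one time unit per repair) is a deliberately simplified stand-in for the paper's Greater Saint Louis model; population size, mutation operator, and cost figures are all illustrative assumptions.

```python
import random

def indirect_cost(order, daily_cost):
    """Toy objective: bridge b repaired at step t accrues its daily
    indirect cost for t+1 time units before it reopens."""
    return sum(daily_cost[b] * (t + 1) for t, b in enumerate(order))

def evolve_order(daily_cost, pop=30, gens=200, seed=0):
    """Elitist EA over repair-order permutations with swap mutation."""
    rng = random.Random(seed)
    n = len(daily_cost)
    population = [rng.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda o: indirect_cost(o, daily_cost))
        parents = population[:pop // 2]          # keep the best half
        children = []
        for p in parents:
            c = p[:]
            i, j = rng.randrange(n), rng.randrange(n)
            c[i], c[j] = c[j], c[i]              # swap mutation
            children.append(c)
        population = parents + children
    return min(population, key=lambda o: indirect_cost(o, daily_cost))
```

For this toy objective the optimum is simply "most costly bridge first" (an exchange argument), which gives a convenient check that the EA converges.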
Development of a Novel Locomotion Algorithm for Snake Robot
NASA Astrophysics Data System (ADS)
Khan, Raisuddin; Masum Billah, Md; Watanabe, Mitsuru; Shafie, A. A.
2013-12-01
A novel algorithm for snake robot locomotion is developed and analyzed in this paper. Serpentine locomotion is one of the best-known gaits for snake robots in disaster recovery missions that require navigating narrow spaces. Other gaits, such as concertina or rectilinear, may be suitable for narrow spaces but are highly inefficient if the same gait is used in open spaces, where the resulting reduction in friction makes snake movement difficult. A novel locomotion algorithm is proposed based on modifications to the multi-link snake robot; the modifications include alterations to the snake segments as well as elements that mimic the scales on the underside of a snake's body. Using the developed locomotion algorithm, the snake robot can navigate in narrow spaces, overcoming the limitations of the other gaits in narrow-space navigation.
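As background to the gait discussion above, the classic serpentine pattern is usually generated by driving each joint with a phase-shifted sinusoid (the serpenoid curve), so a traveling wave propagates down the body. The sketch below shows that standard generator, not the paper's modified algorithm; the parameter values are illustrative assumptions.

```python
import math

def serpenoid_angles(n_joints, t, alpha=0.5, omega=2.0, beta=0.6, gamma=0.0):
    """Standard serpenoid gait generator: joint i follows a sinusoid with
    amplitude alpha, temporal frequency omega, per-joint phase lag beta,
    and steering offset gamma, producing a traveling body wave."""
    return [alpha * math.sin(omega * t + i * beta) + gamma
            for i in range(n_joints)]
```

The phase lag beta is what makes the wave travel: joint i+1 at time t matches joint i at time t + beta/omega.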
Ain't Gonna Study War No More? Explorations of War through Picture Books
ERIC Educational Resources Information Center
Crawford, Patricia A.; Roberts, Sherron Killingsworth
2009-01-01
At the height of the Vietnam War, Down by the Riverside was transformed from a traditional folk song to a popular anti-war anthem. The raucous and repetitive chorus, "I ain't gonna study war no more ...," became a rallying cry for those who wanted nothing to do with the war and the pain and controversy that surrounded it. Although it seems…
Post-processing interstitialcy diffusion from molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Bhardwaj, U.; Bukkuru, S.; Warrier, M.
2016-01-01
An algorithm to rigorously trace the interstitialcy diffusion trajectory in crystals is developed. The algorithm incorporates unsupervised learning and graph optimization which obviate the need to input extra domain specific information depending on crystal or temperature of the simulation. The algorithm is implemented in a flexible framework as a post-processor to molecular dynamics (MD) simulations. We describe in detail the reduction of interstitialcy diffusion into known computational problems of unsupervised clustering and graph optimization. We also discuss the steps, computational efficiency and key components of the algorithm. Using the algorithm, thermal interstitialcy diffusion from low to near-melting point temperatures is studied. We encapsulate the algorithms in a modular framework with functionality to calculate diffusion coefficients, migration energies and other trajectory properties. The study validates the algorithm by establishing the conformity of output parameters with experimental values and provides detailed insights for the interstitialcy diffusion mechanism. The algorithm along with the help of supporting visualizations and analysis gives convincing details and a new approach to quantifying diffusion jumps, jump-lengths, time between jumps and to identify interstitials from lattice atoms.
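One of the framework outputs mentioned above, the migration energy, is conventionally obtained from diffusion coefficients at several temperatures via an Arrhenius fit. The sketch below shows that standard post-processing step, assuming the relation D = D0 exp(-Em / (kB T)); it is generic thermodynamics, not the paper's trajectory-tracing algorithm itself.

```python
import numpy as np

def arrhenius_fit(temps_K, D):
    """Extract migration energy Em (eV) and prefactor D0 from diffusion
    coefficients via D = D0 * exp(-Em / (kB T)): the slope of ln D
    against 1/T is -Em / kB."""
    kB = 8.617333e-5                      # Boltzmann constant, eV/K
    slope, intercept = np.polyfit(1.0 / np.asarray(temps_K), np.log(D), 1)
    return -slope * kB, float(np.exp(intercept))
```

Jump counts and jump lengths from the traced trajectories feed the per-temperature D values that this fit consumes.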
Algorithms for detecting antibodies to HIV-1: results from a rural Ugandan cohort.
Nunn, A J; Biryahwaho, B; Downing, R G; van der Groen, G; Ojwiya, A; Mulder, D W
1993-08-01
To evaluate an algorithm using two enzyme immunoassays (EIA) for anti-HIV-1 antibodies in a rural African population and to assess alternative simplified algorithms. Sera obtained from 7895 individuals in a rural population survey were tested using an algorithm based on two different EIA systems: Recombigen HIV-1 EIA and Wellcozyme HIV-1 Recombinant. Alternative algorithms were assessed using negative or confirmed positive sera. None of the 227 sera classified as unequivocally negative by the two assays were positive by Western blot. Of 192 sera unequivocally positive by both assays, four were seronegative by Western blot. The possibility of technical error cannot be ruled out in three of these. One of the alternative algorithms assessed, which classified all borderline or discordant assay results as negative, had a specificity of 100% and a sensitivity of 98.4%. The cost of this algorithm is one-third that of the conventional algorithm. Our evaluation suggests that high specificity and sensitivity can be obtained without using Western blot and at a considerable reduction in cost.
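The cost-saving alternative algorithm described in the abstract reduces to a short decision rule: any borderline or discordant pair of EIA results is reported negative, and only sera unequivocally positive on both assays are reported positive, with no routine Western blot confirmation. The function name and boolean interface below are illustrative.

```python
def classify_serum(eia1_positive, eia2_positive, borderline=False):
    """Sketch of the simplified two-EIA algorithm from the abstract:
    borderline or discordant results -> negative; only sera positive
    on both assays are reported positive."""
    if borderline or eia1_positive != eia2_positive:
        return "negative"
    return "positive" if eia1_positive else "negative"
```

Treating ambiguous results as negative is what yields the reported 100% specificity at a small sensitivity cost (98.4%) and one-third of the conventional algorithm's cost.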
Post-processing interstitialcy diffusion from molecular dynamics simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhardwaj, U., E-mail: haptork@gmail.com; Bukkuru, S.; Warrier, M.
2016-01-15
An algorithm to rigorously trace the interstitialcy diffusion trajectory in crystals is developed. The algorithm incorporates unsupervised learning and graph optimization which obviate the need to input extra domain specific information depending on crystal or temperature of the simulation. The algorithm is implemented in a flexible framework as a post-processor to molecular dynamics (MD) simulations. We describe in detail the reduction of interstitialcy diffusion into known computational problems of unsupervised clustering and graph optimization. We also discuss the steps, computational efficiency and key components of the algorithm. Using the algorithm, thermal interstitialcy diffusion from low to near-melting point temperatures is studied. We encapsulate the algorithms in a modular framework with functionality to calculate diffusion coefficients, migration energies and other trajectory properties. The study validates the algorithm by establishing the conformity of output parameters with experimental values and provides detailed insights for the interstitialcy diffusion mechanism. The algorithm along with the help of supporting visualizations and analysis gives convincing details and a new approach to quantifying diffusion jumps, jump-lengths, time between jumps and to identify interstitials from lattice atoms.
On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays
NASA Technical Reports Server (NTRS)
Shao, H. M.; Deutsch, L. J.; Reed, I. S.
1987-01-01
A new very large scale integration (VLSI) design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous article is replaced by a time domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and significant reduction in silicon area.
Preliminary flight evaluation of an engine performance optimization algorithm
NASA Technical Reports Server (NTRS)
Lambert, H. H.; Gilyard, G. B.; Chisholm, J. D.; Kerr, L. J.
1991-01-01
A performance seeking control (PSC) algorithm has undergone initial flight test evaluation in subsonic operation of a PW 1128 engined F-15. This algorithm is designed to optimize the quasi-steady performance of an engine for three primary modes: (1) minimum fuel consumption; (2) minimum fan turbine inlet temperature (FTIT); and (3) maximum thrust. The flight test results have verified a thrust specific fuel consumption reduction of 1 pct., up to 100 R decreases in FTIT, and increases of as much as 12 pct. in maximum thrust. PSC technology promises to be of value in next generation tactical and transport aircraft.
On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays
NASA Technical Reports Server (NTRS)
Shao, Howard M.; Reed, Irving S.
1988-01-01
A new very large scale integration (VLSI) design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous article is replaced by a time domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and significant reduction in silicon area.
Development of homotopy algorithms for fixed-order mixed H2/H(infinity) controller synthesis
NASA Technical Reports Server (NTRS)
Whorton, M.; Buschek, H.; Calise, A. J.
1994-01-01
A major difficulty associated with H-infinity and mu-synthesis methods is the order of the resulting compensator. Whereas model and/or controller reduction techniques are sometimes applied, performance and robustness properties are not preserved. By directly constraining compensator order during the optimization process, these properties are better preserved, albeit at the expense of computational complexity. This paper presents a novel homotopy algorithm to synthesize fixed-order mixed H2/H-infinity compensators. Numerical results are presented for a four-disk flexible structure to evaluate the efficiency of the algorithm.
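The continuation idea underlying homotopy synthesis can be illustrated on a scalar root-finding problem: deform an easy problem with a known solution into the target problem, tracking the solution with Newton corrections at each step. This is only the generic homotopy mechanism, not the paper's fixed-order H2/H-infinity algorithm; the step count and finite-difference derivative are illustrative choices.

```python
def homotopy_root(f, x0, steps=100, tol=1e-10):
    """Scalar Newton-homotopy sketch: H(x, t) = f(x) - (1 - t) * f(x0)
    has the trivial root x0 at t=0 and a root of f at t=1. March t from
    0 to 1, applying Newton corrections (finite-difference derivative)
    from each warm start."""
    x, f0 = x0, f(x0)
    for k in range(1, steps + 1):
        target = (1 - k / steps) * f0
        for _ in range(50):                     # Newton correction phase
            r = f(x) - target
            if abs(r) < tol:
                break
            h = 1e-7 * (1 + abs(x))
            dfdx = (f(x + h) - f(x - h)) / (2 * h)
            x -= r / dfdx
    return x
```

In the compensator-synthesis setting, the same marching structure deforms an easily solved design problem into the constrained fixed-order one while preserving a solution along the path.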
Effect of war on the menstrual cycle.
Hannoun, Antoine B; Nassar, Anwar H; Usta, Ihab M; Zreik, Tony G; Abu Musa, Antoine A
2007-04-01
To study the effect of a short period of war on the menstrual cycles of exposed women. Six months after a 16-day war, women in exposed villages aged 15-45 years were asked to complete a questionnaire relating to their menstrual history at the beginning, 3 months after, and 6 months after the war. A control group, not exposed to war, was also interviewed. The data collected were analyzed to estimate the effect of war on three groups of women: those who stayed in the war zone for 3-16 days (Group A), those who were displaced within 2 days to safer areas (Group B), and women not exposed to war or displacement (Group C-control). More than 35% of women in Group A and 10.5% in Group B had menstrual aberrations 3 months after the cessation of the war. These percentages were significantly different from each other and from that in Group C (2.6%). Six months after the war most women regained their regular menstrual cycles, with the exception of 18.6% in Group A. We found that a short period of war, acting as an acute stressful condition, resulted in menstrual abnormalities in 10-35% of women, probably related to the duration of exposure to war. These abnormalities might last beyond the war itself and for more than one or two cycles. In most women the irregular cycles reversed without any medical intervention. Evidence level: II.
Code of Federal Regulations, 2014 CFR
2014-10-01
... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION EMERGENCY OPERATIONS WAR RISK INSURANCE War Risk Cargo Insurance Facultative War Risk Cargo Insurance § 308.539 Application. (a) Preliminary request. Application for a... bind war risk in writing; for security reasons, if the applicant is submitting the order to bind war...
Gender, Attitudes Toward War, and Masculinities in Japan.
Morinaga, Yasuko; Sakamoto, Yuiri; Nakashima, Ken'ichiro
2017-06-01
Previous studies have argued that masculinity is linked to war. We conducted a web-based survey to examine relationships between gender, attitudes toward war, and masculinities within a sample of Japanese adults of both sexes (N = 366). Our results indicated that while men were more likely than women to accept war, the relationship between attitudes toward war and masculinities was inconclusive. Moreover, the results suggested that favorable attitudes toward war among men could be attenuated by interpersonal orientations. Based on our findings, we recommend a reexamination of attitudes toward war within the Japanese population.
[Axel Strøm--pioneer of social medicine and administrator].
Sundby, Per
2002-01-10
Dr Axel Strøm (1901-85), professor in the University of Oslo from 1940 to 1970, was a leader in Norwegian medicine in the latter half of the 20th century. He qualified in 1926 and in 1936 gained a doctorate with a dissertation on the toxin production of Corynebacterium diphtheriae. His first appointment as a professor was in hygiene. In 1951 he moved on to public health, a field that he pioneered in Norway and the other Scandinavian countries. As a professor during the German occupation of Norway in the Second World War, he joined the university's resistance against the Nazi authorities' attempts at taking control. When the war was over he became deeply involved in research on the impact of war on health. At a time when the study of the impact of lifestyle factors was still in its infancy, he suggested that the war-induced reduction in dietary fat consumption might be the cause of observed lower cardiovascular mortality. Of more practical importance were the studies he initiated of the mainly psychological late-onset effects of traumas suffered by prisoners in German camps, seamen, soldiers and other exposed groups. In this area, too, he was an early explorer of what has come to be known as post-traumatic stress disorder. His efforts led to improved war pension entitlements for the victims. Over the years, exposed groups became his major professional interest as a public health specialist. In his academic work, Dr Strøm also pioneered medical ethics, care for the elderly, legislation on abortion, and the rapidly expanding field of the medical basis for social security benefits. As a practising physician he was in the vanguard of occupational medicine and other kinds of preventive medicine. What brought him most recognition was, however, his leading role over many years in the Norwegian Medical Association and in the University of Oslo.
He served as chairman of the Junior Hospital Doctors Association, president of the Norwegian Medical Association and chairman of the Federation of Norwegian Professional Associations. He was elected dean of the Faculty of Medicine and vice-rector of the University of Oslo, in addition to a host of other expert assignments and official roles. He was renowned for his hard work and exerted great influence in many quarters.
Horn, Oded; Hull, Lisa; Jones, Margaret; Murphy, Dominic; Browne, Tess; Fear, Nicola T; Hotopf, Matthew; Rona, Roberto J; Wessely, Simon
2006-05-27
UK armed forces personnel who took part in the 1991 Gulf war experienced an increase in symptomatic ill health, colloquially known as Gulf war syndrome. Speculation about an Iraq war syndrome has already started. We compared the health of male regular UK armed forces personnel deployed to Iraq during the 2003 war (n=3642) with that of their colleagues who were not deployed (n=4295), and compared these findings with those from our previous survey after the 1991 war. Data were obtained by questionnaire. Graphs comparing frequencies of 50 non-specific symptoms in the past month in deployed and non-deployed groups did not show an increase in prevalence of symptoms equivalent to that observed after the Gulf war. For the Iraq war survey, odds ratios (ORs) for self-reported symptoms ranged from 0.8 to 1.3. Five symptoms were significantly increased, and two decreased, in deployed individuals, whereas prevalence greatly increased for all symptoms in the Gulf war study (ORs 1.9-3.9). Fatigue was not increased after the 2003 Iraq war (OR 1.08; 95% CI 0.98-1.19) but was greatly increased after the 1991 Gulf war (3.39; 3.00-3.83). Personnel deployed to the Gulf war were more likely (2.00, 1.70-2.35) than those not deployed to report their health as fair or poor; no such effect was found for the Iraq war (0.94, 0.82-1.09). Increases in common symptoms in the 2003 Iraq war group were slight, and no pattern suggestive of a new syndrome was present. We consider several explanations for these differences.
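The odds ratios and 95% confidence intervals quoted above are standard epidemiological quantities derived from 2x2 tables of symptom counts in deployed versus non-deployed groups. A sketch with hypothetical counts (not the study's data), using Woolf's log-scale method for the confidence interval:

```python
import math

# Hypothetical 2x2 counts (NOT the study's data):
# symptom present / absent in the deployed and non-deployed groups.
deployed_sym, deployed_no = 400, 3242
control_sym, control_no = 380, 3915

# Odds ratio: odds of the symptom in deployed vs non-deployed personnel
odds_ratio = (deployed_sym * control_no) / (deployed_no * control_sym)

# Woolf's method: standard error of log(OR) from the four cell counts
se_log_or = math.sqrt(
    1 / deployed_sym + 1 / deployed_no + 1 / control_sym + 1 / control_no
)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
```

An OR whose confidence interval spans 1.0, as for several symptoms in the Iraq war survey, is consistent with no deployment effect; the large Gulf war ORs quoted above exclude 1.0 by a wide margin.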