Sample records for algorithm project tmap

  1. The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.

    PubMed

    Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P

    1999-10-01

    In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.

  2. Texas Medication Algorithm Project: development and feasibility testing of a treatment algorithm for patients with bipolar disorder.

    PubMed

    Suppes, T; Swann, A C; Dennehy, E B; Habermacher, E D; Mason, M; Crismon, M L; Toprac, M G; Rush, A J; Shon, S P; Altshuler, K Z

    2001-06-01

    Use of treatment guidelines for treatment of major psychiatric illnesses has increased in recent years. The Texas Medication Algorithm Project (TMAP) was developed to study the feasibility and process of developing and implementing guidelines for bipolar disorder, major depressive disorder, and schizophrenia in the public mental health system of Texas. This article describes the consensus process used to develop the first set of TMAP algorithms for the Bipolar Disorder Module (Phase 1) and the trial testing the feasibility of their implementation in inpatient and outpatient psychiatric settings across Texas (Phase 2). The feasibility trial answered core questions regarding implementation of treatment guidelines for bipolar disorder. A total of 69 patients were treated with the original algorithms for bipolar disorder developed in Phase 1 of TMAP. Results indicated that physicians accepted the guidelines, followed recommendations to see patients at certain intervals, and utilized sequenced treatment steps differentially over the course of treatment. While improvements in clinical symptoms (24-item Brief Psychiatric Rating Scale) were observed over the course of enrollment in the trial, these conclusions are limited by the fact that physician volunteers were utilized for both treatment and ratings, and there was no control group. Results from Phases 1 and 2 indicate that it is possible to develop and implement a treatment guideline for patients with a history of mania in public mental health clinics in Texas. TMAP Phase 3, a recently completed larger and controlled trial assessing the clinical and economic impact of treatment guidelines and patient and family education in the public mental health system of Texas, improves upon this methodology.

  3. A survey of psychiatrists' attitudes toward treatment guidelines.

    PubMed

    Healy, Daniel J; Goldman, Mona; Florence, Timothy; Milner, Karen K

    2004-04-01

    We developed a survey to look at psychiatrists' attitudes toward psychotropic prescribing guidelines, specifically the Texas Medication Algorithm Project (TMAP) algorithms. The 22-page survey was distributed to 24 psychiatrists working in 4 community mental health centers (CMHCs); 13 completed the survey. Ninety percent agreed that guidelines should be general and flexible. The majority also agreed that guidelines should define how to measure response to a specific agent; fewer agreed that guidelines should specify dosage, side effect management, or augmentation strategies. Psychiatrists were familiar with TMAP, but none referred to it in their practice. In spite of this, psychiatrists' medication preferences were similar to those suggested by guidelines.

  4. The art and science of switching antipsychotic medications, part 2.

    PubMed

    Weiden, Peter J; Miller, Alexander L; Lambert, Tim J; Buckley, Peter F

    2007-01-01

    In the presentation "Switching and Metabolic Syndrome," Weiden summarizes reasons to switch antipsychotics, highlighting weight gain and other metabolic adverse events as recent treatment targets. In "Texas Medication Algorithm Project (TMAP)," Miller reviews the TMAP study design, discusses results related to the algorithm versus treatment as usual, and concludes with the implications of the study. Lambert's presentation, "Dosing and Titration Strategies to Optimize Patient Outcome When Switching Antipsychotic Therapy," reviews the decision-making process when switching patients' medication, addresses dosing and titration strategies to effectively transition between medications, and examines other factors to consider when switching pharmacotherapy.

  5. The Texas medication algorithm project: clinical results for schizophrenia.

    PubMed

    Miller, Alexander L; Crismon, M Lynn; Rush, A John; Chiles, John; Kashner, T Michael; Toprac, Marcia; Carmody, Thomas; Biggs, Melanie; Shores-Wilson, Kathy; Chiles, Judith; Witte, Brad; Bow-Thomas, Christine; Velligan, Dawn I; Trivedi, Madhukar; Suppes, Trisha; Shon, Steven

    2004-01-01

    In the Texas Medication Algorithm Project (TMAP), patients were given algorithm-guided treatment (ALGO) or treatment as usual (TAU). The ALGO intervention included a clinical coordinator to assist the physicians and administer a patient and family education program. The primary comparison in the schizophrenia module of TMAP was between patients seen in clinics in which ALGO was used (n = 165) and patients seen in clinics in which no algorithms were used (n = 144). A third group of patients, seen in clinics using an algorithm for bipolar or major depressive disorder but not for schizophrenia, was also studied (n = 156). The ALGO group had modestly greater improvement in symptoms (Brief Psychiatric Rating Scale) during the first quarter of treatment. The TAU group caught up by the end of 12 months. Cognitive functions were more improved in ALGO than in TAU at 3 months, and this difference was greater at 9 months (the final cognitive assessment). In secondary comparisons of ALGO with the second TAU group, the greater improvement in cognitive functioning was again noted, but the initial symptom difference was not significant.

  6. Algorithms for optimizing the treatment of depression: making the right decision at the right time.

    PubMed

    Adli, M; Rush, A J; Möller, H-J; Bauer, M

    2003-11-01

    Medication algorithms for the treatment of depression are designed to optimize both treatment implementation and the appropriateness of treatment strategies. Thus, they are essential tools for treating and avoiding refractory depression. Treatment algorithms are explicit treatment protocols that provide specific therapeutic pathways and decision-making tools at critical decision points throughout the treatment process. The present article provides an overview of major projects of algorithm research in the field of antidepressant therapy. The Berlin Algorithm Project and the Texas Medication Algorithm Project (TMAP) compare algorithm-guided treatments with treatment as usual. The Sequenced Treatment Alternatives to Relieve Depression Project (STAR*D) compares different treatment strategies in treatment-resistant patients.

  7. Screening for mental illness: the merger of eugenics and the drug industry.

    PubMed

    Sharav, Vera Hassner

    2005-01-01

    The implementation of a recommendation by the President's New Freedom Commission (NFC) to screen the entire United States population--children first--for presumed, undetected, mental illness is an ill-conceived policy destined for disastrous consequences. The "pseudoscientific" methods used to screen for mental and behavioral abnormalities are a legacy from the discredited ideology of eugenics. Both eugenics and psychiatry suffer from a common philosophical fallacy that undermines the validity of their theories and prescriptions. Both are wed to a faith-based ideological assumption that mental and behavioral manifestations are biologically determined, and are, therefore, ameliorated by biological interventions. NFC promoted the Texas Medication Algorithm Project (TMAP) as a "model" medication treatment plan. The impact of TMAP is evident in the skyrocketing increase in psychotropic drug prescriptions for children and adults, and in the disproportionate expenditure for psychotropic drugs. The New Freedom Commission's screening for mental illness initiative is, therefore, but the first step toward prescribing drugs. The escalating expenditure for psychotropic drugs since TMAP leaves little doubt about who the beneficiaries of TMAP are. Screening for mental illness will increase their use.

  8. Enhancement of the Daytime MODIS Based Aircraft Icing Potential Algorithm Using Mesoscale Model Data

    DTIC Science & Technology

    2006-03-01

    Excerpt (recovered figure and table captions): Figure 25. ROC curves using 3-hour PIREPs and the Alexander Tmap, with symbols plotted at the 0.5 threshold values. Figure 26. ROC curves using 3-hour PIREPs and the Alexander Tmap, with symbols plotted at the 0.5 threshold values. Table 4. Results using T icing potential values from the Alexander Tmap and 3-hour PIREPs.

  9. Aircraft Survivability: Aircraft Battle Damage and Repair, Summer 2007

    DTIC Science & Technology

    2007-01-01

    Modeling and Analysis Program (TMAP) Missile Modeling System for Advanced Investigation of Countermeasures (MOSAIC) & Joint Surface-to-Air Missile... TMAP Threat System Models (TSM) into engagement simulations (MOSAIC [IR] and JSAMS [RF]). This 3-year project will integrate and fully test six... (three per engagement simulation) JASC priority TMAP TSMs in official releases of MOSAIC and JSAMS. Project Engineers: Luke Borntrager (USAF, AFRL) and

  10. The Texas Medication Algorithm Project Patient and Family Education Program: a consumer-guided initiative.

    PubMed

    Toprac, M G; Rush, A J; Conner, T M; Crismon, M L; Dees, M; Hopkins, C; Rowe, V; Shon, S P

    2000-07-01

    Educating patients with mental illness and their families about the illness and its treatment is essential to successful medication (disease) management. Specifically, education provides patients and families with the background they need to participate in treatment planning and implementation as full "partners" with clinicians. Thus, education increases the probability that appropriate and accurate treatment decisions will be made and that a treatment regimen will be followed. The Texas Medication Algorithm Project (TMAP) has incorporated these concepts into its philosophy of care and accordingly created a Patient and Family Education Program (PFEP) to complement the utilization of medication algorithms for the treatment of schizophrenic, bipolar, and major depressive disorders. This article describes how a team of mental health consumers, advocates, and professionals developed and implemented the PFEP. In keeping with the TMAP philosophy of care, consumers were true partners in the program's development and implementation. They not only created several components of the program and incorporated the consumer perspective, but they also served as program trainers and advocates. Initially, PFEP provides basic and subsequently more in-depth information about the illness and its treatment, including such topics as symptom monitoring and management and self-advocacy with one's treatment team. It includes written, pictorial, videotaped, and other media used in a phased manner by clinicians and consumer educators, in either individual or group formats.

  11. A computerized clinical decision support system as a means of implementing depression guidelines.

    PubMed

    Trivedi, Madhukar H; Kern, Janet K; Grannemann, Bruce D; Altshuler, Kenneth Z; Sunderajan, Prabha

    2004-08-01

    The authors describe the history and current use of computerized systems for implementing treatment guidelines in general medicine as well as the development, testing, and early use of a computerized decision support system for depression treatment in "real-world" clinical settings in Texas. In 1999 health care experts from Europe and the United States met to confront the well-documented challenges of implementing treatment guidelines and to identify strategies for improvement. They suggested integrating guidelines into computerized systems embedded in the clinical workflow. Several studies have demonstrated improvements in physicians' adherence to guidelines when such guidelines are provided in a computerized format. Although computerized decision support systems are being used in many areas of medicine and have demonstrated improved patient outcomes, their use in psychiatric illness is limited. The authors designed and developed a computerized decision support system for the treatment of major depressive disorder by using evidence-based guidelines, transferring the knowledge gained from the Texas Medication Algorithm Project (TMAP). This computerized decision support system (CompTMAP) provides support in diagnosis, treatment, follow-up, and preventive care and can be incorporated into the clinical setting. CompTMAP has gone through extensive testing to ensure accuracy and reliability. Physician surveys have indicated a positive response to CompTMAP, although the sample was insufficient for statistical testing. CompTMAP is part of a new era of comprehensive computerized decision support systems that take advantage of advances in automation and provide more complete clinical support to physicians in clinical practice.
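
    The measurement-based stepping at the heart of such a system can be illustrated with a short sketch. Everything below is hypothetical: the remission cutoff, response threshold, and stage-advance rule are assumed common conventions, not the actual CompTMAP rules.

      # Hypothetical sketch of measurement-based stepping logic for a
      # depression CDSS. The remission cutoff, response threshold, and
      # stage-advance rule are assumed conventions for illustration,
      # not the actual CompTMAP rules.
      from dataclasses import dataclass

      @dataclass
      class Visit:
          week: int
          hdrs17: int   # 17-item Hamilton Depression Rating Scale score

      def recommend(stage, baseline, current):
          """Suggest a next step from the current HDRS17 measurement."""
          improvement = (baseline.hdrs17 - current.hdrs17) / baseline.hdrs17
          if current.hdrs17 <= 7:        # assumed remission convention
              return "continue current regimen (remission)"
          if improvement >= 0.5:         # assumed response threshold
              return "continue and reassess (responder)"
          if current.week < 4:
              return "too early to judge; optimize dose and reassess"
          return f"advance to algorithm stage {stage + 1} (inadequate response)"

      baseline = Visit(week=0, hdrs17=24)
      print(recommend(1, baseline, Visit(week=6, hdrs17=20)))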

  12. Implementation of the Texas Medication Algorithm Project patient and family education program.

    PubMed

    Toprac, Marcia G; Dennehy, Ellen B; Carmody, Thomas J; Crismon, M Lynn; Miller, Alexander L; Trivedi, Madhukar H; Suppes, Trisha; Rush, A John

    2006-09-01

    This article describes the implementation and utilization of the patient and family education program (PFEP) component of the Texas Medication Algorithm Project (TMAP). The extent of participation, types of psychoeducation received, and predictors of receiving at least a minimum level of education are presented. TMAP included medication guidelines, a dedicated clinical coordinator, standardized assessments of symptoms and side effects, uniform documentation, and a PFEP. The PFEP includes phased, multimodal, disorder-specific educational materials for patients and families. Participants were adult outpatients of 1 of 7 community mental health centers in Texas that were implementing the TMAP disease management package. Patients had DSM-IV clinical diagnoses of major depressive disorder, with or without psychotic features; bipolar I disorder or schizoaffective disorder, bipolar type; or schizophrenia or schizoaffective disorder. Assessments were administered by independent research coordinators. Study data were collected between March 1998 and March 2000, and patients participated for at least 1 year. Of the 487 participants, nearly all (95.1%) had at least 1 educational encounter, but only 53.6% of participants met criteria for "minimum exposure" to individual education interventions. Furthermore, only 31.0% participated in group education, and 42.5% had a family member involved in at least 1 encounter. Participants with schizophrenia were less involved in the PFEP across multiple indicators of utilization. Diagnosis, intensity of symptoms, age, and receipt of public assistance were related to the likelihood of exposure to minimum levels of individual education. Despite adequate resources and infrastructure to provide PFEP, utilization was less than anticipated. Although implementation guidelines were uniform across diagnoses, participants with schizophrenia experienced less exposure to psychoeducation. Recommendations for improving program implementation and modification of materials are discussed.

  13. Texas Medication Algorithm Project, phase 3 (TMAP-3): clinical results for patients with a history of mania.

    PubMed

    Suppes, Trisha; Rush, A John; Dennehy, Ellen B; Crismon, M Lynn; Kashner, T Michael; Toprac, Marcia G; Carmody, Thomas J; Brown, E Sherwood; Biggs, Melanie M; Shores-Wilson, Kathy; Witte, Bradley P; Trivedi, Madhukar H; Miller, Alexander L; Altshuler, Kenneth Z; Shon, Steven P

    2003-04-01

    The Texas Medication Algorithm Project (TMAP) assessed the clinical and economic impact of algorithm-driven treatment (ALGO) as compared with treatment-as-usual (TAU) in patients served in public mental health centers. This report presents clinical outcomes in patients with a history of mania (BD), including bipolar I and schizoaffective disorder, bipolar type, during 12 months of treatment beginning March 1998 and ending with the final active patient visit in April 2000. Patients were diagnosed with bipolar I disorder or schizoaffective disorder, bipolar type, according to DSM-IV criteria. ALGO consisted of a medication algorithm and manual to guide treatment decisions. Physicians and clinical coordinators received training and expert consultation throughout the project. ALGO also provided a disorder-specific patient and family education package. TAU clinics had no exposure to the medication algorithms. Quarterly outcome evaluations were obtained by independent raters. Hierarchical linear modeling, based on a declining effects model, was used to assess clinical outcome of ALGO versus TAU. ALGO and TAU patients showed significant initial decreases in symptoms (p = .03 and p < .001, respectively) measured by the 24-item Brief Psychiatric Rating Scale (BPRS-24) at the 3-month assessment interval, with significantly greater effects for the ALGO group. Limited catch-up by TAU was observed over the remaining 3 quarters. Differences were also observed in measures of mania and psychosis but not in depression, side-effect burden, or functioning. For patients with a history of mania, relative to TAU, the ALGO intervention package was associated with greater initial and sustained improvement on the primary clinical outcome measure, the BPRS-24, and the secondary outcome measure, the Clinician-Administered Rating Scale for Mania (CARS-M). Further research is planned to clarify which elements of the ALGO package contributed to this between-group difference.
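
    For readers unfamiliar with hierarchical linear modeling, the sketch below shows the general shape of such an ALGO-versus-TAU analysis on simulated quarterly BPRS-24 scores. The column names, the simulated data, and the simple random-intercept specification are assumptions for illustration, not the study's exact declining-effects model.

      # Hedged sketch: a longitudinal mixed-effects ("hierarchical linear")
      # comparison of two treatment groups on simulated quarterly BPRS-24
      # scores. Columns, data, and the random-intercept specification are
      # illustrative assumptions only.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      rows = [
          {"subject": s, "group": s % 2, "month": m,
           "bprs24": 50.0 - (2.0 if s % 2 else 1.0) * m + rng.normal(0, 4)}
          for s in range(40) for m in (0, 3, 6, 9, 12)
      ]
      df = pd.DataFrame(rows)

      # Random intercept per subject; fixed effects for group, time, and
      # the group-by-time interaction (the treatment-effect term of interest).
      model = smf.mixedlm("bprs24 ~ group * month", df, groups=df["subject"])
      print(model.fit().summary())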

  14. UGV History 101: A Brief History of Unmanned Ground Vehicle (UGV) Development Efforts

    DTIC Science & Technology

    1995-01-01

    robots). These successful demonstrations led to the formulation of the Teleoperated Mobile Anti-Armor Platform (TMAP) program, and prototype systems were... Unfortunately, Congressional direction in December 1987 prohibited the emplacement of weapons systems on robots, and the TMAP was retargeted to the... Technology Demonstration project, a demonstration incorporating both the Army's TMAPs and the GATERS TOV was held at Camp Pendleton in September 1989

  15. An Integration, Long Range Planning, and Migration Guide for the Stock Point Logistics Integrated Communications Project.

    DTIC Science & Technology

    1986-03-01

    and universal terminal/printer interface mapping (TMAP) software. When the Burroughs HYPERchannel software package (i.e., Burroughs NETEX) provided... and terminal device and security functions placed under the control of the FDC's SAS/TMAP processes. Without processing efficiency enhancements, TAPS... FDC's SAS/TMAP processes. As was also previously indicated, the performance of TAPS II on TANDEM is poor today, and there are questions as to whether

  16. A Computerized Decision Support System for Depression in Primary Care

    PubMed Central

    Kurian, Benji T.; Trivedi, Madhukar H.; Grannemann, Bruce D.; Claassen, Cynthia A.; Daly, Ella J.; Sunderajan, Prabha

    2009-01-01

    Objective: In 2004, results from The Texas Medication Algorithm Project (TMAP) showed better clinical outcomes for patients whose physicians adhered to a paper-and-pencil algorithm compared to patients who received standard clinical treatment for major depressive disorder (MDD). However, implementation of and fidelity to the treatment algorithm among various providers was observed to be inadequate. A computerized decision support system (CDSS) for the implementation of the TMAP algorithm for depression has since been developed to improve fidelity and adherence to the algorithm. Method: This was a 2-group, parallel design, clinical trial (one patient group receiving MDD treatment from physicians using the CDSS and the other patient group receiving usual care) conducted at 2 separate primary care clinics in Texas from March 2005 through June 2006. Fifty-five patients with MDD (DSM-IV criteria) with no significant difference in disease characteristics were enrolled, 32 of whom were treated by physicians using CDSS and 23 were treated by physicians using usual care. The study's objective was to evaluate the feasibility and efficacy of implementing a CDSS to assist physicians acutely treating patients with MDD compared to usual care in primary care. Primary efficacy outcomes for depression symptom severity were based on the 17-item Hamilton Depression Rating Scale (HDRS17) evaluated by an independent rater. Results: Patients treated by physicians employing CDSS had significantly greater symptom reduction, based on the HDRS17, than patients treated with usual care (P < .001). Conclusions: The CDSS algorithm, utilizing measurement-based care, was superior to usual care for patients with MDD in primary care settings. Larger randomized controlled trials are needed to confirm these findings. Trial Registration: clinicaltrials.gov Identifier: NCT00551083 PMID:19750065

  17. A computerized decision support system for depression in primary care.

    PubMed

    Kurian, Benji T; Trivedi, Madhukar H; Grannemann, Bruce D; Claassen, Cynthia A; Daly, Ella J; Sunderajan, Prabha

    2009-01-01

    In 2004, results from The Texas Medication Algorithm Project (TMAP) showed better clinical outcomes for patients whose physicians adhered to a paper-and-pencil algorithm compared to patients who received standard clinical treatment for major depressive disorder (MDD). However, implementation of and fidelity to the treatment algorithm among various providers was observed to be inadequate. A computerized decision support system (CDSS) for the implementation of the TMAP algorithm for depression has since been developed to improve fidelity and adherence to the algorithm. This was a 2-group, parallel design, clinical trial (one patient group receiving MDD treatment from physicians using the CDSS and the other patient group receiving usual care) conducted at 2 separate primary care clinics in Texas from March 2005 through June 2006. Fifty-five patients with MDD (DSM-IV criteria) with no significant difference in disease characteristics were enrolled, 32 of whom were treated by physicians using the CDSS and 23 of whom were treated by physicians using usual care. The study's objective was to evaluate the feasibility and efficacy of implementing a CDSS to assist physicians acutely treating patients with MDD compared to usual care in primary care. Primary efficacy outcomes for depression symptom severity were based on the 17-item Hamilton Depression Rating Scale (HDRS17) evaluated by an independent rater. Patients treated by physicians employing the CDSS had significantly greater symptom reduction, based on the HDRS17, than patients treated with usual care (P < .001). The CDSS algorithm, utilizing measurement-based care, was superior to usual care for patients with MDD in primary care settings. Larger randomized controlled trials are needed to confirm these findings. clinicaltrials.gov Identifier: NCT00551083.

  18. HF Over-the-Horizon Radar System Performance Analysis

    DTIC Science & Technology

    2007-09-01

    [Figure 37: The AN/TPS-71 ROTHR Transmission Array (from [42]).] A project named “terrain mapping” (TMAP) was initiated to improve... application of these ROTHRs is to support counterdrug (CD) aircraft surveillance and interdiction. The immediate operational application of the TMAP

  19. A comparison of guidelines for the treatment of schizophrenia.

    PubMed

    Milner, Karen K; Valenstein, Marcia

    2002-07-01

    Although the clinical and administrative rationales for the use of guidelines in the treatment of schizophrenia are convincing, meaningful implementation has been slow. Guideline characteristics themselves influence whether implementation occurs. The authors examine three widely distributed guidelines and one set of algorithms to compare characteristics that are likely to influence implementation, including their degree of scientific rigor, comprehensiveness, and clinical applicability (ease of use, timeliness, specificity, and ease of operationalizing). The three guidelines are the Expert Consensus Guideline Series' "Treatment of Schizophrenia"; the American Psychiatric Association's "Practice Guideline for the Treatment of Patients With Schizophrenia"; and the Schizophrenia Patient Outcomes Research Team (PORT) treatment recommendations. The algorithms are those of the Texas Medication Algorithm Project (TMAP). The authors outline the strengths of each and suggest how a future guideline might build on these strengths.

  20. Does provider adherence to a treatment guideline change clinical outcomes for patients with bipolar disorder? Results from the Texas Medication Algorithm Project.

    PubMed

    Dennehy, Ellen B; Suppes, Trisha; Rush, A John; Miller, Alexander L; Trivedi, Madhukar H; Crismon, M Lynn; Carmody, Thomas J; Kashner, T Michael

    2005-12-01

    Despite increasing adoption of clinical practice guidelines in psychiatry, there is little measurement of provider implementation of these recommendations and of the resulting impact on clinical outcomes. The current study describes one effort to measure these relationships in a cohort of public sector out-patients with bipolar disorder. Participants were enrolled in the algorithm intervention of the Texas Medication Algorithm Project (TMAP). Study methods and the adherence scoring algorithm have been described elsewhere. The current paper addresses the relationships between patient characteristics, provider experience with the algorithm, provider adherence, and clinical outcomes. Measurement of provider adherence includes evaluation of visit frequency, medication choice and dosing, and response to patient symptoms. An exploratory composite 'adherence by visit' score was developed for these analyses. A total of 1948 visits from 141 subjects were evaluated using a two-stage declining effects model. Providers with more experience using the algorithm tended to adhere less to treatment recommendations. Few patient factors significantly impacted provider adherence. Increased adherence to algorithm recommendations was associated with larger decreases in overall psychiatric symptoms and depressive symptoms over time, but did not impact either immediate or long-term reductions in manic symptoms. Greater provider adherence to treatment guideline recommendations was associated with greater reductions in depressive symptoms and overall psychiatric symptoms over time. Additional research is needed to refine measurement and to further clarify these relationships.
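
    A minimal sketch of how a composite 'adherence by visit' score could combine the components named above (visit frequency, medication choice and dosing, and response to patient symptoms) follows. The binary scoring and equal weights are invented for illustration; the actual scoring algorithm is, per the abstract, described elsewhere.

      # Illustrative composite "adherence by visit" score built from the
      # three components named in the abstract. The binary scoring and
      # equal weights are assumptions, not the study's scoring algorithm.
      def adherence_by_visit(visit_on_schedule, med_and_dose_per_guideline,
                             responded_to_symptoms):
          components = [visit_on_schedule, med_and_dose_per_guideline,
                        responded_to_symptoms]
          return sum(components) / len(components)

      print(adherence_by_visit(True, True, False))   # 2 of 3 components met: ~0.67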

  1. The Texas Medication Algorithm Project antipsychotic algorithm for schizophrenia: 2003 update.

    PubMed

    Miller, Alexander L; Hall, Catherine S; Buchanan, Robert W; Buckley, Peter F; Chiles, John A; Conley, Robert R; Crismon, M Lynn; Ereshefsky, Larry; Essock, Susan M; Finnerty, Molly; Marder, Stephen R; Miller, Del D; McEvoy, Joseph P; Rush, A John; Saeed, Sy A; Schooler, Nina R; Shon, Steven P; Stroup, Scott; Tarin-Godoy, Bernardo

    2004-04-01

    The Texas Medication Algorithm Project (TMAP) has been a public-academic collaboration in which guidelines for medication treatment of schizophrenia, bipolar disorder, and major depressive disorder were used in selected public outpatient clinics in Texas. Subsequently, these algorithms were implemented throughout Texas and are being used in other states. Guidelines require updating when significant new evidence emerges; the antipsychotic algorithm for schizophrenia was last updated in 1999. This article reports the recommendations developed in 2002 and 2003 by a group of experts, clinicians, and administrators. A conference in January 2002 began the update process. Before the conference, experts in the pharmacologic treatment of schizophrenia, clinicians, and administrators reviewed literature topics and prepared presentations. Topics included ziprasidone's inclusion in the algorithm, the number of antipsychotics tried before clozapine, and the role of first generation antipsychotics. Data were rated according to Agency for Healthcare Research and Quality criteria. After discussing the presentations, conference attendees arrived at consensus recommendations. Consideration of aripiprazole's inclusion was subsequently handled by electronic communications. The antipsychotic algorithm for schizophrenia was updated to include ziprasidone and aripiprazole among the first-line agents. Relative to the prior algorithm, the number of stages before clozapine was reduced. First generation antipsychotics were included but not as first-line choices. For patients refusing or not responding to clozapine and clozapine augmentation, preference was given to trying monotherapy with another antipsychotic before resorting to antipsychotic combinations. Consensus on algorithm revisions was achieved, but only further well-controlled research will answer many key questions about sequence and type of medication treatments of schizophrenia.

  2. TMAP/CKAP2 is essential for proper chromosome segregation.

    PubMed

    Hong, Kyung Uk; Kim, Eunhee; Bae, Chang-Dae; Park, Joobae

    2009-01-15

    Tumor-associated microtubule-associated protein (TMAP), also known as cytoskeleton associated protein 2 (CKAP2), is a novel mitotic spindle-associated protein which is frequently up-regulated in various malignancies. However, its cellular functions remain unknown. Previous reports suggested that the cellular functions of TMAP/CKAP2 pertain to regulation of the dynamics and assembly of the mitotic spindle. To investigate its role in mitosis, we studied the effects of siRNA-mediated depletion of TMAP/CKAP2 in cultured mammalian cells. Unexpectedly, TMAP/CKAP2 knockdown did not result in significant alterations of the spindle apparatus. However, TMAP/CKAP2-depleted cells often exhibited abnormal nuclear morphologies, which were accompanied by abnormal organization of the nuclear lamina, and chromatin bridge formation between two daughter cell nuclei. Time-lapse video microscopy revealed that the changes in nuclear morphology and chromatin bridge formations observed in TMAP/CKAP2-depleted cells are the result of defects in chromosome segregation. Consistent with this, the spindle checkpoint activity was significantly reduced in TMAP/CKAP2-depleted cells. Moreover, chromosome missegregation induced by depletion of TMAP/CKAP2 ultimately resulted in reduced cell viability and increased chromosomal instability. Our present findings demonstrate that TMAP/CKAP2 is essential for proper chromosome segregation and for maintaining genomic stability.

  3. Spectroscopic studies on the interaction of a water-soluble cationic porphyrin with proteins

    NASA Astrophysics Data System (ADS)

    Ma, Hong-Min; Chen, Xin; Zhang, Nuo; Han, Yan-Yan; Wu, Dan; Du, Bin; Wei, Qin

    2009-04-01

    The interaction of a water-soluble cationic porphyrin, meso-tetrakis (4-N,N,N-trimethylanilinium) porphyrin (TMAP), with two proteins, bovine serum albumin (BSA) and human serum albumin (HSA), was studied by UV-vis absorption spectroscopy, fluorescence spectroscopy, fluorescence anisotropy and synchronous fluorescence spectroscopy in neutral aqueous solutions. Free base TMAP bound to proteins as monomers and no aggregation was observed. The binding of TMAP quenched the fluorescence of the protein. On the contrary, the fluorescence of TMAP was enhanced and the fluorescence anisotropy increased due to the binding. The direct static binding mechanism could account for the quenching by TMAP and the binding constants were calculated. TMAP showed a higher quenching efficiency and binding constant of HSA than BSA. The binding of TMAP had no obvious effect on the molecular conformation of the protein. There was only one binding site for TMAP and it was located on the surface of the protein molecule. Electrostatic force played an important role in the binding due to the opposite charges on porphyrin and the proteins.

  4. Spectroscopic studies on the interaction of a water-soluble cationic porphyrin with proteins.

    PubMed

    Ma, Hong-Min; Chen, Xin; Zhang, Nuo; Han, Yan-Yan; Wu, Dan; Du, Bin; Wei, Qin

    2009-04-01

    The interaction of a water-soluble cationic porphyrin, meso-tetrakis (4-N,N,N-trimethylanilinium) porphyrin (TMAP), with two proteins, bovine serum albumin (BSA) and human serum albumin (HSA), was studied by UV-vis absorption spectroscopy, fluorescence spectroscopy, fluorescence anisotropy and synchronous fluorescence spectroscopy in neutral aqueous solutions. Free base TMAP bound to proteins as monomers and no aggregation was observed. The binding of TMAP quenched the fluorescence of the protein. On the contrary, the fluorescence of TMAP was enhanced and the fluorescence anisotropy increased due to the binding. The direct static binding mechanism could account for the quenching by TMAP and the binding constants were calculated. TMAP showed a higher quenching efficiency and binding constant of HSA than BSA. The binding of TMAP had no obvious effect on the molecular conformation of the protein. There was only one binding site for TMAP and it was located on the surface of the protein molecule. Electrostatic force played an important role in the binding due to the opposite charges on porphyrin and the proteins.
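
    Binding constants from quenching data of this kind are commonly extracted with a Stern-Volmer analysis (F0/F = 1 + Ksv[Q]). The abstracts do not state the fitting procedure actually used, so the sketch below, on invented data, illustrates only the standard approach.

      # Hedged sketch: estimating a quenching constant via the Stern-Volmer
      # relation F0/F = 1 + Ksv*[Q]. The intensities below are invented.
      import numpy as np

      q = np.array([0.0, 1.0, 2.0, 4.0, 8.0]) * 1e-6   # quencher conc. (mol/L)
      f = np.array([100.0, 91.0, 83.5, 71.5, 55.5])    # fluorescence intensity
      ksv, intercept = np.polyfit(q, f[0] / f - 1.0, 1)  # slope of F0/F - 1 vs [Q]
      print(f"Ksv = {ksv:.3g} L/mol")                   # ~1e5 L/mol for these data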

  5. Interaction of an Fe derivative of TMAP (Fe(TMAP)OAc) with DNA in comparison with free-base TMAP.

    PubMed

    Ghaderi, Masoumeh; Bathaie, S Zahra; Saboury, Ali-Akbar; Sharghi, Hashem; Tangestaninejad, Shahram

    2007-07-01

    We investigated the interaction of meso-tetrakis (N-para-methylanilinium) porphyrin (TMAP) in its free base and Fe(II) form (Fe(TMAP)OAc), a new derivative, with high molecular weight DNA at different ionic strengths, using various spectroscopic methods and microcalorimetry. The data obtained by spectrophotometry, circular dichroism (CD), fluorescence quenching and resonance light scattering (RLS) demonstrate that TMAP association with DNA is via outside binding in a self-stacking manner, accompanied by "end-on" type complex formation at low ionic strength. However, in the case of Fe(TMAP)OAc, the predominant mode of interaction is groove binding, and with increasing DNA concentration, unstable stacking-type aggregates are formed. In addition, isothermal titration calorimetric measurements indicate that the binding of the porphyrins to DNA is exothermic, but the exothermicity for the metal derivative of the porphyrin is less than for the free base. This confirms the formation of a more organized aggregate of TMAP on the DNA surface. Interactions of both porphyrins with DNA show high sensitivity to ionic strength. On addition of salt, the downfield CD signal of TMAP aggregates shifts to a higher wavelength, which indicates some changes in the position of the aggregates. In the case of Fe(TMAP)OAc, addition of salt changes the mode of binding from groove binding to outside binding with self-stacking, accompanied by major changes in the CD spectra, possibly indicating the formation of a "face-on" type complex.

  6. A cytoskeleton-associated protein, TMAP/CKAP2, is involved in the proliferation of human foreskin fibroblasts.

    PubMed

    Jeon, Sang-Min; Choi, Bongkun; Hong, Kyung Uk; Kim, Eunhee; Seong, Yeon-Sun; Bae, Chang-Dae; Park, Joobae

    2006-09-15

    Previously, we reported the cloning of a cytoskeleton-associated protein, TMAP/CKAP2, which was up-regulated in primary human gastric cancers. Although TMAP/CKAP2 has been found to be expressed in most cancer cell lines examined, the function of CKAP2 is not known. In this study, we found that TMAP/CKAP2 was not expressed in G0/G1-arrested human foreskin fibroblasts (HFFs), but that it was expressed in actively dividing cells. After initiating the cell cycle, TMAP/CKAP2 levels remained low throughout most of the G1 phase, but gradually increased between late G1 and G2/M. Knockdown of TMAP/CKAP2 reduced pRB phosphorylation and increased p27 expression, and consequently reduced HFF proliferation, whereas constitutive TMAP/CKAP2 expression increased pRB phosphorylation and enhanced proliferation. Our results show that this novel cytoskeleton-associated protein is expressed in a cell cycle-dependent manner and that it is involved in cell proliferation.

  7. The Texas Medication Algorithm Project antipsychotic algorithm for schizophrenia: 2006 update.

    PubMed

    Moore, Troy A; Buchanan, Robert W; Buckley, Peter F; Chiles, John A; Conley, Robert R; Crismon, M Lynn; Essock, Susan M; Finnerty, Molly; Marder, Stephen R; Miller, Del D; McEvoy, Joseph P; Robinson, Delbert G; Schooler, Nina R; Shon, Steven P; Stroup, T Scott; Miller, Alexander L

    2007-11-01

    A panel of academic psychiatrists and pharmacists, clinicians from the Texas public mental health system, advocates, and consumers met in June 2006 in Dallas, Tex., to review recent evidence in the pharmacologic treatment of schizophrenia. The goal of the consensus conference was to update and revise the Texas Medication Algorithm Project (TMAP) algorithm for schizophrenia used in the Texas Implementation of Medication Algorithms, a statewide quality assurance program for treatment of major psychiatric illness. Four questions were identified via premeeting teleconferences. (1) Should antipsychotic treatment of first-episode schizophrenia be different from that of multiepisode schizophrenia? (2) In which algorithm stages should first-generation antipsychotics (FGAs) be an option? (3) How many antipsychotic trials should precede a clozapine trial? (4) What is the status of augmentation strategies for clozapine? Subgroups reviewed the evidence in each area and presented their findings at the conference. The algorithm was updated to incorporate the following recommendations. (1) Persons with first-episode schizophrenia typically require lower antipsychotic doses and are more sensitive to side effects such as weight gain and extrapyramidal symptoms (group consensus). Second-generation antipsychotics (SGAs) are preferred for treatment of first-episode schizophrenia (majority opinion). (2) FGAs should be included in algorithm stages after first episode that include SGAs other than clozapine as options (group consensus). (3) The recommended number of trials of other antipsychotics that should precede a clozapine trial is 2, but earlier use of clozapine should be considered in the presence of persistent problems such as suicidality, comorbid violence, and substance abuse (group consensus). (4) Augmentation is reasonable for persons with inadequate response to clozapine, but published results on augmenting agents have not identified replicable positive results (group consensus). These recommendations are meant to provide a framework for clinical decision making, not to replace clinical judgment. As with any algorithm, treatment practices will evolve beyond the recommendations of this consensus conference as new evidence and additional medications become available.

  8. TMAP - A Versatile Mobile Robot

    NASA Astrophysics Data System (ADS)

    Weiss, Joel A.; Simmons, Richard K.

    1989-03-01

    TMAP, the Teleoperated Mobile All-purpose Platform, provides the Army with a low cost, light weight, flexibly designed, modularly expandable platform for support of maneuver forces and light infantry units. The highly mobile, four wheel drive, diesel-hydraulic platform is controllable at distances of up to 4 km from a portable operator control unit using either fiber optic or RF control links. The Martin Marietta TMAP system is based on a hierarchical task decomposition Real-time Control System architecture that readily supports interchange of mission packages and provides the capability for simple incorporation of supervisory control concepts leading to increased system autonomy and resulting force multiplication. TMAP has been designed to support a variety of missions including target designation, anti-armor, anti-air, countermine, and reconnaissance/surveillance. As a target designation system, TMAP will provide the soldier with increased survivability and effectiveness by providing substantial combat standoff and the firepower effectiveness of several manual designator operators. Force-on-force analysis of simulated TMAP engagements indicates that TMAP should provide significant force multiplication for the Army in Air-Land Battle 2000.

  9. A Correlation of Welding Solidification Parameters to Weld Macrostructure

    DTIC Science & Technology

    1992-06-18

    Excerpt (Fortran source):
    C     ... BY THE START PROGRAMS.
          PROGRAM GVPLOT
          DIMENSION TEMP(27,27,8),ZMELT(27,27),GRAD(27,27),V(27,27)
          DIMENSION TMAP(27,8),TMAP2(17,5),TEMPIMP(5...
          DATA GRAD /729*0./
          DATA TMAP /216*0.0/
          TMELT = 1770.0
       79 READ(1) TIME
          READ(1) (((TEMP(I,J,K),I=1,27),J=1,27),K=1,8)
          READ(1) VTORCH
    C     ACQUIRE A ... MAP OF MAX TEMPERATURES IN (X,Z) IN ORDER TO
    C     DEFINE THE FUSION ZONE
          DO 300 I=1,27
          DO 300 J=1,27
          DO 300 K=1,8
          IF (TEMP(I,J,K).GT.TMAP(I,9-K)) TMAP(I...
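
    The recoverable logic of the fragment (accumulate the maximum temperature at each (x, z) location over time, then threshold at the melt temperature to outline the fusion zone) reads more easily as a short sketch. Array shapes and TMELT follow the fragment; the snapshot data and loop count are invented.

      # Sketch of the fragment's recoverable logic: keep a running maximum
      # temperature per (x, z) location across time steps, then mark points
      # that ever exceeded the melt temperature as the fusion zone.
      import numpy as np

      TMELT = 1770.0
      rng = np.random.default_rng(0)
      tmap = np.zeros((27, 8))                 # running maxima, like DATA TMAP

      for _ in range(10):                      # stand-in for the READ loop
          temp = rng.uniform(300.0, 2000.0, size=(27, 27, 8))   # TEMP(I,J,K)
          tmap = np.maximum(tmap, temp.max(axis=1))             # max over J
      fusion_zone = tmap > TMELT
      print(int(fusion_zone.sum()), "of", fusion_zone.size, "(x, z) points melted")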

  10. Specific primary sequence requirements for Aurora B kinase-mediated phosphorylation and subcellular localization of TMAP during mitosis.

    PubMed

    Kim, Hyun-Jun; Kwon, Hye-Rim; Bae, Chang-Dae; Park, Joobae; Hong, Kyung U

    2010-05-15

    During mitosis, regulation of protein structures and functions by phosphorylation plays critical roles in orchestrating a series of complex events essential for the cell division process. Tumor-associated microtubule-associated protein (TMAP), also known as cytoskeleton-associated protein 2 (CKAP2), is a novel player in spindle assembly and chromosome segregation. We have previously reported that TMAP is phosphorylated at multiple residues specifically during mitosis. However, the mechanisms and functional importance of phosphorylation at most of the sites identified are currently unknown. Here, we report that TMAP is a novel substrate of the Aurora B kinase. Ser627 of TMAP was specifically phosphorylated by Aurora B both in vitro and in vivo. Ser627 and neighboring conserved residues were strictly required for efficient phosphorylation of TMAP by Aurora B, as even minor amino acid substitutions of the phosphorylation motif significantly diminished the efficiency of the substrate phosphorylation. Nearly all mutations at the phosphorylation motif had dramatic effects on the subcellular localization of TMAP. Instead of being localized to the chromosome region during late mitosis, the mutants remained associated with microtubules and centrosomes throughout mitosis. However, the changes in the subcellular localization of these mutants could not be completely explained by the phosphorylation status on Ser627. Our findings suggest that the motif surrounding Ser627 (residues 625-630, RRSRRL) is a critical part of a functionally important sequence motif which not only governs the kinase-substrate recognition, but also regulates the subcellular localization of TMAP during mitosis.

  11. Hydrogen release from 800 MeV proton-irradiated tungsten

    NASA Astrophysics Data System (ADS)

    Oliver, B. M.; Venhaus, T. J.; Causey, R. A.; Garner, F. A.; Maloy, S. A.

    2002-12-01

    Tungsten irradiated in spallation neutron sources, such as those proposed for the accelerator production of tritium (APT) project, will contain large quantities of generated helium and hydrogen gas. Tungsten used in proposed fusion reactors will also be exposed to neutrons, and the generated protium will be accompanied by deuterium and tritium diffusing in from the plasma-facing surface. The release kinetics of these gases during various off-normal scenarios involving loss of coolant and afterheat-induced rises in temperature are of particular interest for both applications. To determine the release kinetics of hydrogen from tungsten, tungsten rods irradiated with 800 MeV protons in the Los Alamos Neutron Science Center (LANSCE) to high exposures as part of the APT project have been examined. Hydrogen evolution from the tungsten has been measured using a dedicated mass-spectrometer system by subjecting the specimens to an essentially linear temperature ramp from ~300 to ~1500 K. Release profiles are compared with predictions obtained using the Tritium Migration Analysis Program (TMAP4). The measurements show that for high proton doses, the majority of the hydrogen is released gradually, starting at about 900 K and reaching a maximum at about 1400 K, where it drops fairly rapidly. Comparisons with TMAP show quite reasonable agreement using a trap energy of 1.4 eV and a trap density of ~7%. There is a small additional release fraction occurring at ~550 K, which is believed to be associated with low-energy trapping at or near the surface, and, therefore, was not included in the bulk TMAP model.

  12. Functional Importance of the Anaphase-Promoting Complex-Cdh1-Mediated Degradation of TMAP/CKAP2 in Regulation of Spindle Function and Cytokinesis

    PubMed Central

    Hong, Kyung Uk; Park, Young Soo; Seong, Yeon-Sun; Kang, Dongmin; Bae, Chang-Dae; Park, Joobae

    2007-01-01

    Cytoskeleton-associated protein 2 (CKAP2), also known as tumor-associated microtubule-associated protein (TMAP), is a novel microtubule-associated protein that is frequently upregulated in various malignancies. However, its cellular functions remain unknown. A previous study has shown that its protein level begins to increase during G1/S and peaks at G2/M, after which it decreases abruptly. Ectopic overexpression of TMAP/CKAP2 induced microtubule bundling related to increased microtubule stability. TMAP/CKAP2 overexpression also resulted in cell cycle arrest during mitosis due to a defect in centrosome separation and subsequent formation of a monopolar spindle. We also show that degradation of TMAP/CKAP2 during mitotic exit is mediated by the anaphase-promoting complex bound to Cdh1 and that the KEN box motif near the N terminus is necessary for its destruction. Compared to the wild type, expression of a nondegradable mutant of TMAP/CKAP2 significantly increased the occurrence of spindle defects and cytokinesis failure. These results suggest that TMAP/CKAP2 plays a role in the assembly and maintenance of mitotic spindles, presumably by regulating microtubule dynamics, and its destruction during mitotic exit serves an important role in the completion of cytokinesis and in the maintenance of spindle bipolarity in the next mitosis. PMID:17339342

  13. Functional importance of the anaphase-promoting complex-Cdh1-mediated degradation of TMAP/CKAP2 in regulation of spindle function and cytokinesis.

    PubMed

    Hong, Kyung Uk; Park, Young Soo; Seong, Yeon-Sun; Kang, Dongmin; Bae, Chang-Dae; Park, Joobae

    2007-05-01

    Cytoskeleton-associated protein 2 (CKAP2), also known as tumor-associated microtubule-associated protein (TMAP), is a novel microtubule-associated protein that is frequently upregulated in various malignancies. However, its cellular functions remain unknown. A previous study has shown that its protein level begins to increase during G1/S and peaks at G2/M, after which it decreases abruptly. Ectopic overexpression of TMAP/CKAP2 induced microtubule bundling related to increased microtubule stability. TMAP/CKAP2 overexpression also resulted in cell cycle arrest during mitosis due to a defect in centrosome separation and subsequent formation of a monopolar spindle. We also show that degradation of TMAP/CKAP2 during mitotic exit is mediated by the anaphase-promoting complex bound to Cdh1 and that the KEN box motif near the N terminus is necessary for its destruction. Compared to the wild type, expression of a nondegradable mutant of TMAP/CKAP2 significantly increased the occurrence of spindle defects and cytokinesis failure. These results suggest that TMAP/CKAP2 plays a role in the assembly and maintenance of mitotic spindles, presumably by regulating microtubule dynamics, and its destruction during mitotic exit serves an important role in the completion of cytokinesis and in the maintenance of spindle bipolarity in the next mitosis.

  14. Transient phosphorylation of tumor associated microtubule associated protein (TMAP)/cytoskeleton associated protein 2 (CKAP2) at Thr-596 during early phases of mitosis.

    PubMed

    Hong, Kyung Uk; Choi, Yong-Bock; Lee, Jung-Hwa; Kim, Hyun-Jun; Kwon, Hye-Rim; Seong, Yeon-Sun; Kim, Heung Tae; Park, Joobae; Bae, Chang-Dae; Hong, Kyeong-Man

    2008-08-31

    Tumor associated microtubule associated protein (TMAP), also known as cytoskeleton associated protein 2 (CKAP2), is a mitotic spindle-associated protein whose expression is cell cycle-regulated and also frequently deregulated in cancer cells. Two monoclonal antibodies (mAbs) against TMAP/CKAP2 were produced: B-1-13 and D-12-3. Interestingly, the reactivity of mAb D-12-3 to TMAP/CKAP2 was markedly decreased specifically in mitotic cell lysate. The epitope mapping study showed that mAb D-12-3 recognizes the amino acid sequence between residues 569 and 625 and that phosphorylation at T596 completely abolishes the reactivity of the antibody, suggesting that the differential reactivity originates from the phosphorylation status at T596. Immunofluorescence staining showed that mAb D-12-3 fails to detect TMAP/CKAP2 in mitotic cells between prophase and metaphase, but the staining becomes evident again in anaphase, suggesting that phosphorylation at T596 occurs transiently during early phases of mitosis. These results suggest that the cellular functions of TMAP/CKAP2 might be regulated by timely phosphorylation and dephosphorylation during the course of mitosis.

  15. Transient phosphorylation of tumor associated microtubule associated protein (TMAP)/cytoskeleton associated protein 2 (CKAP2) at Thr-596 during early phases of mitosis

    PubMed Central

    Hong, Kyung Uk; Choi, Yong-Bock; Lee, Jung-Hwa; Kim, Hyun-Jun; Kwon, Hye-Rim; Seong, Yeon-Sun; Kim, Heung Tae; Park, Joobae

    2008-01-01

    Tumor associated microtubule associated protein (TMAP), also known as cytoskeleton associated protein 2 (CKAP2), is a mitotic spindle-associated protein whose expression is cell cycle-regulated and also frequently deregulated in cancer cells. Two monoclonal antibodies (mAbs) against TMAP/CKAP2 were produced: B-1-13 and D-12-3. Interestingly, the reactivity of mAb D-12-3 to TMAP/CKAP2 was markedly decreased specifically in mitotic cell lysate. The epitope mapping study showed that mAb D-12-3 recognizes the amino acid sequence between residues 569 and 625 and that phosphorylation at T596 completely abolishes the reactivity of the antibody, suggesting that the differential reactivity originates from the phosphorylation status at T596. Immunofluorescence staining showed that mAb D-12-3 fails to detect TMAP/CKAP2 in mitotic cells between prophase and metaphase, but the staining becomes evident again in anaphase, suggesting that phosphorylation at T596 occurs transiently during early phases of mitosis. These results suggest that the cellular functions of TMAP/CKAP2 might be regulated by timely phosphorylation and dephosphorylation during the course of mitosis. PMID:18779650

  16. [Study on the aggregation behavior of cationic porphyrins and their interaction with ctDNA].

    PubMed

    Ma, Hong-Min; Chen, Xin; Sun, Shu-Ting; Zhang, Li-Na; Wu, Dan; Zhu, Pei-Hua; Li, Yan; Du, Bin; Wei, Qin

    2009-02-01

    Interest in the interaction between cationic porphyrins, particularly derivatives of meso-tetra(N-methylpyridinium-4-yl) porphyrin (TMPyP), and DNA abounds because they are versatile DNA-binding agents that could find application in photodynamic therapy, cancer detection, artificial nucleases, virus inhibition and so on. The interaction of two water-soluble cationic porphyrins, meso-tetrakis(4-N,N,N-trimethylanilinium) porphyrin (TMAP) and 5-phenyl-10,15,20-tris[4-(N-methyl)pyridinium]porphyrin (TriMPyP), with calf thymus DNA (ctDNA) was studied by UV-Vis absorption spectroscopy, fluorescence spectroscopy and the resonance light scattering technique. TriMPyP forms aggregates in water owing to its molecular asymmetry, while TMAP exists as monomers. At lower concentrations of ctDNA (R > 1, R = c(TMAP)/c(DNA base pairs)), the interaction of TMAP with DNA leads to significant hypochromicity and a bathochromic shift of the absorption spectra. The fluorescence of TMAP was quenched while its resonance light scattering signals were enhanced; the extent of this enhancement is small, however, so the degree of TMAP aggregation is not very high. These observations indicate self-stacking of TMAP along the DNA surface. At higher concentrations of ctDNA (R < 1), TMAP association with DNA is via outside binding, accompanied by a hyperchromic effect and fluorescence enhancement, while the resonance light scattering signals are reduced. DNA addition decreases the fluorescence intensity of TriMPyP and shifts the peak to higher wavelengths (red shift). The interaction with DNA promotes the aggregation of TriMPyP, and no simple outside binding is observed even at higher concentrations of ctDNA. The steric effect of molecular distortion constrains intercalation or further binding to DNA. The effect of ionic strength on the interaction was investigated at two DNA concentrations, 1.2 and 24.0 micromol/L, for TMAP. The interactions of both porphyrins with DNA show high sensitivity to ionic strength. On addition of NaCl, electrostatic attraction is decreased, resulting in a change of binding mode.

  17. Evolving Intelligence, Surveillance & Reconnaissance (ISR) for Air Force Cyber Defense

    DTIC Science & Technology

    2013-02-14

    The Telecommunications Monitoring and Assessment Program (TMAP) notes in Air Force Instruction (AFI) 10-712 that “adversaries can easily monitor (unclassified) systems to... Instruction (AFI) 10-712, Telecommunications Monitoring And Assessment Program (TMAP), 2011, 4. 23. Lt Col Hugh M. Ragland, interview with author... Monitoring And Assessment Program (TMAP), 8 June 2011. Brenner, Carl N. Col, USAF. NASIC Air & Cyber Analysis Group/CC. Interview by the author. 29

  18. Transbilayer transport of a propyltrimethylammonium derivative of diphenylhexatriene (TMAP-DPH) in bovine blood platelets and adrenal chromaffin cells.

    PubMed

    Kitagawa, Shuji; Tachikawa, Eiichi; Kashimoto, Takashi

    2002-12-01

    The membrane fluorescent probe N-((4-(6-phenyl-1,3,5-hexatrienyl)phenyl)propyl)trimethylammonium (TMAP-DPH) has an additional three-carbon spacer between the fluorophore and the trimethylammonium substituent of 1-(4-trimethylammoniumphenyl)-6-phenyl-1,3,5-hexatriene (TMA-DPH). As a basic study to clarify the transport mechanism of amphiphilic quaternary ammoniums, we characterized the transbilayer transport of TMAP-DPH in bovine blood platelets and bovine adrenal chromaffin cells using the albumin extraction method, and compared its inward transport rates with those of TMA-DPH. TMAP-DPH crossed into the cytoplasmic leaflets of the membranes more slowly than TMA-DPH after rapid binding to the outer halves of the plasma membranes. The transport rate depended markedly on temperature. The time to reach the half-maximal incorporated amount of TMAP-DPH increased threefold as the concentration increased from 0.2 to 1.5 microM. The transport was significantly stimulated by various membrane perturbations, such as modification of sulfhydryl groups by N-ethylmaleimide and a benzyl alcohol-induced increase in the fluidity of the lipid bilayer. The saturation phenomenon suggests the presence of a regulatory process in the transbilayer transport of TMAP-DPH.

  19. Knowledge Representation for Decision Making Agents

    DTIC Science & Technology

    2013-07-15

    knowledge map. This knowledge map is a dictionary data structure called tmap in the code. It represents a network of locations with a number [0,1... fillRandom(): Informed initial tmap distribution (randomly generated per node) with belief one. • initialBelief = 3 uses fillCenter(): normal... triggered on AllMyFMsHaveBeenInitialized. 2. Executes main.py • Initializes knowledge map labeled tmap. • Calls initialize search() – resets distanceTot and
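
    The dictionary-backed belief map described in the excerpt can be sketched in a few lines. Only the tmap name and the [0, 1] belief convention come from the excerpt; the node labels, the normalization in fill_random, and the delta-style fill_center are assumptions (the excerpt's fillCenter appears to use a normal distribution, but its details are elided).

      # Sketch of a dictionary-backed knowledge map ("tmap") holding one
      # belief value in [0, 1] per location node, per the excerpt above.
      # Node labels, normalization, and the delta-style fill_center are
      # assumptions, not the report's actual initializers.
      import random

      def fill_random(nodes):
          """Random initial belief per node, normalized to sum to one."""
          tmap = {n: random.random() for n in nodes}
          total = sum(tmap.values())
          return {n: v / total for n, v in tmap.items()}

      def fill_center(nodes, center):
          """All belief concentrated on a single center node."""
          return {n: 1.0 if n == center else 0.0 for n in nodes}

      nodes = ["A", "B", "C", "D"]
      print(fill_random(nodes))
      print(fill_center(nodes, center="C"))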

  20. Enhancement of the Daytime MODIS Based Icing Potential Using NOGAPS and COAMPS Model Data

    DTIC Science & Technology

    2007-09-01

    curves was performed on the various combinations to determine which product combination gave the best results. Two different available Tmap...(Alexander and CIP) were used and had mixed results. Contrary to what Cooper (2006) found, where weighting RH and the Alexander Tmap produced the best results...this study found that equal weighting of T and RH and the CIP Tmap produced the same or better results than weighting RH. This study also found

  1. Transferrin-derived synthetic peptide induces highly conserved pro-inflammatory responses of macrophages.

    PubMed

    Haddad, George; Belosevic, Miodrag

    2009-02-01

    We examined the induction of macrophage pro-inflammatory responses by a transferrin-derived synthetic peptide originally identified following elastase digestion of transferrin from different species (murine, bovine, human N-lobe and goldfish). Mass spectrometry analysis of elastase-digested murine transferrin identified a 31 amino acid peptide located in the N2 sub-domain of the transferrin N-lobe, which we named TMAP. TMAP was synthetically produced and shown by quantitative PCR to induce a number of pro-inflammatory genes. TMAP induced chemotaxis, a potent nitric oxide response, and TNF-alpha secretion in different macrophage populations: P388D1 macrophage-like cells, mouse peritoneal macrophages, mouse bone marrow-derived macrophages (BMDM) and goldfish macrophages. Treatment of BMDM cultures with TMAP stimulated the production of nine cytokines and chemokines (IL-6, MCP-5, MIP-1 alpha, MIP-1 gamma, MIP-2, GCSF, KC, VEGF, and RANTES), as measured using a cytokine antibody array and confirmed by Western blot. Our results indicate that the transferrin-derived peptide TMAP is an immunomodulating molecule capable of inducing pro-inflammatory responses in lower and higher vertebrates.

  2. Perturbations in DNA structure upon interaction with porphyrins revealed by chemical probes, DNA footprinting and molecular modelling.

    PubMed

    Ford, K G; Neidle, S

    1995-06-01

    The interactions of several porphyrins with a 74 base-pair DNA sequence have been examined by footprinting and chemical protection methods. Tetra-(4-N-methyl-(pyridyl)) porphyrin (TMPy), two of its metal complexes and tetra-(4-trimethylanilinium) porphyrin (TMAP) bind to closely similar AT-rich sequences. The three TMPy ligands produce modest changes in DNA structure and base accessibility on binding, in contrast to the large-scale conformational changes observed with TMAP. Molecular modelling studies have been performed on TMPy and TMAP bound in the AT-rich minor groove of an oligonucleotide. These have shown that significant structural change is needed to accommodate the bulky trimethyl substituent groups of TMAP, in contrast to the facile minor groove fit of TMPy.

  3. Compressive Properties of Extruded Polytetrafluoroethylene

    DTIC Science & Technology

    2007-07-01

    against equivalent temperature (Tmap) at a single strain rate (ε̇map). This is a pragmatic, empirically based linearization and extension to large strains...one of the strain rates that was used in the experimental program, and in this case two rates were used: 0.1 s⁻¹ and 3200 s⁻¹. The value Tmap is...defined as $T_{map} = T_{exp} + A\left(\log \dot{\varepsilon}_{map} - \log \dot{\varepsilon}_{exp}\right)$ (11), where the subscript exp indicates the experimental values of strain rate and temperature. A
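
    Equation (11) above is just a logarithmic shift of the experimental temperature by the ratio of strain rates. A minimal worked sketch follows; the shift coefficient A is invented for illustration, while the two strain rates are the ones quoted in the excerpt.

    ```python
    import math

    def t_map(t_exp, a, rate_map, rate_exp):
        """Equivalent temperature per Eq. (11): shift T_exp by A times the
        difference of the log10 strain rates (log base is an assumption)."""
        return t_exp + a * (math.log10(rate_map) - math.log10(rate_exp))

    # Illustrative numbers only; A = 8 degC per decade is invented.
    print(t_map(t_exp=20.0, a=8.0, rate_map=0.1, rate_exp=3200.0))  # ~ -16.0 degC
    ```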

  4. Ada Integrated Environment III Computer Program Development Specification. Volume III. Ada Optimizing Compiler.

    DTIC Science & Technology

    1981-12-01

    file.library-unit(.subunit).SYMAP Statement Map: library-file.library-unit(.subunit).SMAP Type Map: library-file.library-unit(.subunit).TMAP The library...generator SYMAP Symbol Map code generator SMAP Updated Statement Map code generator TMAP Type Map code generator A.3.5 The PUNIT Command The PUNIT...Core.Stmtmap) NAME Tmap (Core.Typemap) END Example A-3 Compiler Command Stream for the Code Generator Texas Instruments A-5 Ada Optimizing Compiler

  5. Clinical evaluation of MR temperature monitoring of laser-induced thermotherapy in human liver using the proton-resonance-frequency method and predictive models of cell death.

    PubMed

    Kickhefel, Antje; Rosenberg, Christian; Weiss, Clifford R; Rempp, Hansjörg; Roland, Joerg; Schick, Fritz; Hosten, Norbert

    2011-03-01

    To assess the feasibility, precision, and accuracy of real-time temperature mapping (TMap) during laser-induced thermotherapy (LITT) of the liver in clinical practice, using a gradient echo (GRE) sequence and the proton resonance frequency (PRF) method. LITT was performed on 34 lesions in 18 patients with simultaneous real-time visualization of relative temperature changes. Correlative contrast-enhanced T1-weighted magnetic resonance (MR) images of the liver were acquired after treatment using the same slice positions and angulations as the TMap images acquired during LITT. For each slice, TMap and follow-up images were registered for comparison. Afterwards, segmentation based on temperature (T) >52°C on the TMap and on necrosis seen on follow-up images was performed. These segmented structures were overlaid and divided into zones where the TMap was found to either over- or underestimate necrosis on the postcontrast images. Regions with T>52°C after 20 minutes were defined as necrotic tissue based on data from two different thermal dose models. The average intersecting region of TMap and necrotic zone was 87% ± 5%, the overestimated region 13% ± 4%, and the underestimated region 13% ± 5%. This study demonstrates that MR temperature mapping appears reasonably capable of predicting tissue necrosis by indicating regions with temperatures greater than 52°C and could be used to monitor and adjust thermal therapy appropriately during treatment. Copyright © 2011 Wiley-Liss, Inc.
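
    For readers unfamiliar with the PRF method mentioned above, the temperature-change map is conventionally derived from the phase difference of two GRE acquisitions. The sketch below uses standard literature constants (PRF coefficient ≈ −0.01 ppm/°C) and assumed B0 and TE values; none of these parameters, nor the helper names, come from this study.

    ```python
    import numpy as np

    # Sketch of PRF-shift thermometry: a relative temperature-change map from
    # the phase difference of two gradient-echo images. All constants below
    # are standard/assumed values, not parameters reported in this study.
    ALPHA = -0.01e-6             # PRF thermal coefficient, per degC (literature)
    GAMMA = 2 * np.pi * 42.58e6  # proton gyromagnetic ratio, rad/s/T
    B0 = 1.5                     # field strength, T (assumed)
    TE = 0.015                   # echo time, s (assumed)

    def delta_t_map(phase_hot, phase_ref):
        """Relative temperature change (degC) from GRE phase images (radians)."""
        dphi = np.angle(np.exp(1j * (phase_hot - phase_ref)))  # wrap to (-pi, pi]
        return dphi / (GAMMA * ALPHA * B0 * TE)

    def necrosis_overlap(dt, necrosis_mask, baseline_c=37.0, threshold_c=52.0):
        """Fraction of the T > 52 degC region intersecting follow-up necrosis."""
        hot = (baseline_c + dt) > threshold_c
        return (hot & necrosis_mask).sum() / max(hot.sum(), 1)
    ```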

  6. Association between temporal mean arterial pressure and brachial noninvasive blood pressure during shoulder surgery in the beach chair position during general anesthesia.

    PubMed

    Triplet, Jacob J; Lonetta, Christopher M; Everding, Nathan G; Moor, Molly A; Levy, Jonathan C

    2015-01-01

    Estimation of cerebral perfusion pressure during elective shoulder surgery in the beach chair position is regularly performed by noninvasive brachial blood pressure (NIBP) measurements. The relationship between brachial mean arterial pressure and estimated temporal mean arterial pressure (eTMAP) is not well established and may vary with patient positioning. Establishing a ratio between eTMAP and NIBP at varying positions may provide a more accurate estimation of cerebral perfusion using noninvasive measurements. This prospective study included 57 patients undergoing elective shoulder surgery in the beach chair position. All patients received an interscalene block and general anesthesia. After the induction of general anesthesia, values for eTMAP and NIBP were recorded at 0°, 30°, and 70° of incline. A statistically significant, strong, and direct correlation between NIBP and eTMAP was found at 0° (r = 0.909, P ≤ .001), 30° (r = 0.874, P < .001), and 70° (r = 0.819, P < .001) of incline. The mean ratios of eTMAP to NIBP at 0°, 30°, and 70° of incline were 0.939 (95% confidence interval [CI], 0.915-0.964), 0.738 (95% CI, 0.704-0.771), and 0.629 (95% CI, 0.584-0.673), respectively. There was a statistically significant decrease in the eTMAP/NIBP ratio as patient incline increased from 0° to 30° (P < .001) and from 30° to 70° (P < .001). The eTMAP-to-NIBP ratio decreases as an anesthetized patient is placed into the beach chair position. Awareness of this phenomenon is important to ensure adequate cerebral perfusion and prevent hypoxic-related injuries. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
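
    As a worked illustration of the reported ratios, the sketch below estimates temporal MAP from a brachial NIBP reading at a given incline. Only the three mean ratios come from the abstract; clamping to the 0–70° range and linear interpolation between inclines are assumptions for illustration, not a clinically validated model.

    ```python
    # Mean eTMAP/NIBP ratios reported at each incline; interpolation between
    # inclines and clamping to 0-70 degrees are illustrative assumptions.
    RATIOS = [(0, 0.939), (30, 0.738), (70, 0.629)]

    def estimate_etmap(nibp_map_mmhg, incline_deg):
        """Estimate temporal MAP (mmHg) from brachial MAP at a given incline."""
        incline = min(max(incline_deg, 0), 70)
        for (a0, r0), (a1, r1) in zip(RATIOS, RATIOS[1:]):
            if a0 <= incline <= a1:
                w = (incline - a0) / (a1 - a0)
                return nibp_map_mmhg * (r0 + w * (r1 - r0))

    print(estimate_etmap(85.0, 30))  # 85 * 0.738 = 62.73 mmHg
    ```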

  7. Acid-base equilibria inside amine-functionalized mesoporous silica.

    PubMed

    Yamaguchi, Akira; Namekawa, Manato; Kamijo, Toshio; Itoh, Tetsuji; Teramae, Norio

    2011-04-15

    Acid-base equilibria and the effective proton concentration inside a silica mesopore modified with a trimethylammonium (TMAP) layer were studied by steady-state fluorescence experiments. The mesoporous silica with a dense TMAP layer (1.4 molecules/nm(2)) was prepared by post-grafting of N-trimethoxysilylpropyl-N,N,N-trimethylammonium onto surfactant-templated mesoporous silica (diameter of silica framework = 3.1 nm). The resulting TMAP-modified mesoporous silica strongly adsorbed anionic fluorescent indicator dyes (8-hydroxypyrene-1,3,6-trisulfonate (pyranine), 8-aminopyrene-1,3,6-trisulfonate (APTS), 5,10,15,20-tetraphenyl-21H,23H-porphinetetrasulfonic acid disulfuric acid (TPPS), 2-naphthol-3,6-disulfonate (2NT)), and fluorescence excitation spectra of these dyes within the TMAP-modified mesoporous silica were measured while varying the solution pH. The fluorescence experiments revealed that the acid-base equilibrium reactions of all the pH indicator dyes within the TMAP-modified silica mesopore were quite different from those in bulk water. From the analysis of the acid-base equilibrium of pyranine, the following relationships between solution pH (pH(bulk)) and the effective proton concentration inside the pore (pH(pore)) were obtained: (1) the shift of pH(pore) was 1.8 (ΔpH(pore)=1.8) for a pH(bulk) change from 2.1 to 9.1 (ΔpH(bulk)=7.0); (2) pH(pore) was not simply proportional to pH(bulk); (3) the inside of the TMAP-modified silica mesopore was suggested to be weakly acidic or neutral when pH(bulk) was changed from 2.0 to 9.1. Since these relationships between pH(bulk) and pH(pore) could explain the acid-base equilibria of the other pH indicator dyes (APTS, TPPS, 2NT), they were inferred to describe the effective proton concentration inside the TMAP-modified silica mesopore. © 2011 American Chemical Society

  8. [Mesh structure of two-dimensional tumor microvascular architecture phenotype heterogeneity in non-small cell lung cancer].

    PubMed

    Xiong, Zeng; Zhou, Hui; Liu, Jin-Kang; Hu, Cheng-Ping; Zhou, Mo-Ling; Xia, Yu; Zhou, Jian-Hua

    2009-11-01

    To investigate the structural characteristics and clinical significance of the two-dimensional tumor microvascular architecture phenotype (2D-TMAP) in non-small cell lung cancer (NSCLC). Thirty surgical specimens of NSCLC were collected. Sections of the tumor tissues corresponding to the CT perfusion imaging slices were selected to construct the 2D-TMAP expression. Spearman correlation analysis was used to examine the relation between 2D-TMAP expression and the clinicopathological features of NSCLC. Heterogeneity was noted in the 2D-TMAP expression of NSCLC. The microvascular density (MVD) in the area surrounding the tumor was higher than that in the central area, but the difference was not statistically significant. The density of microvessels without an intact lumen was significantly greater in the surrounding area than in the central area (P=0.030). Total MVD was not correlated with tumor differentiation (r=0.042, P=0.831). The density of microvessels without an intact lumen in the surrounding area was positively correlated with the degree of tumor differentiation and lymph node metastasis (r=0.528 and 0.533, P=0.041 and 0.028, respectively), and also with the expressions of vascular endothelial growth factor (VEGF), ephrinB2, EphB4, and proliferating cell nuclear antigen (PCNA) (r=0.504, 0.549, 0.549, and 0.370; P=0.005, 0.002, 0.002, and 0.048, respectively). The degree of tumor differentiation was positively correlated with PCNA and VEGF expression (r=0.604 and 0.370, P=0.001 and 0.048, respectively), but inversely with the integrity of the microvascular basement membrane (r=-0.531, P=0.033). 2D-TMAP reflects the overall state of the microenvironment for tumor growth. The 2D-TMAP of NSCLC regulates angiogenesis and tumor cell proliferation through a mesh-like structure, and a better understanding of the characteristics and possible mechanisms of 2D-TMAP expression can be of great clinical importance.

  9. TMAP: A NEO follow-up program utilizing undergraduate observers

    NASA Astrophysics Data System (ADS)

    Ramirez, C.; Deaver, D.; Martinez, R.; Foster, J.; Kuang, L.; Ates, A.; Anderson, M.; Mijac, M.; Gillam, S.; Hicks, M. D.

    2000-10-01

    In the spring of 2000 we began TMAP (Table Mountain Astrometry Project), a program designed to provide timely astrometric follow-up of newly discovered near-Earth asteroids. Relying on undergraduate observers from the local California State Universities, we have to date been involved with over 50 NEO and new comet discoveries, a significant fraction of all near-Earth asteroids discovered over the time period. All observations are performed at JPL's Table Mountain Facility near Wrightwood, California, using the 0.6-meter telescope equipped with a Photometrics LN-cooled 1k CCD mounted at the cassegrain focus. With this system we can routinely detect objects to R=20.5. We have typically scheduled two runs per month on weekends bracketing the new moon. The student observers who man the telescope are trained to select and obtain R-band images of candidates from the Minor Planet Center's NEO Confirmation Page (http://cfa-www.harvard.edu/cfa/ps/NEO/TheNEOPage.html). The astrometry is then reduced and submitted to the Minor Planet Center the following day. TMAP has proven to be an efficient way both to obtain much-needed astrometric measurements of newly discovered small bodies and to involve undergraduate researchers in planetary research. The limiting magnitudes provided by the 0.6-meter partially fill the gap between the extremely helpful and dedicated amateur astrometrists and the follow-up that the NEO detection programs perform themselves. This work is supported by NASA.

  10. A text message intervention for alcohol risk reduction among community college students: TMAP.

    PubMed

    Bock, Beth C; Barnett, Nancy P; Thind, Herpreet; Rosen, Rochelle; Walaska, Kristen; Traficante, Regina; Foster, Robert; Deutsch, Chris; Fava, Joseph L; Scott-Sheldon, Lori A J

    2016-12-01

    Students at community colleges comprise nearly half of all U.S. college students and show higher risk of heavy drinking and related consequences compared to students at 4-year colleges, but no alcohol safety programs currently target this population. To examine the feasibility, acceptability, and preliminary efficacy of an alcohol risk-reduction program delivered through text messaging designed for community college (CC) students. Heavy-drinking adult CC students (N=60) were enrolled and randomly assigned to the six-week active intervention (Text Message Alcohol Program: TMAP) or a control condition of general motivational (not alcohol-related) text messages. TMAP text messages consisted of alcohol facts, strategies to limit alcohol use and related risks, and motivational messages. Assessments were conducted at baseline, week 6 (end of treatment), and week 12 (follow-up). Most participants (87%) completed all follow-up assessments. Intervention messages received an average rating of 6.8 (SD=1.5) on a 10-point scale. At week six, TMAP participants were less likely than controls to report heavy drinking and negative alcohol consequences. The TMAP group also showed significant increases in self-efficacy to resist drinking in high-risk situations between baseline and week six, with no such increase among controls. Results were maintained through the week 12 follow-up. The TMAP alcohol risk-reduction program was feasible and highly acceptable, as indicated by high retention rates through the final follow-up assessment and good ratings for the text message content. Reductions in multiple outcomes provide positive indications of intervention efficacy. Copyright © 2016. Published by Elsevier Ltd.

  11. Adsorption characteristics of a cationic porphyrin on nanoclay at various pH.

    PubMed

    Rice, Zachary; Bergkvist, Magnus

    2009-07-15

    Natural and synthetic porphyrin derivatives offer a range of applications including enzymatic catalysis, photosensitizers for light harvesting and chemical reactions, and molecular electronics. They exhibit unique optical spectra dominated by the presence of Soret and Q-band structures whose position and shape offer a straightforward method to characterize porphyrins in various surroundings. In many applications it is often beneficial to have porphyrins adsorbed onto a solid matrix. Applications of porphyrin-clay complexes extend to numerous biological fields including pharmaceutical drug delivery, cosmetics, and agriculture, so a full understanding of porphyrin-clay surface interactions is essential. Here we investigated the adsorption behavior of meso-tetra(4-N,N,N-trimethylanilinium) porphine (TMAP) onto sodium-containing natural montmorillonite clay (Cloisite Na(+)) in characteristic biological buffers over a range of pHs (approximately 2-9). Spectroscopic analyses show a linear absorption response at acidic and basic pHs but a slight deviation at intermediate pHs. Absorption spectra for TMAP on clay showed distinct red shifts of the Soret and Q-bands compared to free TMAP for all buffer conditions, indicating core pi-electron delocalization into the substituent rings. At intermediate pHs, a gradual transition between protonated/deprotonated states was seen, presumably due to higher H(+) concentration at the surface than in bulk. Results indicate that TMAP adsorption to clay occurs in a monolayer fashion at low/high pH, while at slightly acidic/neutral pH the molecules possibly rearrange on the surface and/or form aggregates. AFM images of clay saturated with TMAP are reported and show single isolated clay sheets without aggregation, similar to clay without TMAP.

  12. Cdk1-cyclin B1-mediated phosphorylation of tumor-associated microtubule-associated protein/cytoskeleton-associated protein 2 in mitosis.

    PubMed

    Hong, Kyung Uk; Kim, Hyun-Jun; Kim, Hyo-Sil; Seong, Yeon-Sun; Hong, Kyeong-Man; Bae, Chang-Dae; Park, Joobae

    2009-06-12

    During mitosis, establishment of structurally and functionally sound bipolar spindles is necessary for maintaining the fidelity of chromosome segregation. Tumor-associated microtubule-associated protein (TMAP), also known as cytoskeleton-associated protein 2 (CKAP2), is a mitotic spindle-associated protein whose level is frequently up-regulated in various malignancies. Previous reports have suggested that TMAP is a potential regulator of mitotic spindle assembly and dynamics and that it is required for chromosome segregation to occur properly. So far, there have been no reports on how its mitosis-related functions are regulated. Here, we report that TMAP is hyper-phosphorylated at the C terminus specifically during mitosis. At least four different residues (Thr-578, Thr-596, Thr-622, and Ser-627) were responsible for the mitosis-specific phosphorylation of TMAP. Among these, Thr-622 was specifically phosphorylated by Cdk1-cyclin B1 both in vitro and in vivo. Interestingly, compared with the wild type, a phosphorylation-deficient mutant form of TMAP, in which Thr-622 had been replaced with an alanine (T622A), induced a significant increase in the frequency of metaphase cells with abnormal bipolar spindles, which often displayed disorganized, asymmetrical, or narrow and elongated morphologies. Formation of these abnormal bipolar spindles subsequently resulted in misalignment of metaphase chromosomes and ultimately caused a delay in the entry into anaphase. Moreover, such defects resulting from the T622A mutation were associated with a decrease in the rate of protein turnover at spindle microtubules. These findings suggest that Cdk1-cyclin B1-mediated phosphorylation of TMAP is important for and contributes to proper regulation of microtubule dynamics and establishment of functional bipolar spindles during mitosis.

  13. Cdk1-Cyclin B1-mediated Phosphorylation of Tumor-associated Microtubule-associated Protein/Cytoskeleton-associated Protein 2 in Mitosis*

    PubMed Central

    Uk Hong, Kyung; Kim, Hyun-Jun; Kim, Hyo-Sil; Seong, Yeon-Sun; Hong, Kyeong-Man; Bae, Chang-Dae; Park, Joobae

    2009-01-01

    During mitosis, establishment of structurally and functionally sound bipolar spindles is necessary for maintaining the fidelity of chromosome segregation. Tumor-associated microtubule-associated protein (TMAP), also known as cytoskeleton-associated protein 2 (CKAP2), is a mitotic spindle-associated protein whose level is frequently up-regulated in various malignancies. Previous reports have suggested that TMAP is a potential regulator of mitotic spindle assembly and dynamics and that it is required for chromosome segregation to occur properly. So far, there have been no reports on how its mitosis-related functions are regulated. Here, we report that TMAP is hyper-phosphorylated at the C terminus specifically during mitosis. At least four different residues (Thr-578, Thr-596, Thr-622, and Ser-627) were responsible for the mitosis-specific phosphorylation of TMAP. Among these, Thr-622 was specifically phosphorylated by Cdk1-cyclin B1 both in vitro and in vivo. Interestingly, compared with the wild type, a phosphorylation-deficient mutant form of TMAP, in which Thr-622 had been replaced with an alanine (T622A), induced a significant increase in the frequency of metaphase cells with abnormal bipolar spindles, which often displayed disorganized, asymmetrical, or narrow and elongated morphologies. Formation of these abnormal bipolar spindles subsequently resulted in misalignment of metaphase chromosomes and ultimately caused a delay in the entry into anaphase. Moreover, such defects resulting from the T622A mutation were associated with a decrease in the rate of protein turnover at spindle microtubules. These findings suggest that Cdk1-cyclin B1-mediated phosphorylation of TMAP is important for and contributes to proper regulation of microtubule dynamics and establishment of functional bipolar spindles during mitosis. PMID:19369249

  14. Comparative study of the interaction of meso-tetrakis (N-para-trimethyl-anilium) porphyrin (TMAP) in its free base and Fe derivative form with oligo(dA.dT)15 and oligo(dG.dC)15.

    PubMed

    Bathaie, S Zahra; Ajloo, Davood; Daraie, Marzieh; Ghadamgahi, Maryam

    2015-01-01

    The interaction of a cationic porphyrin and its ferric derivative with oligo(dA.dT)15 and oligo(dG.dC)15 was studied by UV-vis spectroscopy, resonance light scattering (RLS), and circular dichroism (CD) at different ionic strengths; molecular docking and molecular dynamics simulation were used to complement the experiments. The following changes were observed in the spectral properties of meso-tetrakis (N-para-trimethyl-anilium) porphyrin (TMAP), a free-base porphyrin with no axial ligand, and its Fe derivative (FeTMAP) upon interaction with oligo(dA.dT)15 and oligo(dG.dC)15: (1) a substantial red shift and hypochromicity at the Soret maximum in the UV-vis spectra; (2) increased RLS intensity with increasing ionic strength; and (3) an intense bisignate excitonic CD signal. Together, these indicate that TMAP and FeTMAP bind to oligo(dA.dT)15 and oligo(dG.dC)15 in an outside binding mode, accompanied by self-stacking of the ligands along the oligonucleotide helix. The CD results demonstrated a drastic change from excitonic to monomeric behavior at higher ionic strengths, which indicates groove binding of the ligands to the oligonucleotides. Molecular docking also confirmed the groove binding mode and provided estimates of the binding constants and interaction energies. This interaction trend was further confirmed by molecular dynamics simulation and the structural parameters obtained from it, which showed that TMAP reduced the number of intermolecular hydrogen bonds and increased the solvent-accessible surface area of the oligonucleotide. Self-aggregation of the ligands at lower concentrations was also confirmed.

  15. [Construction of 2-dimensional tumor microvascular architecture phenotype in non-small cell lung cancer].

    PubMed

    Liu, Jin-kang; Wang, Xiao-yi; Xiong, Zeng; Zhou, Hui; Zhou, Jian-hua; Fu, Chun-yan; Li, Bo

    2008-08-01

    To construct a technological platform for two-dimensional tumor microvascular architecture phenotype (2D-TMAP) expression. Thirty samples of non-small cell lung cancer (NSCLC) were collected after surgery. Sections of the tumor tissue specimens corresponding to the CT perfusion imaging slices were selected. Immunohistochemical staining, Gomori methenamine silver staining, and electron microscope observation were performed to build a technological platform of 2D-TMAP expression by detecting the morphology and integrity of the microvascular basement membrane, microvascular density, the various microvascular subtypes, the degree of maturity and lumenization of the microvasculature, and the immunogenetic characteristics of the microvasculature. The technological platform of 2D-TMAP expression was constructed successfully. There was heterogeneity in the 2D-TMAP expression of non-small cell lung cancer, and the microvasculature of NSCLC showed certain distinctive characteristics. 2D-TMAP is a key technology that can be used to observe the overall state of the microenvironment in tumor growth.

  16. Effects of divalent cations, EDTA and chitosan on the uptake and photoinactivation of Escherichia coli mediated by cationic and anionic porphyrins.

    PubMed

    Gsponer, Natalia S; Spesia, Mariana B; Durantini, Edgardo N

    2015-03-01

    The effects of divalent cations, EDTA and chitosan (CS) on the uptake and photoinactivation of Escherichia coli produced by 5,10,15,20-tetrakis(4-N,N,N-trimethylammoniumphenyl)porphyrin (TMAP(4+)), 5,10-di(4-methylphenyl)-15,20-di(4-N,N,N-trimethylammoniumphenyl)porphyrin (MPAP(2+)) and 5,10,15,20-tetra(4-sulphonatophenyl)porphyrin (TPPS(4-)) were examined under different conditions. These porphyrins were rapidly bound to E. coli cells (<2.5 min) and the uptake of photosensitizers was not dependent on incubation temperature, reaching values of 0.61, 0.18 and 0.08 nmol/10(8) cells for TMAP(4+), MPAP(2+) and TPPS(4-), respectively. The addition of Ca(2+) or Mg(2+) to the cultures enhanced the uptake of MPAP(2+) and TPPS(4-) by cells; in contrast, the amount of TMAP(4+) bound to cells was decreased. The presence of EDTA produced an increase in the uptake of porphyrins by cells, while CS mainly enhanced the amount of TPPS(4-) bound to E. coli. The photoinactivation of E. coli cells mediated by TMAP(4+) was highly effective even at low concentration (1 μM) and a short irradiation period (5 min). However, a reduction in phototoxicity was found for TMAP(4+) in the presence of Ca(2+) and Mg(2+), whereas the phototoxic activity mediated by MPAP(2+) and TPPS(4-) was increased. Addition of EDTA showed no effect on the photoinactivation induced by the cationic porphyrins, while a small enhancement was found for TPPS(4-). Moreover, inactivation of E. coli cells was achieved in the presence of CS. This cationic polymer was antimicrobial by itself in the dark. At a slightly toxic CS concentration, the phototoxic activity induced by TMAP(4+) was diminished, mainly at lower concentrations of TMAP(4+) (0.5-1 μM). In contrast, an increase in E. coli photoinactivation was obtained for MPAP(2+) and TPPS(4-) in the presence of CS. Thus, this natural polymeric destabilizing agent mainly benefited the photoinactivation mediated by TPPS(4-). Copyright © 2014 Elsevier B.V. All rights reserved.

  17. NLTE Model Atmospheres for Super-Soft X-ray Sources

    NASA Astrophysics Data System (ADS)

    Rauch, Thomas; Werner, Klaus

    2009-09-01

    Spectral analysis by means of fully line-blanketed Non-LTE model atmospheres has arrived at a high level of sophistication. The Tübingen NLTE Model Atmosphere Package (TMAP) is used to calculate plane-parallel NLTE model atmospheres which are in radiative and hydrostatic equilibrium. Although TMAP is not especially designed for the calculation of burst spectra of novae, spectral energy distributions (SEDs) calculated from TMAP models are well suited e.g. for abundance determinations of Super Soft X-ray Sources like nova V4743 Sgr or line identifications in observations of neutron stars with low magnetic fields in low-mass X-ray binaries (LMXBs) like EXO 0748-676.

  18. Timed activity performance in persons with upper limb amputation: A preliminary study.

    PubMed

    Resnik, Linda; Borgia, Mathew; Acluche, Frantzy

    To develop a timed measure of activity performance for persons with upper limb amputation (T-MAP); examine the measure's internal consistency, test-retest reliability and validity; and compare scores by prosthesis use. Measures of activity performance for persons with upper limb amputation are needed, and the time required to perform daily activities is a meaningful metric that has implications for participation in life roles. 55 subjects with upper limb amputation were administered the T-MAP twice within one week. Internal consistency and test-retest reliability were evaluated. Construct validity was examined by comparing scores by amputation level. Exploratory analyses compared sub-group scores and examined correlations with other measures. Scale alpha was 0.77; ICC was 0.93. Timed scores differed by amputation level. Subjects using a prosthesis took longer to perform all tasks. T-MAP was not correlated with other measures of dexterity or activity, but was correlated with pain for non-prosthesis users. The timed scale had adequate internal consistency and excellent test-retest reliability. Analyses support the reliability and construct validity of the T-MAP. 2c "outcomes" research. Published by Elsevier Inc.

  19. Genotoxic activity of 4,4',5'-trimethylazapsoralen on plasmid DNA.

    PubMed

    Lagatolla, C; Dolzani, L; Granzotto, M; Monti-Bragadin, C

    1998-01-01

    The genotoxic activities of 8-methoxypsoralen (8-MOP) and 4,4',5'-trimethylazapsoralen (4,4',5'-TMAP) on plasmid DNA have been compared. In a previous work, 4,4',5'-TMAP, a methyl derivative of a psoralen isoster, had shown potential photochemotherapeutic activity. The mutagenic activity of mono- and bifunctional lesions caused by these compounds was evaluated both after UVA irradiation, which causes the formation of both kinds of lesions, and after a two-step irradiation procedure of the psoralen-plasmid DNA complex, which allowed monoadducts and interstrand crosslinks to be studied separately. Furthermore, we used a procedure that allowed us to evaluate both the mutagenic and the recombinogenic activity of the two compounds. Results indicate that the most important difference between 8-MOP and 4,4',5'-TMAP lies in their mode of photoreaction with DNA rather than in their mutagenic potential. In fact, in all of the experimental procedures, 4,4',5'-TMAP showed a lower ability than 8-MOP to generate interstrand crosslinks. However, when comparable toxicity levels were reached, the two compounds showed the same mutagenic potential.

  20. Effects of a new bifunctional psoralen, 4,4',5'-trimethylazapsoralen and ultraviolet-A radiation on murine dendritic epidermal cells.

    PubMed

    Aubin, F; Alcalay, J; Dall'Acqua, F; Kripke, M L

    1990-06-01

    Although some psoralens are therapeutically active in the treatment of cutaneous hyperproliferative diseases when combined with UVA (320-400 nm) radiation, the toxic effects of these compounds have led physicians to seek new photochemotherapeutic agents. One such agent is 4,4',5'-trimethylazapsoralen (TMAP), a new bifunctional psoralen compound. We investigated the effects of repetitive treatments with TMAP plus UVA radiation on the number of dendritic immune cells in murine epidermis and on the induction of phototoxicity. Mice treated 3 times per week for 4 weeks with 129 microgram TMAP plus 10 kJ/m2 UVA radiation exhibited no gross or microscopic evidence of phototoxicity. During this treatment, the numbers of ATPase+, Ia+, and Thy-1+ dendritic epidermal cells were greatly reduced, and by the end of the treatment period, few dendritic immune cells could be detected. We conclude that morphological alterations of cutaneous immune cells can occur in the absence of overt phototoxicity, and that TMAP plus low-dose UVA radiation decreases the numbers of detectable Langerhans cells and Thy-1+ cells in murine skin.

  1. Characterization of mitosis-specific phosphorylation of tumor-associated microtubule-associated protein.

    PubMed

    Hong, Kyung Uk; Kim, Hyun-Jun; Bae, Chang-Dae; Park, Joobae

    2009-11-30

    Tumor-associated microtubule-associated protein (TMAP), also known as cytoskeleton associated protein 2 (CKAP2), has been recently shown to be involved in the assembly and maintenance of mitotic spindle and also plays an essential role in maintaining the fidelity of chromosome segregation during mitosis. We have previously reported that TMAP is phosphorylated at multiple residues specifically during mitosis, and characterized the mechanism and functional importance of phosphorylation at one of the mitosis-specific phosphorylation residues (i.e., Thr-622). However, the phosphorylation events at the remaining mitotic phosphorylation sites of TMAP have not been fully characterized in detail. Here, we report on generation and characterization of phosphorylated Thr-578- and phosphorylated Thr-596-specific antibodies. Using the antibodies, we show that phosphorylation of TMAP at Thr-578 and Thr-596 indeed occurs specifically during mitosis. Immunofluorescent staining using the antibodies shows that these residues become phosphorylated starting at prophase and then become rapidly dephosphorylated soon after initiation of anaphase. Subtle differences in the kinetics of phosphorylation between Thr-578 and Thr-596 imply that they may be under different mechanisms of phosphorylation during mitosis. Unlike the phosphorylation-deficient mutant form for Thr-622, the mutant in which both Thr-578 and Thr-596 had been mutated to alanines did not induce significant delay in progression of mitosis. These results show that the majority of mitosis-specific phosphorylation of TMAP is limited to pre-anaphase stages and suggest that the multiple phosphorylation may not act in concert but serve diverse functions.

  2. Model-Atmosphere Spectra of Central Stars of Planetary Nebulae - Access via the Virtual Observatory Service TheoSSA

    NASA Astrophysics Data System (ADS)

    Rauch, T.; Reindl, N.

    2014-04-01

    In the framework of the Virtual Observatory (VO), the German Astrophysical Virtual Observatory GAVO project provides easy access to theoretical spectral energy distributions (SEDs) within the registered GAVO service TheoSSA (http://dc.g-vo.org/theossa). TheoSSA is based on the well established Tübingen NLTE Model-Atmosphere Package (TMAP) for hot, compact stars. This includes central stars of planetary nebulae. We show examples of TheoSSA in operation.

  3. The monophasic action potential upstroke: a means of characterizing local conduction.

    PubMed

    Levine, J H; Moore, E N; Kadish, A H; Guarnieri, T; Spear, J F

    1986-11-01

    The upstrokes of monophasic action potentials (MAPs) recorded with an extracellular pressure electrode were characterized in isolated canine tissue preparations in vitro. The characteristics of the MAP upstroke were compared with those of the local action potential foot as well as with the characteristics of approaching electrical activation during uniform and asynchronous conduction. The upstroke of the MAP was exponential during uniform conduction. The time constant of rise of the MAP upstroke (TMAP) correlated with that of the action potential foot (Tfoot): TMAP = 1.01 Tfoot + 0.50; r2 = .80. Furthermore, changes in Tfoot with alterations in cycle length were associated with similar changes in TMAP: Tfoot = 1.06 TMAP - 0.11; r2 = .78. In addition, TMAP and Tfoot both deviated from exponential during asynchronous activation; the inflections that developed in the MAP upstroke correlated in time with intracellular action potential upstrokes that were asynchronous in onset in these tissues. Finally, the field of view of the MAP was determined and was found to be dependent in part on tissue architecture and the space constant. Specifically, the field of view of the MAP was found to be greater parallel compared with transverse to fiber orientation (6.02 +/- 1.74 vs 3.03 +/- 1.10 mm; p less than .01). These data suggest that the MAP upstroke may be used to define and characterize local electrical activation. The relatively large field of view of the MAP suggests that this technique may be a sensitive means to record focal membrane phenomena in vivo.
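
    The abstract's regression relates the exponential time constants of the MAP upstroke and the microelectrode action potential foot. As a rough illustration of how such a time constant might be extracted from a digitized upstroke, the sketch below fits an exponential to the early rising phase; the fitting window, model form, and thresholds are assumptions, not the authors' procedure.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def exp_foot(t, a, tau):
        """Exponential foot model: v(t) = a * exp(t / tau)."""
        return a * np.exp(t / tau)

    def fit_tau(t_ms, v, lo=0.02, hi=0.30):
        """Fit tau over the 2-30% band of upstroke amplitude (assumed window)."""
        v0 = v - v.min()
        mask = (v0 > lo * v0.max()) & (v0 <= hi * v0.max())
        popt, _ = curve_fit(exp_foot, t_ms[mask], v0[mask],
                            p0=(v0[mask].min(), 2.0))
        return popt[1]  # time constant in ms
    ```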

  4. Characterization of mitosis-specific phosphorylation of tumor-associated microtubule-associated protein

    PubMed Central

    Hong, Kyung Uk; Kim, Hyun-Jun

    2009-01-01

    Tumor-associated microtubule-associated protein (TMAP), also known as cytoskeleton associated protein 2 (CKAP2), has been recently shown to be involved in the assembly and maintenance of mitotic spindle and also plays an essential role in maintaining the fidelity of chromosome segregation during mitosis. We have previously reported that TMAP is phosphorylated at multiple residues specifically during mitosis, and characterized the mechanism and functional importance of phosphorylation at one of the mitosis-specific phosphorylation residues (i.e., Thr-622). However, the phosphorylation events at the remaining mitotic phosphorylation sites of TMAP have not been fully characterized in detail. Here, we report on generation and characterization of phosphorylated Thr-578- and phosphorylated Thr-596-specific antibodies. Using the antibodies, we show that phosphorylation of TMAP at Thr-578 and Thr-596 indeed occurs specifically during mitosis. Immunofluorescent staining using the antibodies shows that these residues become phosphorylated starting at prophase and then become rapidly dephosphorylated soon after initiation of anaphase. Subtle differences in the kinetics of phosphorylation between Thr-578 and Thr-596 imply that they may be under different mechanisms of phosphorylation during mitosis. Unlike the phosphorylation-deficient mutant form for Thr-622, the mutant in which both Thr-578 and Thr-596 had been mutated to alanines did not induce significant delay in progression of mitosis. These results show that the majority of mitosis-specific phosphorylation of TMAP is limited to pre-anaphase stages and suggest that the multiple phosphorylation may not act in concert but serve diverse functions. PMID:19641375

  5. Effect of spatial smoothing on t-maps: arguments for going back from t-maps to masked contrast images.

    PubMed

    Reimold, Matthias; Slifstein, Mark; Heinz, Andreas; Mueller-Schauenburg, Wolfgang; Bares, Roland

    2006-06-01

    Voxelwise statistical analysis has become popular in explorative functional brain mapping with fMRI or PET. Usually, results are presented as voxelwise levels of significance (t-maps), and for clusters that survive correction for multiple testing, the coordinates of the maximum t-value are reported. Before calculating a voxelwise statistical test, spatial smoothing is required to achieve reasonable statistical power. Little attention has been given to the fact that smoothing has a nonlinear effect on the voxel variances and thus on the local characteristics of a t-map, which becomes most evident after smoothing over different types of tissue. We investigated the related artifacts, for example, white matter peaks whose positions depend on the relative variance (variance over contrast) of the surrounding regions, and suggest improving spatial precision with 'masked contrast images': color codes are attributed to the voxelwise contrast, and significant clusters (e.g., detected with statistical parametric mapping, SPM) are enlarged by including contiguous pixels with a contrast above the mean contrast of the original cluster, provided they satisfy P < 0.05. The potential benefit is demonstrated with simulations and data from a [11C]Carfentanil PET study. We conclude that spatial smoothing may lead to critical, sometimes counterintuitive artifacts in t-maps, especially in subcortical brain regions. If significant clusters are detected, for example, with SPM, the suggested method is one way to improve spatial precision and may give the investigator a more direct sense of the underlying data. Its simplicity and the fact that no further assumptions are needed make it a useful complement to standard methods of statistical mapping.
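
    The cluster-growing rule described above — color-code the contrast, then extend each significant cluster with contiguous voxels whose contrast exceeds the cluster mean and whose uncorrected p < 0.05 — can be sketched directly. The array names and the default 6-connectivity below are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np
    from scipy import ndimage

    def masked_contrast(contrast, p_uncorr, cluster_mask):
        """Grow a significant cluster into a 'masked contrast image' (sketch)."""
        thresh = contrast[cluster_mask].mean()
        candidate = cluster_mask | ((contrast > thresh) & (p_uncorr < 0.05))
        labels, _ = ndimage.label(candidate)      # 6-connected components in 3-D
        keep = np.unique(labels[cluster_mask])    # components touching the cluster
        grown = np.isin(labels, keep[keep > 0])
        return np.where(grown, contrast, np.nan)  # color-code contrast, mask rest
    ```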

  6. Crossover and maximal fat-oxidation points in sedentary healthy subjects: methodological issues.

    PubMed

    Gmada, N; Marzouki, H; Haboubi, M; Tabka, Z; Shephard, R J; Bouhlel, E

    2012-02-01

    Our study aimed to assess the influence of protocol on the crossover point and maximal fat-oxidation (LIPOX(max)) values in sedentary, but otherwise healthy, young men. Maximal oxygen intake was assessed in 23 subjects, using a progressive maximal cycle ergometer test. Twelve sedentary males (aged 20.5±1.0 years) whose directly measured maximal aerobic power (MAP) values were lower than their theoretical maximal values (tMAP) were selected from this group. These individuals performed, in random sequence, three submaximal graded exercise tests, separated by three-day intervals; work rates were based on the tMAP in one test and on MAP in the remaining two. The third test was used to assess the reliability of data. Heart rate, respiratory parameters, blood lactate, the crossover point and LIPOX(max) values were measured during each of these tests. The crossover point and LIPOX(max) values were significantly lower when the testing protocol was based on tMAP rather than on MAP (P<0.001). Respiratory exchange ratios were significantly lower with MAP than with tMAP at 30, 40, 50 and 60% of maximal aerobic power (P<0.01). At the crossover point, lactate and 5-min postexercise oxygen consumption (EPOC(5 min)) values were significantly higher using tMAP rather than MAP (P<0.001). During the first 5 min of recovery, EPOC(5 min) and blood lactate were significantly correlated (r=0.89; P<0.001). Our data show that, to assess the crossover point and LIPOX(max) values for research purposes, the protocol must be based on the measured MAP rather than on a theoretical value. Such a determination should improve individualization of training for initially sedentary subjects. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  7. [Evaluation of three-dimensional tumor microvascular architecture phenotype heterogeneity in non-small cell carcinoma and its significance].

    PubMed

    Zhou, Hui; Liu, Jinkang; Chen, Shengxi; Xiong, Zeng; Zhou, Jianhua; Tong, Shiyu; Chen, Hao; Zhou, Moling

    2012-06-01

    To explore the degree, mechanism and clinical significance of three-dimensional tumor microvascular architecture phenotype heterogeneity (3D-TMAPH) in non-small cell carcinoma (NSCLC). Twenty-one samples of solitary pulmonary nodules were collected integrally. To establish the two-dimensional tumor microvascular architecture phenotype (2D-TMAP) and three-dimensional tumor microvascular architecture phenotype (3D-TMAP), five layers of each nodule were selected and embedded in paraffin. Test indices included the expressions of vascular endothelial growth factor (VEGF), proliferating cell nuclear antigen (PCNA), EphB4, ephrinB2 and microvascular density marked by anti-CD34 (CD34-MVD). The degree of 3D-TMAPH was evaluated by the coefficient of variation and the extent of heterogeneity. Spearman rank correlation analysis was used to investigate the relationships between 2D-TMAP, 3D-TMAP and clinicopathological features. 3D-TMAPH showed that 2D-TMAP heterogeneity was expressed in the tissues of NSCLC. The heterogeneities in the malignant nodules were significantly higher than those in the active inflammatory nodules and tubercular nodules. In addition, different degrees of heterogeneity of CD34-MVD and PCNA were found in NSCLC tissues. The coefficients of variation of CD34-MVD and PCNA were positively related to the degree of differentiation (all P<0.05), but not related to the P-TNM stages, histological type or lymphatic metastasis (all P>0.05). The levels of heterogeneity of the various expression indexes (ephrinB2, EphB4, VEGF) in NSCLC tissues were inconsistent, but there were no significant differences in heterogeneity among NSCLC tissues of different histological types (P>0.05). 3D-TMAPH exists widely in the microenvironment during the genesis and development of NSCLC and has a significant impact on its biological complexity.

  8. Mathematical analysis of running performance and world running records.

    PubMed

    Péronnet, F; Thibault, G

    1989-07-01

    The objective of this study was to develop an empirical model relating human running performance to some characteristics of metabolic energy-yielding processes using A, the capacity of anaerobic metabolism (J/kg); MAP, the maximal aerobic power (W/kg); and E, the reduction in peak aerobic power with the natural logarithm of race duration T, when T > TMAP = 420 s. Accordingly, the model developed describes the average power output PT (W/kg) sustained over any T as $P_T = \frac{S}{T}\left(1 - e^{-T/k_2}\right) + \frac{1}{T}\int_0^T \left[\mathrm{BMR} + B\left(1 - e^{-t/k_1}\right)\right]\,dt$, where S = A and B = MAP - BMR (basal metabolic rate) when T < TMAP; and S = A + [A f ln(T/TMAP)] and B = (MAP - BMR) + [E ln(T/TMAP)] when T > TMAP; k1 = 30 s and k2 = 20 s are time constants describing the kinetics of aerobic and anaerobic metabolism, respectively, at the beginning of exercise; f is a constant describing the reduction in the amount of energy provided from anaerobic metabolism with increasing T; and t is the time from the onset of the race. This model accurately estimates actual power outputs sustained over a wide range of events, e.g., the average absolute error between actual and estimated T for men's 1987 world records from 60 m to the marathon was 0.73%. In addition, satisfactory estimations of the metabolic characteristics of world-class male runners were made as follows: A = 1,658 J/kg; MAP = 83.5 ml O2.kg-1.min-1; 83.5% MAP sustained over the marathon distance. Application of the model to analysis of the evolution of A, MAP, and E, and of the progression of men's and women's world records over the years, is presented.
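
    Since the abstract gives the model in closed form, it is straightforward to transcribe. The sketch below evaluates the average power P_T, integrating the aerobic term analytically; the BMR value is an assumption (the abstract does not quote one), and A, MAP (in W/kg), E, and f must be supplied by the user.

    ```python
    import math

    # Transcription of the abstract's model (sketch). BMR is assumed; A (J/kg),
    # MAP (W/kg), E, and f are user inputs.
    BMR = 1.2            # W/kg, assumed basal metabolic rate
    K1, K2 = 30.0, 20.0  # s, aerobic and anaerobic kinetic time constants
    T_MAP = 420.0        # s, duration beyond which peak aerobic power declines

    def avg_power(T, A, MAP, E, f):
        """Average power output P_T (W/kg) sustained over a race of T seconds."""
        if T <= T_MAP:
            S, B = A, MAP - BMR
        else:
            S = A + A * f * math.log(T / T_MAP)
            B = (MAP - BMR) + E * math.log(T / T_MAP)
        anaerobic = S / T * (1.0 - math.exp(-T / K2))
        # (1/T) * integral_0^T [BMR + B(1 - e^(-t/K1))] dt, done analytically
        aerobic = BMR + B * (1.0 - K1 / T * (1.0 - math.exp(-T / K1)))
        return anaerobic + aerobic
    ```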

  9. [Comparative evaluation of clinical practice guidelines for the treatment of schizophrenia].

    PubMed

    Delessert, D; Pomini, V; Grasset, F; Baumann, P

    2008-01-01

    Many clinical practice guidelines (CPG) have been published in response to the development of the concept of "evidence-based medicine" (EBM) and as a solution to the difficulty of synthesizing and selecting relevant medical literature. Given the expansion of new CPG, the question of choice arises: which CPG should be considered in a given clinical situation? It is of primary importance to evaluate the quality of the CPG, but until recently there was no standardized tool for evaluating or comparing the quality of CPG. An instrument for evaluating the quality of CPG, called "AGREE" (Appraisal of Guidelines for Research and Evaluation), was validated in 2002. The six principal CPG concerning the treatment of schizophrenia are compared with the help of the "AGREE" instrument: (1) "the Agence nationale pour le développement de l'évaluation médicale (ANDEM) recommendations"; (2) "The American Psychiatric Association (APA) practice guideline for the treatment of patients with schizophrenia"; (3) "The quick reference guide of APA practice guideline for the treatment of patients with schizophrenia"; (4) "The schizophrenia patient outcomes research team (PORT) treatment recommendations"; (5) "The Texas medication algorithm project (T-MAP)" and (6) "The expert consensus guideline for the treatment of schizophrenia". The results of our study were then compared with those of a similar investigation published in 2005, which examined 24 CPG addressing the treatment of schizophrenia. The "AGREE" tool was also used by two investigators in that study. In general, the scores of the two studies differed little and the two global evaluations of the CPG converged; however, each of the six CPG is perfectible. The rigour of elaboration of the six CPG was in general average. Consideration of the opinions of potential users was incomplete, and greater care in the presentation of the recommendations would facilitate their clinical use. Moreover, the authors gave little consideration to the applicability of the recommendations. Globally, two CPG are considered as strongly recommended: "the quick reference guide of the APA practice guideline for the treatment of patients with schizophrenia" and "the T-MAP".

  10. Speech Recognition Using Neural Nets and Dynamic Time Warping

    DTIC Science & Technology

    1988-12-01

    float tmap[20][20][16]; int r, c; double dist; double minimum = 9999.9; for (r = 0; r < fysize; r++){ for (c = 0; c < fxsize; c++){ dist = 0.0; for...i)[1] = (location[i][0] == 0); mindist(tmap, inp, close) double inp[16]; int close[2]; float tmap[20][20][16]; double dist; double minimum = 9.99e31
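
    The OCR above has mangled the original C beyond full recovery, but its apparent intent — scanning a 20×20 grid of 16-dimensional template vectors for the cell nearest an input frame — is easy to restate. The Python sketch below is a hedged reconstruction: the names tmap, inp, and close mirror the fragment, while everything else is inferred.

    ```python
    import numpy as np

    def min_dist(tmap, inp):
        """Return grid coordinates of the template closest to the input frame.

        tmap: (20, 20, 16) array of template vectors; inp: length-16 vector.
        """
        dist = ((tmap - inp) ** 2).sum(axis=2)     # squared distance per cell
        r, c = np.unravel_index(np.argmin(dist), dist.shape)
        return (r, c), float(dist[r, c])

    tmap = np.random.rand(20, 20, 16)
    close, d = min_dist(tmap, np.random.rand(16))  # close ~ the fragment's close[2]
    ```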

  11. Texas Medication Algorithm Project, phase 3 (TMAP-3): rationale and study design.

    PubMed

    Rush, A John; Crismon, M Lynn; Kashner, T Michael; Toprac, Marcia G; Carmody, Thomas J; Trivedi, Madhukar H; Suppes, Trisha; Miller, Alexander L; Biggs, Melanie M; Shores-Wilson, Kathy; Witte, Bradley P; Shon, Steven P; Rago, William V; Altshuler, Kenneth Z

    2003-04-01

    Medication treatment algorithms may improve clinical outcomes, uniformity of treatment, quality of care, and efficiency. However, such benefits have never been evaluated for patients with severe, persistent mental illnesses. This study compared clinical and economic outcomes of an algorithm-driven disease management program (ALGO) with treatment-as-usual (TAU) for adults with DSM-IV schizophrenia (SCZ), bipolar disorder (BD), and major depressive disorder (MDD) treated in public mental health outpatient clinics in Texas. The disorder-specific intervention ALGO included a consensually derived and feasibility-tested medication algorithm, a patient/family educational program, ongoing physician training and consultation, a uniform medical documentation system with routine assessment of symptoms and side effects at each clinic visit to guide ALGO implementation, and prompting by on-site clinical coordinators. A total of 19 clinics from 7 local authorities were matched by authority and urban status, such that 4 clinics each offered ALGO for only 1 disorder (SCZ, BD, or MDD). The remaining 7 TAU clinics offered no ALGO and thus served as controls (TAUnonALGO). To determine if ALGO for one disorder impacted care for another disorder within the same clinic ("culture effect"), additional TAU subjects were selected from 4 of the ALGO clinics offering ALGO for another disorder (TAUinALGO). Patient entry occurred over 13 months, beginning March 1998 and concluding with the final active patient visit in April 2000. Research outcomes assessed at baseline and periodically for at least 1 year included (1) symptoms, (2) functioning, (3) cognitive functioning (for SCZ), (4) medication side effects, (5) patient satisfaction, (6) physician satisfaction, (7) quality of life, (8) frequency of contacts with criminal justice and state welfare system, (9) mental health and medical service utilization and cost, and (10) alcohol and substance abuse and supplemental substance use information. Analyses were based on hierarchical linear models designed to test for initial changes and growth in differences between ALGO and TAU patients over time in this matched clinic design.

  12. Tissue Molecular Anatomy Project (TMAP): an expression database for comparative cancer proteomics.

    PubMed

    Medjahed, Djamel; Luke, Brian T; Tontesh, Tawady S; Smythers, Gary W; Munroe, David J; Lemkin, Peter F

    2003-08-01

    By mining publicly accessible databases, we have developed a collection of tissue-specific predictive protein expression maps as a function of cancer histological state. Data analysis is applied to the differential expression of gene products in pooled libraries from the normal to the altered state(s). We wish to report the initial results of our survey across different tissues and explore the extent to which this comparative approach may help uncover panels of potential biomarkers of tumorigenesis which would warrant further examination in the laboratory.

  13. White Dwarf Model Atmospheres: Synthetic Spectra for Supersoft Sources

    NASA Astrophysics Data System (ADS)

    Rauch, Thomas

    2013-01-01

    The Tübingen NLTE Model-Atmosphere Package (TMAP) calculates fully metal-line blanketed white dwarf model atmospheres and spectral energy distributions (SEDs) at a high level of sophistication. Such SEDs are easily accessible via the German Astrophysical Virtual Observatory (GAVO) service TheoSSA. We discuss applications of TMAP models to (pre) white dwarfs during the hottest stages of their stellar evolution, e.g. in the parameter range of novae and supersoft sources.

  14. 4,4',5'-trimethyl-8-azapsoralen, a new-photoreactive and non-skin-phototoxic bifunctional bioisoster of psoralen.

    PubMed

    Vedaldi, D; Dall'Acqua, F; Caffieri, S; Baccichetti, F; Carlassare, F; Bordin, F; Chilin, A; Guiotto, A

    1991-01-01

    Photochemical and photobiological properties of a new isoster of psoralen, 4,4',5'-trimethyl-8-azapsoralen (4,4',5'-TMAP), have been studied. This compound shows a high DNA-photobinding rate, higher than that of 8-methoxypsoralen (8-MOP), forming both monoadducts and inter-strand cross-links. The yield of cross-links, however, is markedly lower than that of 8-MOP. Antiproliferative activity of 4,4',5'-TMAP, in terms of DNA synthesis inhibition in Ehrlich ascites tumor cells, is higher than that of 8-MOP. Mutagenic activity on E. coli WP2 R46+ cells appeared similar to or even lower than that of 8-MOP. This new compound applied on depilated guinea pig skin and irradiated with UVA did not show any skin-phototoxicity. On the basis of these properties 4,4',5'-TMAP appears to be a potential photochemotherapeutic agent.

  15. Measurement-based care for refractory depression: a clinical decision support model for clinical research and practice.

    PubMed

    Trivedi, Madhukar H; Daly, Ella J

    2007-05-01

    Despite years of antidepressant drug development and patient and provider education, suboptimal medication dosing and duration of exposure resulting in incomplete remission of symptoms remains the norm in the treatment of depression. Additionally, since no one treatment is effective for all patients, optimal implementation focusing on the measurement of symptoms, side effects, and function is essential to determine effective sequential treatment approaches. There is a need for a paradigm shift in how clinical decision making is incorporated into clinical practice and for a move away from the trial-and-error approach that currently determines the "next best" treatment. This paper describes how our experience with the Texas Medication Algorithm Project (TMAP) and the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) trial has confirmed the need for easy-to-use clinical support systems to ensure fidelity to guidelines. To further enhance guideline fidelity, we have developed an electronic decision support system that provides critical feedback and guidance at the point of patient care. We believe that a measurement-based care (MBC) approach is essential to any decision support system, allowing physicians to individualize and adapt decisions about patient care based on symptom progress, tolerability of medication, and dose optimization. We also believe that successful integration of sequential algorithms with MBC into real-world clinics will facilitate change that will endure and improve patient outcomes. Although we use major depression to illustrate our approach, the issues addressed are applicable to other chronic psychiatric conditions including comorbid depression and substance use disorder as well as other medical illnesses.

  16. Measurement-Based Care for Refractory Depression: A Clinical Decision Support Model for Clinical Research and Practice

    PubMed Central

    Trivedi, Madhukar H.; Daly, Ella J.

    2009-01-01

    Despite years of antidepressant drug development and patient and provider education, suboptimal medication dosing and duration of exposure resulting in incomplete remission of symptoms remains the norm in the treatment of depression. Additionally, since no one treatment is effective for all patients, optimal implementation focusing on the measurement of symptoms, side effects, and function is essential to determine effective sequential treatment approaches. There is a need for a paradigm shift in how clinical decision making is incorporated into clinical practice and for a move away from the trial-and-error approach that currently determines the “next best” treatment. This paper describes how our experience with the Texas Medication Algorithm Project (TMAP) and the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) trial has confirmed the need for easy-to-use clinical support systems to ensure fidelity to guidelines. To further enhance guideline fidelity, we have developed an electronic decision support system that provides critical feedback and guidance at the point of patient care. We believe that a measurement-based care (MBC) approach is essential to any decision support system, allowing physicians to individualize and adapt decisions about patient care based on symptom progress, tolerability of medication, and dose optimization. We also believe that successful integration of sequential algorithms with MBC into real-world clinics will facilitate change that will endure and improve patient outcomes. Although we use major depression to illustrate our approach, the issues addressed are applicable to other chronic psychiatric conditions including comorbid depression and substance use disorder as well as other medical illnesses. PMID:17320312

  17. Development of a computerized assessment of clinician adherence to a treatment guideline for patients with bipolar disorder.

    PubMed

    Dennehy, Ellen B; Suppes, Trisha; John Rush, A; Lynn Crismon, M; Witte, B; Webster, J

    2004-01-01

    The adoption of treatment guidelines for complex psychiatric illness is increasing. Treatment decisions in psychiatry depend on a number of variables, including severity of symptoms, past treatment history, patient preferences, medication tolerability, and clinical response. While patient outcomes may be improved by the use of treatment guidelines, there is no agreed upon standard by which to assess the degree to which clinician behavior corresponds to those recommendations. This report presents a method to assess clinician adherence to the complex multidimensional treatment guideline for bipolar disorder utilized in the Texas Medication Algorithm Project. The steps involved in the development of this system are presented, including the reliance on standardized documentation, defining core variables of interest, selecting criteria for operationalization of those variables, and computerization of the assessment of adherence. The computerized assessment represents an improvement over other assessment methods, which have relied on laborious and costly chart reviews to extract clinical information and to analyze provider behavior. However, it is limited by the specificity of decisions that guided the adherence scoring process. Preliminary findings using this system with 2035 clinical visits conducted for the bipolar disorder module of TMAP Phase 3 are presented. These data indicate that this system of guideline adherence monitoring is feasible.

  18. Development of the Brief Bipolar Disorder Symptom Scale for patients with bipolar disorder.

    PubMed

    Dennehy, Ellen B; Suppes, Trisha; Crismon, M Lynn; Toprac, Marcia; Carmody, Thomas J; Rush, A John

    2004-06-30

    The Brief Bipolar Disorder Symptom Scale (BDSS) is a 10-item measure of symptom severity that was derived from the 24-item Brief Psychiatric Rating Scale (BPRS24). It was developed for clinical use in settings where systematic evaluation is desired within the constraints of a brief visit. The psychometric properties of the BDSS were evaluated in 409 adult outpatients recruited from 19 clinics within the public mental health system of Texas, as part of the Texas Medication Algorithm Project (TMAP). The selection process for individual items is discussed in detail, and was based on multiple analyses, including principal components analysis with varimax rotation. Selection of the final items considered the statistical strength and factor loading of items within each of those factors as well as the need for comprehensive coverage of critical symptoms of bipolar disorder. The BDSS demonstrated good psychometric properties in this preliminary investigation. It demonstrated a strong association with the BPRS24 and performed similarly to the BPRS24 in its relationship to other symptom measures. The BDSS demonstrated superior sensitivity to symptom change, and an excellent level of agreement for classification of patients as either responders or non-responders with the BPRS24.

  19. Indium tin oxide with zwitterionic interfacial design for biosensing applications in complex matrices

    NASA Astrophysics Data System (ADS)

    Darwish, Nadia T.; Alias, Yatimah; Khor, Sook Mei

    2015-01-01

    Biosensing interfaces consisting of linker molecules (COOH or NH2) and charged, antifouling moieties (-SO3- and -N+(Me)3) for biosensing applications were prepared for the first time by the in situ deposition of mixtures of aryl diazonium cations on indium tin oxide (ITO) electrodes. A linker molecule is required for the attachment of biorecognition molecules (e.g., antibodies, enzymes, DNA chains, and aptamers) close to the transducer surface. The attached molecules improve the biosensing sensitivity and also provide a short response time for analyte detection. Thus, the incorporation of linker and antifouling molecules is an important interfacial design for both affinity and enzymatic biosensors. The reductive adsorption behavior and electrochemical measurements were studied for (1) an individual compound and (2) a mixture of antifouling zwitterionic molecules together with linker molecules [combination 1: 4-sulfophenyl (SP), 4-trimethylammoniophenyl (TMAP), and 1,4-phenylenediamine (PPD); combination 2: 4-sulfophenyl (SP), 4-trimethylammoniophenyl (TMAP), and 4-aminobenzoic acid (PABA)] of aryl diazonium cations grafted onto an ITO electrode. The mixture ratios of SP:TMAP:PPD and SP:TMAP:PABA that provided the greatest resistance to non-specific protein adsorption of bovine serum albumin labeled with fluorescein isothiocyanate (BSA-FITC) and cytochrome c labeled with rhodamine B isothiocyanate (RBITC-Cyt c) were determined by confocal laser scanning microscopy (CLSM). For the surface antifouling study, we used 2-[2-(2-methoxyethoxy)ethoxy]acetic acid (OEG) as a standard control because of its prominent antifouling properties. Surface compositions of combinations 1 and 2 were characterized using X-ray photoelectron spectroscopy (XPS). Field-emission scanning electron microscopy (FE-SEM) was used to characterize the morphology of the grafted films to confirm the even distribution of linker and antifouling molecules grafted onto the ITO surfaces. Combination 1 (SP:TMAP:PPD) with a ratio of 0.5:1.5:0.37 exhibited the best antifouling capability with respect to resisting the nonspecific adsorption of proteins.

  20. TMAP: Tübingen NLTE Model-Atmosphere Package

    NASA Astrophysics Data System (ADS)

    Werner, Klaus; Dreizler, Stefan; Rauch, Thomas

    2012-12-01

    The Tübingen NLTE Model-Atmosphere Package (TMAP) is a tool to calculate stellar atmospheres in spherical or plane-parallel geometry in hydrostatic and radiative equilibrium allowing departures from local thermodynamic equilibrium (LTE) for the population of atomic levels. It is based on the Accelerated Lambda Iteration (ALI) method and is able to account for line blanketing by metals. All elements from hydrogen to nickel may be included in the calculation with model atoms which are tailored for the aims of the user.

  1. Looking for Cancer Clues in Publicly Accessible Databases

    PubMed Central

    Lemkin, Peter F.; Smythers, Gary W.; Munroe, David J.

    2004-01-01

    What started out as a mere attempt to tentatively identify proteins in experimental cancer-related 2D-PAGE maps developed into VIRTUAL2D, a web-accessible repository for theoretical pI/MW charts for 92 organisms. Using publicly available expression data, we developed a collection of tissue-specific plots based on differential gene expression between normal and diseased states. We use this comparative cancer proteomics knowledge base, known as the tissue molecular anatomy project (TMAP), to uncover threads of cancer markers common to several types of cancer and to relate this information to established biological pathways. PMID:18629065

  2. Looking for cancer clues in publicly accessible databases.

    PubMed

    Medjahed, Djamel; Lemkin, Peter F; Smythers, Gary W; Munroe, David J

    2004-01-01

    What started out as a mere attempt to tentatively identify proteins in experimental cancer-related 2D-PAGE maps developed into VIRTUAL2D, a web-accessible repository for theoretical pI/MW charts for 92 organisms. Using publicly available expression data, we developed a collection of tissue-specific plots based on differential gene expression between normal and diseased states. We use this comparative cancer proteomics knowledge base, known as the tissue molecular anatomy project (TMAP), to uncover threads of cancer markers common to several types of cancer and to relate this information to established biological pathways.
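
    A theoretical pI/MW chart of the kind VIRTUAL2D serves can, in essence, be computed from sequence alone: molecular weight from summed residue masses and isoelectric point from the pKa values of the ionizable groups. The sketch below illustrates that computation with Biopython's ProtParam module; it is not the authors' code, and the two sequences are hypothetical placeholders.

    ```python
    # Minimal sketch: theoretical pI/MW points from protein sequences.
    # The sequences below are hypothetical placeholders, not data from TMAP.
    from Bio.SeqUtils.ProtParam import ProteinAnalysis

    proteins = {
        "protA": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",  # hypothetical sequence
        "protB": "MDSKGSSQKGSRLLLLLVVSNLLLCQGVVS",     # hypothetical sequence
    }

    for name, seq in proteins.items():
        pa = ProteinAnalysis(seq)
        # pI from ionizable-group pKa values; MW from summed residue masses.
        print(f"{name}: pI = {pa.isoelectric_point():.2f}, "
              f"MW = {pa.molecular_weight():.1f} Da")
    ```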

  3. Photodynamic Therapy of the Murine LM3 Tumor Using Meso-Tetra (4-N,N,N-Trimethylanilinium) Porphine.

    PubMed

    Colombo, L L; Juarranz, A; Cañete, M; Villanueva, A; Stockert, J C

    2007-12-01

    Photodynamic therapy (PDT) of cancer is based on the cytotoxicity induced by a photosensitizer in the presence of oxygen and visible light, resulting in cell death and tumor regression. This work describes the response of the murine LM3 tumor to PDT using meso-tetra (4-N,N,N-trimethylanilinium) porphine (TMAP). BALB/c mice with intradermal LM3 tumors were subjected to intravenous injection of TMAP (4 mg/kg) followed 24 h later by blue-red light irradiation (λmax: 419, 457, 650 nm) for 60 min (total dose: 290 J/cm(2)) on depilated and glycerol-covered skin over the tumor of anesthetized animals. Control (drug alone, light alone) and PDT treatments (drug + light) were performed once and repeated 48 h later. No significant differences were found between untreated tumors and tumors only treated with TMAP or light. PDT-treated tumors showed almost total but transitory tumor regression (from 3 mm to less than 1 mm) in 8/9 animals, whereas no regression was found in 1/9. PDT response was heterogeneous and each tumor showed different regression and growth delay. The survival of PDT-treated animals was significantly higher than that of TMAP and light controls, showing a lower number of lung metastasis but increased tumor-draining lymph node metastasis. Repeated treatment and reduction of tissue light scattering by glycerol could be useful approaches in studies on PDT of cancer.

  4. Photodynamic Therapy of the Murine LM3 Tumor Using Meso-Tetra (4-N,N,N-Trimethylanilinium) Porphine

    PubMed Central

    Colombo, L. L.; Juarranz, A.; Cañete, M.; Villanueva, A.; Stockert, J. C.

    2007-01-01

    Photodynamic therapy (PDT) of cancer is based on the cytotoxicity induced by a photosensitizer in the presence of oxygen and visible light, resulting in cell death and tumor regression. This work describes the response of the murine LM3 tumor to PDT using meso-tetra (4-N,N,N-trimethylanilinium) porphine (TMAP). BALB/c mice with intradermal LM3 tumors were subjected to intravenous injection of TMAP (4 mg/kg) followed 24 h later by blue-red light irradiation (λmax: 419, 457, 650 nm) for 60 min (total dose: 290 J/cm2) on depilated and glycerol-covered skin over the tumor of anesthetized animals. Control (drug alone, light alone) and PDT treatments (drug + light) were performed once and repeated 48 h later. No significant differences were found between untreated tumors and tumors only treated with TMAP or light. PDT-treated tumors showed almost total but transitory tumor regression (from 3 mm to less than 1 mm) in 8/9 animals, whereas no regression was found in 1/9. PDT response was heterogeneous and each tumor showed different regression and growth delay. The survival of PDT-treated animals was significantly higher than that of TMAP and light controls, showing a lower number of lung metastasis but increased tumor-draining lymph node metastasis. Repeated treatment and reduction of tissue light scattering by glycerol could be useful approaches in studies on PDT of cancer. PMID:23675051

  5. TMAP-7 simulation of D2 thermal release data from Be co-deposited layers

    NASA Astrophysics Data System (ADS)

    Baldwin, M. J.; Schwarz-Selinger, T.; Yu, J. H.; Doerner, R. P.

    2013-07-01

    The efficacy of (1) bake-out at 513 K and 623 K, and (2) thermal transient (10 ms) loading up to 1000 K, is explored for reducing D inventory in 1 μm thick Be-D (D/Be ~0.1) co-deposited layers formed at 323 K for experiment (1) and ~500 K for experiment (2). D release data from co-deposits are obtained by thermal desorption and used to validate a model input into the Tritium Migration & Analysis Program 7 (TMAP). In (1), good agreement with experiment is found for a TMAP model incorporating traps of activation energies 0.80 eV and 0.98 eV, whereas an additional 2 eV trap was required to model experiment (2). Thermal release is found to be trap limited, but simulations are optimal when surface recombination is taken into account. Results suggest that thick built-up co-deposited layers will hinder ITER inventory control, and that bake periods (~1 day) will be more effective in inventory reduction than transient thermal loading.
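
    As background to the trap-limited release described above, the following minimal sketch (not TMAP itself) integrates first-order detrapping with Arrhenius kinetics, dn/dt = -ν·n·exp(-E/kT), over a linear temperature ramp. The trap energies follow the abstract (0.80, 0.98, and 2 eV); the attempt frequency, heating rate, and initial occupancies are illustrative assumptions, and surface recombination is omitted.

    ```python
    # Minimal sketch of trap-limited thermal release (not TMAP itself):
    # first-order detrapping, dn/dt = -nu * n * exp(-E / (kB * T)),
    # integrated over a linear temperature ramp.
    import math

    K_B = 8.617e-5   # Boltzmann constant, eV/K
    NU = 1e13        # attempt frequency, 1/s (assumed)
    BETA = 0.5       # heating rate, K/s (assumed)
    # trap energy (eV) -> initial occupancy (arbitrary units); 0.80 and
    # 0.98 eV follow the abstract, the 2 eV trap is the high-T addition.
    traps = {0.80: 1.0, 0.98: 1.0, 2.0: 0.5}

    T, dt = 300.0, 0.1
    while T < 1100.0:
        for E in traps:
            rate = NU * math.exp(-E / (K_B * T))  # detrapping rate, 1/s
            traps[E] *= math.exp(-rate * dt)      # first-order decay over dt
        T += BETA * dt

    for E, n in traps.items():
        print(f"E = {E:.2f} eV: fraction remaining after ramp = {n:.3f}")
    ```

    With these assumed parameters the 0.80 and 0.98 eV traps empty during the ramp while the 2 eV trap barely releases, which is the qualitative reason a deep trap is needed to reproduce retention after transient loading.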

  6. OTG-snpcaller: An Optimized Pipeline Based on TMAP and GATK for SNP Calling from Ion Torrent Data

    PubMed Central

    Huang, Wenpan; Xi, Feng; Lin, Lin; Zhi, Qihuan; Zhang, Wenwei; Tang, Y. Tom; Geng, Chunyu; Lu, Zhiyuan; Xu, Xun

    2014-01-01

    Because the new Proton platform from Life Technologies produced markedly different data from those of the Illumina platform, the conventional Illumina data analysis pipeline could not be used directly. We developed an optimized SNP calling method using TMAP and GATK (OTG-snpcaller). This method combined our own optimized processes, Remove Duplicates According to AS Tag (RDAST) and Alignment Optimize Structure (AOS), together with TMAP and GATK, to call SNPs from Proton data. We sequenced four sets of exomes captured by Agilent SureSelect and NimbleGen SeqCap EZ Kit, using Life Technologies’ Ion Proton sequencer. Then we applied OTG-snpcaller and compared our results with the results from Torrent Variants Caller. The results indicated that OTG-snpcaller can reduce both false positive and false negative rates. Moreover, we compared our results with Illumina results generated by GATK best practices, and we found that the results of these two platforms were comparable. The good performance in variant calling using GATK best practices can be primarily attributed to the high quality of the Illumina sequences. PMID:24824529
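
    Conceptually, the RDAST step named above removes duplicates by keeping, among reads aligned to the same reference, start position, and strand, the one with the highest AS (alignment score) tag rather than an arbitrary read. The pysam sketch below illustrates that principle only; it is not the authors' implementation, and the file names are placeholders.

    ```python
    # Sketch of "keep the best-scoring duplicate" deduplication (RDAST idea).
    # Illustrative only; file names are placeholders.
    import pysam

    best = {}  # (reference_id, start, strand) -> (score, read)
    with pysam.AlignmentFile("input.bam", "rb") as bam:
        header = bam.header
        for read in bam:
            if read.is_unmapped:
                continue
            key = (read.reference_id, read.reference_start, read.is_reverse)
            score = read.get_tag("AS") if read.has_tag("AS") else float("-inf")
            if key not in best or score > best[key][0]:
                best[key] = (score, read)  # retain the highest-AS alignment

    with pysam.AlignmentFile("dedup.bam", "wb", header=header) as out:
        for _, read in best.values():
            out.write(read)
    ```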

  7. Panchromatic Calibration of Astronomical Observations with State-of-the-Art White Dwarf Model Atmospheres

    NASA Astrophysics Data System (ADS)

    Rauch, T.

    2016-05-01

    Theoretical spectral energy distributions (SEDs) of white dwarfs provide a powerful tool for cross-calibration and sensitivity control of instruments from the far infrared to the X-ray energy range. Such SEDs can be calculated from fully metal-line blanketed NLTE model atmospheres, e.g. those computed by the Tübingen NLTE Model-Atmosphere Package (TMAP), which has arrived at a high level of sophistication. TMAP was successfully employed for the reliable spectral analysis of many hot, compact post-AGB stars. High-quality stellar spectra obtained over a wide energy range establish a database with a large number of spectral lines of many successive ions of different species. Their analysis allows us to determine effective temperatures, surface gravities, and element abundances of individual (pre-)white dwarfs with very small error ranges. We present applications of TMAP SEDs for spectral analyses of hot, compact stars in the parameter range from (pre-)white dwarfs to neutron stars and demonstrate the improvement of flux calibration using white-dwarf SEDs that are available, e.g., via registered services in the Virtual Observatory.

  8. OTG-snpcaller: an optimized pipeline based on TMAP and GATK for SNP calling from ion torrent data.

    PubMed

    Zhu, Pengyuan; He, Lingyu; Li, Yaqiao; Huang, Wenpan; Xi, Feng; Lin, Lin; Zhi, Qihuan; Zhang, Wenwei; Tang, Y Tom; Geng, Chunyu; Lu, Zhiyuan; Xu, Xun

    2014-01-01

    Because the new Proton platform from Life Technologies produced markedly different data from those of the Illumina platform, the conventional Illumina data analysis pipeline could not be used directly. We developed an optimized SNP calling method using TMAP and GATK (OTG-snpcaller). This method combined our own optimized processes, Remove Duplicates According to AS Tag (RDAST) and Alignment Optimize Structure (AOS), together with TMAP and GATK, to call SNPs from Proton data. We sequenced four sets of exomes captured by Agilent SureSelect and NimbleGen SeqCap EZ Kit, using Life Technologies' Ion Proton sequencer. Then we applied OTG-snpcaller and compared our results with the results from Torrent Variants Caller. The results indicated that OTG-snpcaller can reduce both false positive and false negative rates. Moreover, we compared our results with Illumina results generated by GATK best practices, and we found that the results of these two platforms were comparable. The good performance in variant calling using GATK best practices can be primarily attributed to the high quality of the Illumina sequences.

  9. A psychometric evaluation of the clinician-rated Quick Inventory of Depressive Symptomatology (QIDS-C16) in patients with bipolar disorder.

    PubMed

    Bernstein, Ira H; Rush, A John; Suppes, Trisha; Trivedi, Madhukar H; Woo, Ada; Kyutoku, Yasushi; Crismon, M Lynn; Dennehy, Ellen; Carmody, Thomas J

    2009-06-01

    The clinician-rated, 16-item Quick Inventory of Depressive Symptomatology (QIDS-C16) has been extensively evaluated in patients with major depressive disorder (MDD). This report assesses the psychometric properties of the QIDS-C16 in outpatients with bipolar disorder (BD, N = 405) and MDD (N = 547) and in bipolar patients in the depressed phase only (BD-D) (N = 99) enrolled in the Texas Medication Algorithm Project (TMAP), using classical test theory (CTT) and the Samejima graded item response theory (IRT) model. Values of coefficient alpha were very similar in the BD, MDD, and BD-D groups at baseline (alpha = 0.80-0.81) and at exit (alpha = 0.82-0.85). The QIDS-C16 was unidimensional for all three groups. MDD and BD-D patients had comparable symptom levels. The BD-D patients (n = 99) had the most depressive symptoms at baseline, and bipolar patients in the manic phase the fewest. IRT analyses indicated that the QIDS-C16 was most sensitive to the measurement of depression in the average severity range for both MDD and BD-D patients. The QIDS-C16 is suitable for use with patients with BD and can be used as an outcome measure in trials enrolling both BD and MDD patients.
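
    For reference, the coefficient alpha reported above is Cronbach's internal-consistency statistic, alpha = k/(k-1) · (1 - Σ item variances / variance of the summed score). A minimal sketch with a made-up response matrix (not TMAP data):

    ```python
    # Minimal sketch of Cronbach's alpha on a synthetic 16-item scale.
    import numpy as np

    def cronbach_alpha(items):
        """items: 2-D array, rows = respondents, columns = scale items."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)      # per-item variances
        total_var = items.sum(axis=1).var(ddof=1)  # variance of summed score
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(200, 1))                  # shared severity factor
    responses = latent + rng.normal(scale=0.8, size=(200, 16))  # 16 noisy items
    print(f"alpha = {cronbach_alpha(responses):.2f}")   # high: items share a factor
    ```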

  10. Theoretical White Dwarf Spectra on Demand: TheoSSA

    NASA Astrophysics Data System (ADS)

    Ringat, E.; Rauch, T.

    2010-11-01

    In recent decades, much progress has been made in spectral analysis. The quality (e.g. resolution, S/N ratio) of observed spectra has improved greatly, and several model-atmosphere codes have been developed. One of these is the "Tübingen NLTE Model-Atmosphere Package" (TMAP), a highly developed program for the calculation of model atmospheres of hot, compact objects. In the framework of the German Astrophysical Virtual Observatory (GAVO), theoretical spectral energy distributions (SEDs) can be downloaded via TheoSSA. In a pilot phase, TheoSSA is based on TMAP model atmospheres. We present the current state of this VO service.

  11. How to Model Super-Soft X-ray Sources?

    NASA Astrophysics Data System (ADS)

    Rauch, Thomas

    2012-07-01

    During outbursts, the surface temperatures of white dwarfs in cataclysmic variables far exceed half a million Kelvin. In this phase, they may become the brightest super-soft sources (SSS) in the sky. Time series of high-resolution, high-S/N X-ray spectra taken during the rise, maximum, and decline of their X-ray luminosity provide insights into the processes following such outbursts as well as into the surface composition of the white dwarf. Their analysis requires adequate NLTE model atmospheres. The Tübingen Non-LTE Model-Atmosphere Package (TMAP) is a powerful tool for their calculation. We present the application of TMAP models to SSS spectra and discuss their validity.

  12. Non-LTE model atmospheres for supersoft X-ray sources

    NASA Astrophysics Data System (ADS)

    Rauch, T.; Werner, K.

    2010-02-01

    In the last decade, X-ray observations of hot stellar objects became available with unprecedented resolution and S/N ratio. For an adequate interpretation, fully metal-line blanketed Non-LTE model-atmospheres are necessary. The Tübingen Non-LTE Model Atmosphere Package (TMAP) can calculate such model atmospheres at a high level of sophistication. Although TMAP is not especially designed for the calculation of spectral energy distributions (SEDs) at extreme photospheric parameters, it can be employed for the spectral analysis of burst spectra of novae like V4743 Sgr or line identifications in observations of neutron stars with low magnetic fields in low-mass X-ray binaries (LMXBs) like EXO 0748-676.

  13. Local suppression of contact hypersensitivity in mice by a new bifunctional psoralen, 4,4',5'-trimethylazapsoralen, and UVA radiation.

    PubMed

    Aubin, F; Dall'Acqua, F; Kripke, M L

    1991-07-01

    Although psoralens plus UVA radiation (320-400 nm) have been widely used for the treatment of dermatologic diseases, the toxic effects of these agents have led investigators to develop new photochemotherapeutic compounds. One such compound is 4,4',5'-trimethylazapsoralen (TMAP), a new bifunctional molecule. The purpose of this study was to examine the immunologic side effects of repeated treatment of C3H mice with TMAP plus UVA radiation. During this treatment, the number of ATPase+, Ia+, and Thy-1+ dendritic epidermal cells greatly decreased in the treated site, despite the lack of phototoxicity. The reduction in the number of detectable cutaneous immune cells was accompanied by a decrease in the induction of contact hypersensitivity to dinitrofluorobenzene applied to the treated skin, an impairment in the antigen-presenting activity of draining lymph node cells, and the presence of suppressor lymphoid cells in the spleen of unresponsive mice. Treatment with UVA radiation alone also reduced the number of ATPase+, Ia+, and Thy-1+ cells in the skin, but did not cause any detectable alterations in immune function. This implies that morphologic alterations in these cells do not necessarily indicate loss of function. Thus, although TMAP in combination with UVA radiation is not overtly phototoxic, it is highly immunosuppressive in mice.

  14. Predicting the transmembrane secondary structure of ligand-gated ion channels.

    PubMed

    Bertaccini, E; Trudell, J R

    2002-06-01

    Recent mutational analyses of ligand-gated ion channels (LGICs) have demonstrated a plausible site of anesthetic action within their transmembrane domains. Although there is a consensus that the transmembrane domain is formed from four membrane-spanning segments, the secondary structure of these segments is not known. We utilized 10 state-of-the-art bioinformatics techniques to predict the transmembrane topology of the tetrameric regions within six members of the LGIC family that are relevant to anesthetic action. They are the human forms of the GABA alpha 1 receptor, the glycine alpha 1 receptor, the 5HT3 serotonin receptor, the nicotinic AChR alpha 4 and alpha 7 receptors and the Torpedo nAChR alpha 1 receptor. The algorithms utilized were HMMTOP, TMHMM, TMPred, PHDhtm, DAS, TMFinder, SOSUI, TMAP, MEMSAT and TOPPred2. The resulting predictions were superimposed on to a multiple sequence alignment of the six amino acid sequences created using the CLUSTAL W algorithm. There was a clear statistical consensus for the presence of four alpha helices in those regions experimentally thought to span the membrane. The consensus of 10 topology prediction techniques supports the hypothesis that the transmembrane subunits of the LGICs are tetrameric bundles of alpha helices.
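
    The consensus step described above reduces to a per-residue majority vote across aligned prediction strings. A toy sketch follows; the three prediction strings are invented, while real input would come from HMMTOP, TMHMM, and the other predictors, mapped onto the CLUSTAL W alignment columns.

    ```python
    # Sketch of a per-residue majority-vote consensus over topology
    # predictions ("M" = membrane, "-" = non-membrane). Toy strings only.
    from collections import Counter

    predictions = [
        "---MMMMMMMMMMMMMMMM----",
        "--MMMMMMMMMMMMMMMMMM---",
        "---MMMMMMMMMMMMMMM-----",
    ]

    consensus = "".join(
        Counter(column).most_common(1)[0][0]  # most frequent call per column
        for column in zip(*predictions)
    )
    print(consensus)  # residues most predictors place in the membrane
    ```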

  15. Isolation of an oxomanganese(V) porphyrin intermediate in the reaction of a manganese(III) porphyrin complex and H2O2 in aqueous solution.

    PubMed

    Nam, Wonwoo; Kim, Inwoo; Lim, Mi Hee; Choi, Hye Jin; Lee, Je Seung; Jang, Ho G

    2002-05-03

    The reaction of [Mn(TF(4)TMAP)](CF(3)SO(3))(5) (TF(4)TMAP=meso-tetrakis(2,3,5,6-tetrafluoro-N,N,N-trimethyl-4-aniliniumyl)porphinato dianion) with H(2)O(2) (2 equiv) at pH 10.5 and 0 degrees C yielded an oxomanganese(V) porphyrin complex 1 in aqueous solution, whereas an oxomanganese(IV) porphyrin complex 2 was generated in the reactions of tert-alkyl hydroperoxides such as tert-butyl hydroperoxide and 2-methyl-1-phenyl-2-propyl hydroperoxide. Complex 1 was capable of epoxidizing olefins and exchanging its oxygen with H(2) (18)O, whereas 2 did not epoxidize olefins. From the reactions of [Mn(TF(4)TMAP)](5+) with various oxidants in the pH range 3-11, the O-O bond cleavage of hydroperoxides was found to be sensitive to the hydroperoxide substituent and the pH of the reaction solution. Whereas the O-O bond of hydroperoxides containing an electron-donating tert-alkyl group is cleaved homolytically, an electron-withdrawing substituent such as an acyl group in m-chloroperoxybenzoic acid (m-CPBA) facilitates O-O bond heterolysis. The mechanism of the O-O bond cleavage of H(2)O(2) depends on the pH of the reaction solution: O-O bond homolysis prevails at low pH and O-O bond heterolysis becomes a predominant pathway at high pH. The effect of pH on (18)O incorporation from H(2) (18)O into oxygenated products was examined over a wide pH range, by carrying out the epoxidation of carbamazepine (CBZ) with [Mn(TF(4)TMAP)](5+) and KHSO(5) in buffered H(2) (18)O solutions. A high proportion of (18)O was incorporated into the CBZ-10,11-oxide product at all pH values but this proportion was not affected significantly by the pH of the reaction solution.

  16. Metalation of positively charged water soluble mesoporphyrins studied via time-resolved SERRS spectroscopy

    NASA Astrophysics Data System (ADS)

    Procházka, Marek; Hanzliková, Jana; Štěpánek, Josef; Baumruk, Vladimir

    1997-06-01

    Time-resolved SERRS spectra of 5,10,15,20-tetrakis[4-(trimethylammonio)phenyl]-21H,23H-porphine (TMAP) were recorded (using a multichannel Raman spectrometer) in various SERS-active Ag colloid/porphyrin systems. Data treatment based on a factor analysis was used to decompose all the SERRS spectra into two main components: SERRS spectrum of the free base TMAP and that of its Ag metalated form. The metalation kinetics obtained in this way was found to be highly dependent on the presence of phosphate anions, citrate and/or Triton X-100 in the colloidal system. The results are analogous to those previously obtained for 5,10,15,20-tetrakis(1-methyl-4-pyridyl)-21H,23H-porphine, a porphyrin with a substantially stronger tendency towards metalation.

  17. Variation in breast cancer risk associated with factors related to pregnancies according to truncating mutation location, in the French National BRCA1 and BRCA2 mutations carrier cohort (GENEPSO)

    PubMed Central

    2012-01-01

    Introduction: Mutations in BRCA1 and BRCA2 confer a high risk of breast cancer (BC), but the magnitude of this risk seems to vary according to the study and various factors. Although controversial, there are data to support the hypothesis of allelic risk heterogeneity. Methods: We assessed variation in BC risk according to factors related to pregnancies by location of mutation in the homogeneous risk region of BRCA1 and BRCA2 in 990 women in the French study GENEPSO by using a weighted Cox regression model. Results: Our results confirm the existence of the protective effect of an increasing number of full-term pregnancies (FTPs) toward BC among BRCA1 and BRCA2 mutation carriers (≥3 versus 0 FTPs: hazard ratio (HR) = 0.51, 95% confidence interval (CI) = 0.33 to 0.81). Additionally, the HR shows an association between incomplete pregnancies and a higher BC risk, which reached 2.39 (95% CI = 1.28 to 4.45) among women who had at least three incomplete pregnancies when compared with women with zero incomplete pregnancies. This increased risk appeared to be restricted to incomplete pregnancies occurring before the first FTP (HR = 1.77, 95% CI = 1.19 to 2.63). We defined the TMAP score (the Time of Breast Mitotic Activity during Pregnancies) to take into account simultaneously the opposite effects of full-term and interrupted pregnancies. Compared with women with a TMAP score of less than 0.35, an increasing TMAP score was associated with a statistically significant increase in the risk of BC (P trend = 0.02), which reached 1.97 (95% CI = 1.19 to 3.29) for a TMAP score >0.5 (versus TMAP ≤0.35). All these results appeared to be similar in BRCA1 and BRCA2. Nevertheless, our results suggest a variation in BC risk associated with parity according to the location of the mutation in BRCA1. Indeed, parity seems to be associated with a significantly decreased risk of BC only among women with a mutation in the central region of BRCA1 (low-risk region) (≥1 versus 0 FTP: HR = 0.27, 95% CI = 0.13 to 0.55) (Pinteraction < 10(-3)). Conclusions: Our findings show that, taking into account environmental and lifestyle modifiers, mutation position might be important for the clinical management of BRCA1 and BRCA2 mutation carriers and could also be helpful in understanding how BRCA1 and BRCA2 genes are involved in BC. PMID:22762150

  18. Variation in breast cancer risk associated with factors related to pregnancies according to truncating mutation location, in the French National BRCA1 and BRCA2 mutations carrier cohort (GENEPSO).

    PubMed

    Lecarpentier, Julie; Noguès, Catherine; Mouret-Fourme, Emmanuelle; Gauthier-Villars, Marion; Lasset, Christine; Fricker, Jean-Pierre; Caron, Olivier; Stoppa-Lyonnet, Dominique; Berthet, Pascaline; Faivre, Laurence; Bonadona, Valérie; Buecher, Bruno; Coupier, Isabelle; Gladieff, Laurence; Gesta, Paul; Eisinger, François; Frénay, Marc; Luporsi, Elisabeth; Lortholary, Alain; Colas, Chrystelle; Dugast, Catherine; Longy, Michel; Pujol, Pascal; Tinat, Julie; Lidereau, Rosette; Andrieu, Nadine

    2012-07-03

    Mutations in BRCA1 and BRCA2 confer a high risk of breast cancer (BC), but the magnitude of this risk seems to vary according to the study and various factors. Although controversial, there are data to support the hypothesis of allelic risk heterogeneity. We assessed variation in BC risk according to factors related to pregnancies by location of mutation in the homogeneous risk region of BRCA1 and BRCA2 in 990 women in the French study GENEPSO by using a weighted Cox regression model. Our results confirm the existence of the protective effect of an increasing number of full-term pregnancies (FTPs) toward BC among BRCA1 and BRCA2 mutation carriers (≥3 versus 0 FTPs: hazard ratio (HR) = 0.51, 95% confidence interval (CI) = 0.33 to 0.81). Additionally, the HR shows an association between incomplete pregnancies and a higher BC risk, which reached 2.39 (95% CI = 1.28 to 4.45) among women who had at least three incomplete pregnancies when compared with women with zero incomplete pregnancies. This increased risk appeared to be restricted to incomplete pregnancies occurring before the first FTP (HR = 1.77, 95% CI = 1.19 to 2.63). We defined the TMAP score (the Time of Breast Mitotic Activity during Pregnancies) to take into account simultaneously the opposite effects of full-term and interrupted pregnancies. Compared with women with a TMAP score of less than 0.35, an increasing TMAP score was associated with a statistically significant increase in the risk of BC (P trend = 0.02), which reached 1.97 (95% CI = 1.19 to 3.29) for a TMAP score >0.5 (versus TMAP ≤0.35). All these results appeared to be similar in BRCA1 and BRCA2. Nevertheless, our results suggest a variation in BC risk associated with parity according to the location of the mutation in BRCA1. Indeed, parity seems to be associated with a significantly decreased risk of BC only among women with a mutation in the central region of BRCA1 (low-risk region) (≥1 versus 0 FTP: HR = 0.27, 95% CI = 0.13 to 0.55) (Pinteraction < 10(-3)). Our findings show that, taking into account environmental and lifestyle modifiers, mutation position might be important for the clinical management of BRCA1 and BRCA2 mutation carriers and could also be helpful in understanding how BRCA1 and BRCA2 genes are involved in BC.

  19. The evolution of antipsychotic switch and polypharmacy in natural practice--a longitudinal perspective.

    PubMed

    Tsutsumi, Chisa; Uchida, Hiroyuki; Suzuki, Takefumi; Watanabe, Koichiro; Takeuchi, Hiroyoshi; Nakajima, Shinichiro; Kimura, Yoshie; Tsutsumi, Yuichiro; Ishii, Koichi; Imasaka, Yasushi; Kapur, Shitij

    2011-08-01

    Most patients with schizophrenia first start with a single antipsychotic, and yet most finally end up 'switching' or using 'polypharmacy'. The objective of this study was to examine the evolution of antipsychotic switch and polypharmacy in the real-world from a longitudinal perspective. A systematic review of longitudinal antipsychotic prescriptions in 300 patients with schizophrenia (ICD-10) for up to 2 years after their first visit to one of the 4 participating psychiatric clinics in Tokyo, Japan between January, 2007 and June, 2008, was conducted. Reasons for prescription change were also examined. The evolution of switching and polypharmacy was studied, and prescribed doses were compared to suggested dose ranges by the Texas Medication Algorithm Project (TMAP). 208 patients started their antipsychotic treatment with monotherapy. 34.1% of the patients gave up monotherapy with an initial antipsychotic to move to antipsychotic switch (27.4%) and/or polypharmacy (17.8%) within 2 years. The main reason for antipsychotic switch was 'ineffectiveness'; interestingly, this happened despite the fact that the monotherapy dose was below the recommended range in 47.4% of the antipsychotic switch. In a subgroup of 100 patients who started as antipsychotic-free, 2-year prevalence rates of switching and antipsychotic polypharmacy were 27.0% and 18.0%, respectively, and polypharmacy was resorted to after a median of 1 antipsychotic had been tried for 84 days (median). These findings raise a concern that physicians may perform an antipsychotic switch without exploring the entire dose range and resort to antipsychotic polypharmacy without trying an adequate number of antipsychotics.

  20. Anticipated Benefits of Care (ABC): psychometrics and predictive value in psychiatric disorders.

    PubMed

    Warden, D; Trivedi, M H; Carmody, T J; Gollan, J K; Kashner, T M; Lind, L; Crismon, M L; Rush, A J

    2010-06-01

    Attitudes and expectations about treatment have been associated with symptomatic outcomes, adherence and utilization in patients with psychiatric disorders. No measure of patients' anticipated benefits of treatment on domains of everyday functioning has previously been available. The Anticipated Benefits of Care (ABC) is a new, 10-item questionnaire used to measure patient expectations about the impact of treatment on domains of everyday functioning. The ABC was collected at baseline in adult out-patients with major depressive disorder (MDD) (n=528), bipolar disorder (n=395) and schizophrenia (n=447) in the Texas Medication Algorithm Project (TMAP). Psychometric properties of the ABC were assessed, and the association of ABC scores with treatment response at 3 months was evaluated. Evaluation of the ABC's internal consistency yielded Cronbach's alpha of 0.90-0.92 for patients across disorders. Factor analysis showed that the ABC was unidimensional for all patients and for patients with each disorder. For patients with MDD, lower anticipated benefits of treatment were associated with less symptom improvement and lower odds of treatment response [odds ratio (OR) 0.72, 95% confidence interval (CI) 0.57-0.87, p=0.0011]. There was no association between ABC and symptom improvement or treatment response for patients with bipolar disorder or schizophrenia, possibly because these patients had modest benefits with treatment. The ABC is the first self-report that measures patient expectations about the benefits of treatment on everyday functioning, filling an important gap in available assessments of attitudes and expectations about treatment. The ABC is simple, easy to use, and has acceptable psychometric properties for use in research or clinical settings.

  1. MS2 bacteriophage as a delivery vessel of porphyrins for photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Cohen, Brian A.; Kaloyeros, Alain E.; Bergkvist, Magnus

    2011-02-01

    Challenges associated with photodynamic therapy (PDT) include the packaging and site-specific delivery of therapeutic agents to the tissue of interest. Nanoscale encapsulation of PDT agents inside targeted virus capsids is a novel concept for packaging and site-specific targeting. The icosahedral MS2 bacteriophage is one potential candidate for such a packaging system. MS2 has a porous capsid with an exterior diameter of ~28 nm where the pores allow small molecules access to the capsid interior. Furthermore, MS2 presents suitable residues on the exterior capsid for conjugation of targeting ligands. Initial work by the present investigators has successfully demonstrated RNA-based self-packaging of a heterocyclic PDT agent (meso-tetrakis(para-N-trimethylanilinium)porphine, TMAP) into the MS2 capsid. Packaging photoactive compounds in confined spaces could result in energy transfer between the molecules upon photoactivation, which could in turn reduce the production of radical oxygen species (ROS). ROS are key components in photodynamic therapy, and a reduced production could negatively impact the efficacy of PDT treatment. Here, findings are presented from an investigation of ROS generation of TMAP encapsulated within the MS2 capsid compared to free TMAP in solution. Monitoring of ROS production upon photoactivation via a specific singlet oxygen assay revealed the impact on ROS generation between packaged porphyrins as compared to free porphyrin in an aqueous solution. Follow-on work will study the ability of MS2-packaged porphyrins to generate ROS in vitro and subsequent cytotoxic effects on cells in culture.

  2. Positive selection and propeptide repeats promote rapid interspecific divergence of a gastropod sperm protein.

    PubMed

    Hellberg, M E; Moy, G W; Vacquier, V D

    2000-03-01

    Male-specific proteins have increasingly been reported as targets of positive selection and are of special interest because of the role they may play in the evolution of reproductive isolation. We report the rapid interspecific divergence of cDNA encoding a major acrosomal protein of unknown function (TMAP) of sperm from five species of teguline gastropods. A mitochondrial DNA clock (calibrated by congeneric species divided by the Isthmus of Panama) estimates that these five species diverged 2-10 MYA. Inferred amino acid sequences reveal a propeptide that has diverged rapidly between species. The mature protein has diverged faster still due to high nonsynonymous substitution rates (> 25 nonsynonymous substitutions per site per 10(9) years). cDNA encoding the mature protein (89-100 residues) shows evidence of positive selection (Dn/Ds > 1) for 4 of 10 pairwise species comparisons. cDNA and predicted secondary-structure comparisons suggest that TMAP is neither orthologous nor paralogous to abalone lysin, and thus marks a second, phylogenetically independent, protein subject to strong positive selection in free-spawning marine gastropods. In addition, an internal repeat in one species (Tegula aureotincta) produces a duplicated cleavage site which results in two alternatively processed mature proteins differing by nine amino acid residues. Such alternative processing may provide a mechanism for introducing novel amino acid sequence variation at the amino-termini of proteins. Highly divergent TMAP N-termini from two other tegulines (Tegula regina and Norrisia norrisii) may have originated by such a mechanism.

  3. Experimental study and modelling of deuterium thermal release from Be-D co-deposited layers

    NASA Astrophysics Data System (ADS)

    Baldwin, M. J.; Schwarz-Selinger, T.; Doerner, R. P.

    2014-07-01

    A study of the thermal desorption of deuterium from 1 µm thick co-deposited Be-(0.1)D layers formed at 330 K by a magnetron sputtering technique is reported. A range of thermal desorption rates 0 ⩽ β ⩽ 1.0 K s⁻¹ are explored with a view to studying the effectiveness of the proposed ITER wall and divertor bake procedure (β = 0 K s⁻¹) to be carried out at 513 and 623 K. Fixed temperature bake durations up to 24 h are examined. The experimental thermal release data are used to validate a model input into the Tritium Migration and Analysis Program (TMAP-7). Good agreement with experiment is observed for a TMAP-7 model incorporating trap populations of activation energies for D release of 0.80 and 0.98 eV, and a dynamically computed surface D atomic to molecular recombination rate.

  4. Building-up a database of spectro-photometric standards from the UV to the NIR

    NASA Astrophysics Data System (ADS)

    Vernet, J.; Kerber, F.; Mainieri, V.; Rauch, T.; Saitta, F.; D'Odorico, S.; Bohlin, R.; Ivanov, V.; Lidman, C.; Mason, E.; Smette, A.; Walsh, J.; Fosbury, R.; Goldoni, P.; Groot, P.; Hammer, F.; Kaper, L.; Horrobin, M.; Kjaergaard-Rasmussen, P.; Royer, F.

    2010-11-01

    We present results of a project aimed at establishing a set of 12 spectro-photometric standards over a wide wavelength range from 320 to 2500 nm. Currently no such set of standard stars covering the near-IR is available. Our strategy is to extend the useful range of existing well-established optical flux standards (Oke 1990, Hamuy et al. 1992, 1994) into the near-IR by means of integral field spectroscopy with SINFONI at the VLT combined with state-of-the-art white dwarf stellar atmospheric models (TMAP, Holberg et al. 2008). As a solid reference, we use two primary HST standard white dwarfs GD71 and GD153 and one HST secondary standard BD+17 4708. The data were collected through an ESO “Observatory Programme” over ~40 nights between February 2007 and September 2008.

  5. Attitudes Toward Medications and the Relationship to Outcomes in Patients with Schizophrenia.

    PubMed

    Campbell, Angela H; Scalo, Julieta F; Crismon, M Lynn; Barner, Jamie C; Argo, Tami R; Lawson, Kenneth A; Miller, Alexander

    The determinants of attitudes toward medication (ATM) are not well elucidated. In particular, literature remains equivocal regarding the influence of cognition, adverse events, and psychiatric symptomatology. This study evaluated relationships between those outcomes in schizophrenia and ATM. This is a retrospective analysis of data collected during the Texas Medication Algorithm Project (TMAP, n=307 with schizophrenia-related diagnoses), in outpatient clinics at baseline and every 3 months for ≥1 year (for cognition: 3rd and 9th month only). The Drug Attitude Inventory (DAI-30) measured ATM, and independent variables were: cognition (Trail Making Test [TMT], Verbal Fluency Test, Hopkins Verbal Learning Test), adverse events (Systematic Assessment for Treatment-Emergent Adverse Events, Barnes Akathisia Rating Scale), psychiatric symptomatology (Brief Psychiatric Rating Scale, Scale for Assessment of Negative Symptoms [SANS]), and medication adherence (Medication Compliance Scale). Analyses included binary logistic regression (cognition, psychiatric symptoms) and chi-square (adverse events, adherence) for baseline comparisons, and linear regression (cognition) or ANOVA (adverse events, adherence) for changes over time. Mean DAI-30 scores did not change over 12 months. Odds of positive ATM increased with higher TMT Part B scores (p=0.03) and lower SANS scores (p=0.02). Worsening of general psychopathology (p<0.001), positive symptoms (p<0.001), and negative symptoms (p=0.007) correlated with negative changes in DAI-30 scores. Relationships between cognition, negative symptoms, and ATM warrant further investigation. Studies evaluating therapies for cognitive deficits and negative symptoms should consider including ATM measures as endpoints. Patterns and inconsistencies in findings across studies raise questions about whether some factors thought to influence ATM have nonlinear relationships.

  6. Clinical vs. Self-report Versions of the Quick Inventory of Depressive Symptomatology in a Public Sector Sample

    PubMed Central

    Bernstein, Ira H.; Rush, A. John; Carmody, Thomas J.; Woo, Ada; Trivedi, Madhukar H.

    2007-01-01

    Objectives: Recent work using classical test theory (CTT) and item response theory (IRT) has found that the self-report (QIDS-SR16) and clinician-rated (QIDS-C16) versions of the 16-item Quick Inventory of Depressive Symptomatology were generally comparable in outpatients with nonpsychotic major depressive disorder (MDD). This report extends this comparison to a less well-educated, more treatment-resistant sample that included more ethnic/racial minorities using IRT and selected classical test analyses. Methods: The QIDS-SR16 and QIDS-C16 were obtained in a sample of 441 outpatients with nonpsychotic MDD seen in the public sector in the Texas Medication Algorithm Project (TMAP). The Samejima graded response IRT model was used to compare the QIDS-SR16 and QIDS-C16. Results: The nine symptom domains in the QIDS-SR16 and QIDS-C16 related well to overall depression. The slopes of the item response functions (a), which index the strength of relationship between overall depression and each symptom, were extremely similar with the two measures. Likewise, the CTT and IRT indices of symptom frequency (item means and locations of the item response functions, bi) were also similar with these two measures. For example, sad mood and difficulty with concentration/decision making were highly related to the overall depression severity with both the QIDS-C16 and QIDS-SR16. Likewise, sleeping difficulties were commonly reported, even though they were not as strongly related to overall magnitude of depression. Conclusion: In this less educated, socially disadvantaged sample, differences between the QIDS-C16 and QIDS-SR16 were minor. The QIDS-SR16 is a satisfactory substitute for the more time-consuming QIDS-C16 in a broad range of adult, nonpsychotic, depressed outpatients. PMID:16716351

  7. Clinical vs. self-report versions of the quick inventory of depressive symptomatology in a public sector sample.

    PubMed

    Bernstein, Ira H; Rush, A John; Carmody, Thomas J; Woo, Ada; Trivedi, Madhukar H

    2007-01-01

    Recent work using classical test theory (CTT) and item response theory (IRT) has found that the self-report (QIDS-SR(16)) and clinician-rated (QIDS-C(16)) versions of the 16-item quick inventory of depressive symptomatology were generally comparable in outpatients with nonpsychotic major depressive disorder (MDD). This report extends this comparison to a less well-educated, more treatment-resistant sample that included more ethnic/racial minorities using IRT and selected classical test analyses. The QIDS-SR(16) and QIDS-C(16) were obtained in a sample of 441 outpatients with nonpsychotic MDD seen in the public sector in the Texas Medication Algorithm Project (TMAP). The Samejima graded response IRT model was used to compare the QIDS-SR(16) and QIDS-C(16). The nine symptom domains in the QIDS-SR(16) and QIDS-C(16) related well to overall depression. The slopes of the item response functions, a, which index the strength of relationship between overall depression and each symptom, were extremely similar with the two measures. Likewise, the CTT and IRT indices of symptom frequency (item means and locations of the item response functions, b(i)) were also similar with these two measures. For example, sad mood and difficulty with concentration/decision making were highly related to the overall depression severity with both the QIDS-C(16) and QIDS-SR(16). Likewise, sleeping difficulties were commonly reported, even though they were not as strongly related to overall magnitude of depression. In this less educated, socially disadvantaged sample, differences between the QIDS-C(16) and QIDS-SR(16) were minor. The QIDS-SR(16) is a satisfactory substitute for the more time-consuming QIDS-C(16) in a broad range of adult, nonpsychotic, depressed outpatients.
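
    For readers unfamiliar with the Samejima graded response model used in both reports: each item's boundary curves are logistic in the latent trait θ, P(X ≥ k) = 1/(1 + exp(-a(θ - b_k))), and category probabilities are differences of adjacent boundary curves. A minimal sketch with illustrative parameters (not values estimated from the TMAP sample):

    ```python
    # Minimal sketch of the Samejima graded response model (GRM).
    # Parameters are illustrative, not estimates from the TMAP data.
    import math

    def grm_category_probs(theta, a, bs):
        """Return P(X = 0..K) for one item; bs are ordered boundary locations."""
        star = [1.0] + [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in bs] + [0.0]
        return [star[k] - star[k + 1] for k in range(len(bs) + 1)]

    # An item with discrimination a = 1.8 and three boundaries (four categories).
    for theta in (-2.0, 0.0, 2.0):
        probs = grm_category_probs(theta, a=1.8, bs=[-1.0, 0.0, 1.2])
        print(theta, [f"{p:.2f}" for p in probs])
    ```

    The discrimination a corresponds to the slope parameter discussed in the abstracts, and the boundary locations b_k correspond to the symptom-frequency indices.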

  8. First result of deuterium retention in neutron-irradiated tungsten exposed to high flux plasma in TPE

    NASA Astrophysics Data System (ADS)

    Shimada, Masashi; Hatano, Y.; Calderoni, P.; Oda, T.; Oya, Y.; Sokolov, M.; Zhang, K.; Cao, G.; Kolasinski, R.; Sharpe, J. P.

    2011-08-01

    Within the Japan-US joint research project Tritium, Irradiations, and Thermofluids for America and Nippon (TITAN), an initial set of tungsten samples (99.99% purity, A.L.M.T. Co.) was irradiated by high-flux neutrons at 323 K to 0.025 dpa in the High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory (ORNL). Subsequently, one of the neutron-irradiated tungsten samples was exposed to a high-flux deuterium plasma (ion flux: 5 × 10²¹ m⁻² s⁻¹, ion fluence: 4 × 10²⁵ m⁻²) in the Tritium Plasma Experiment (TPE) at Idaho National Laboratory (INL). The deuterium retention in the neutron-irradiated tungsten was 40% higher in comparison to the unirradiated tungsten. The observed broad desorption spectrum from neutron-irradiated tungsten and associated TMAP modeling of the deuterium release suggest that trapping occurs in the bulk material at more than three different energy sites.

  9. Thermal release of D2 from new Be-D co-deposits on previously baked co-deposits

    NASA Astrophysics Data System (ADS)

    Baldwin, M. J.; Doerner, R. P.

    2015-12-01

    Past experiments and modeling with the TMAP code in [1, 2] indicated that, as a Be-D co-deposited layer grows in thickness, retained D is desorbed less efficiently (in terms of time) by a fixed low-temperature bake. In ITER, beryllium-rich co-deposited layers will grow in thickness over the life of the machine. Compared with the analyses in [1, 2], however, ITER presents a slightly different bake-efficiency problem because of instances of prior tritium recovery/control baking. More relevant to ITER is the thermal release from a new and saturated co-deposit layer in contact with a thickness of previously baked, less-saturated co-deposit. Experiments that examine the desorption of saturated co-deposited over-layers in contact with previously baked under-layers are reported, and comparison is made to layers of the same combined thickness. Deposition temperatures of ~323 K and ~373 K are explored. It is found that an instance of prior bake leads to a subtle effect on the under-layer. The effect causes the thermal desorption of the new saturated over-layer to deviate from the prediction of the validated TMAP model in [2]. Instead of the D thermal release reflecting the combined thickness and levels of D saturation in the over- and under-layers, experiment differs in that (i) the desorption is a fractional superposition of desorption from the saturated over-layer with (ii) that of the combined over- and under-layer thickness. The result is not easily modeled by TMAP without the incorporation of a thin BeO inter-layer, which is confirmed experimentally on baked Be-D co-deposits using X-ray micro-analysis.

  10. Effect of Cavity Size of Mesoporous Silica on Short DNA Duplex Stability.

    PubMed

    Masuda, Tsubasa; Shibuya, Yuuta; Arai, Shota; Kobayashi, Sayaka; Suzuki, Sotaro; Kijima, Jun; Itoh, Tetsuji; Sato, Yusuke; Nishizawa, Seiichi; Yamaguchi, Akira

    2018-05-15

    We studied the stabilities of short (4- and 3-bp) DNA duplexes within silica mesopores modified with a positively charged trimethyl aminopropyl (TMAP) monolayer (BJH pore diameter 1.6-7.4 nm). The DNA fragments with fluorescent dye were introduced into the pores, and their fluorescence resonance energy transfer (FRET) response was measured to estimate the structuring energies of the short DNA duplexes under cryogenic conditions (temperature 233-323 K). The results confirmed the enthalpic stability gain of the duplex within size-matched pores (1.6 and 2.3 nm). The hybridization equilibrium constants found for the size-matched pores were 2 orders of magnitude larger than those for large pores (≥3.5 nm), and this size-matching effect for the enhanced duplex stability was explained by a tight electrostatic interaction between the duplex and the surface TMAP groups. These results indicate the requirement of the precise regulation of mesopore size to ensure the stabilization of hydrogen-bonded supramolecular assemblies.
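
    The temperature dependence of the hybridization equilibrium constants reported here is the standard route to the enthalpic stability gain: by the van 't Hoff relation, ln K = -ΔH/(RT) + ΔS/R, ΔH follows from the slope of ln K versus 1/T. A minimal sketch with synthetic K(T) values (the assumed ΔH and ΔS are placeholders, not the paper's data):

    ```python
    # Minimal van 't Hoff sketch: recover dH and dS from synthetic K(T).
    import numpy as np

    R = 8.314                                    # gas constant, J/(mol K)
    T = np.array([243.0, 263.0, 283.0, 303.0])   # temperatures, K
    dH_true, dS_true = -120e3, -350.0            # assumed J/mol, J/(mol K)
    K = np.exp(-dH_true / (R * T) + dS_true / R) # synthetic equilibrium constants

    # Linear fit of ln K against 1/T; slope = -dH/R, intercept = dS/R.
    slope, intercept = np.polyfit(1.0 / T, np.log(K), 1)
    print(f"dH = {-R * slope / 1e3:.1f} kJ/mol, dS = {R * intercept:.1f} J/(mol K)")
    ```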

  11. Uncertainties in (E)UV model atmosphere fluxes

    NASA Astrophysics Data System (ADS)

    Rauch, T.

    2008-04-01

    Context: During the comparison of synthetic spectra calculated with two NLTE model atmosphere codes, namely TMAP and TLUSTY, we encounter systematic differences in the EUV fluxes due to the treatment of level dissolution by pressure ionization. Aims: In the case of Sirius B, we demonstrate an uncertainty in modeling the EUV flux reliably in order to challenge theoreticians to improve the theory of level dissolution. Methods: We calculated synthetic spectra for hot, compact stars using state-of-the-art NLTE model-atmosphere techniques. Results: Systematic differences may occur due to a code-specific cutoff frequency of the H I Lyman bound-free opacity. This is the case for TMAP and TLUSTY. Both codes predict the same flux level at wavelengths shorter than about 1500 Å for stars with effective temperatures (T_eff) below about 30 000 K only if the same cutoff frequency is chosen. Conclusions: The theory of level dissolution in high-density plasmas, which is available for hydrogen only, should be generalized to all species. In particular, the cutoff frequencies for the bound-free opacities should be defined in order to make predictions of UV fluxes more reliable.
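
    To make the cutoff issue concrete, the toy sketch below uses the textbook hydrogenic bound-free cross-section, σ(ν) ≈ σ₀(ν₀/ν)³ above the Lyman edge (Gaunt factor ignored), and compares single-slab transmission when the opacity is, or is not, extrapolated below the edge to a lower cutoff frequency, as level dissolution would require. All numbers are illustrative, and this is not how TMAP or TLUSTY implement the opacity.

    ```python
    # Toy illustration: how the choice of a bound-free cutoff frequency
    # below the H I Lyman edge changes the transmitted EUV flux.
    import numpy as np

    NU_EDGE = 3.288e15   # H I Lyman edge frequency, Hz
    SIGMA_0 = 6.30e-18   # ground-state cross-section at the edge, cm^2

    def sigma_bf(nu, nu_cut):
        """Kramers-like bf cross-section, extrapolated down to nu_cut."""
        return np.where(nu >= nu_cut, SIGMA_0 * (NU_EDGE / nu) ** 3, 0.0)

    nu = np.linspace(0.7 * NU_EDGE, 1.5 * NU_EDGE, 5)
    column = 1e17        # H I column density, cm^-2 (assumed)
    for cut in (NU_EDGE, 0.8 * NU_EDGE):          # two cutoff choices
        transmission = np.exp(-column * sigma_bf(nu, cut))
        print(f"cutoff = {cut / NU_EDGE:.1f} * edge:", np.round(transmission, 3))
    ```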

  12. Transmasseteric anterior parotid approach for condylar fractures: experience of 129 cases.

    PubMed

    Narayanan, Vinod; Ramadorai, Ashok; Ravi, Poornima; Nirvikalpa, Natarajan

    2012-07-01

    We have evaluated the transmasseteric anterior parotid (TMAP) approach in the treatment of 163 condylar fractures in 129 patients. Ninety-five patients presented with unilateral, and 34 with bilateral, fractures. The inclusion criteria were patient's choice for open reduction and internal fixation, displaced unilateral condylar fractures with occlusal derangement, and displaced bilateral condylar fractures with anterior open bite. Mean (SD) maximum interincisal opening after 3 months was 44 (5) mm. There were no differences in lateral movements during the reviews 6 weeks and 3 months postoperatively. Protrusive movement at the end of 3 months was 7 (2) mm. All patients achieved functional occlusion identical to the pretraumatic occlusion and good reduction of the condyles. No patient developed temporary or permanent facial palsy, sialocele, salivary fistula, or Frey syndrome. The mean (SD) operating time was 46 (11) min. The TMAP approach avoids the complications of incision of the parotid gland, minimises the risk of facial nerve palsy, and offers excellent access to the fractured condyle.

  13. An improved affine projection algorithm for active noise cancellation

    NASA Astrophysics Data System (ADS)

    Zhang, Congyan; Wang, Mingjiang; Han, Yufei; Sun, Yunzhuo

    2017-08-01

    The affine projection algorithm is a signal-reuse algorithm with a good convergence rate compared to other traditional adaptive filtering algorithms. Two factors affect its performance: the step-size factor and the projection order. In this paper, we propose a new variable step size affine projection algorithm (VSS-APA). It dynamically changes the step size according to certain rules, so that it achieves a smaller steady-state error and a faster convergence speed. Simulation results show that its performance is superior to that of the traditional affine projection algorithm and that, in active noise control (ANC) applications, the new algorithm obtains very good results.
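    A minimal NumPy sketch of an affine projection update with a variable step size is given below. The paper's specific step-size rule is not reproduced; the error-energy schedule used here, and the names vss_apa, mu_max, and c, are illustrative assumptions.

```python
import numpy as np

def vss_apa(x, d, L=32, P=4, mu_max=1.0, delta=1e-4, c=1e-2):
    """Affine projection adaptive filter with a variable step size (sketch).

    x, d : 1-D numpy arrays (input and desired signals); L filter taps,
    P projection order.  The step-size schedule below is a generic
    error-energy rule, not necessarily the one proposed in the paper.
    """
    w = np.zeros(L)
    y = np.zeros(len(x))
    for n in range(L + P, len(x)):
        # X holds the last P regressor vectors as columns (newest first)
        X = np.column_stack(
            [x[n - p - L + 1:n - p + 1][::-1] for p in range(P)])
        dk = d[n - P + 1:n + 1][::-1]           # matching desired samples
        e = dk - X.T @ w                        # a-priori error vector
        mu = mu_max * (e @ e) / (e @ e + c)     # smaller steps as error decays
        w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(P), e)
        y[n] = w @ x[n - L + 1:n + 1][::-1]     # filter output after update
    return w, y
```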

  14. Azapsoralens: new potential photochemotherapeutic agents for psoriasis.

    PubMed

    Vedaldi, D; Caffieri, S; Miolo, G; Dall'Acqua, F; Baccichetti, F; Guiotto, A; Benetollo, F; Bombieri, G; Recchia, G; Cristofolini, M

    1991-12-01

    New bioisosteres of psoralen, obtained by replacing carbon 8 of the central benzene ring with a nitrogen, were studied from the photochemical, photobiological and phototherapeutic points of view. In particular, the 4,4'- and 4',5'-dimethyl, 4,4',5'-trimethyl and 3,4,4',5'-tetramethyl azapsoralens were studied. The crystal and molecular structure of 4,4',5'-trimethylazapsoralen, obtained by X-ray diffraction, was also reported. Like psoralen, these compounds form a molecular complex with DNA, undergoing intercalation inside the double helix of the macromolecule. When irradiated with long-wavelength ultraviolet light (365 nm), the intercalated drug photoconjugates covalently to the macromolecule, forming mono- and diadducts. The photobinding rates show the following order: 4,4',5'-trimethylazapsoralen (4,4',5'-TMAP) = 3,4,4',5'-tetramethylazapsoralen (3,4,4',5'-TMAP) greater than 4',5'-dimethylazapsoralen (4',5'-DMAP) = 4,4'-dimethylazapsoralen (4,4'-DMAP). The DNA photobinding rate of 8-methoxypsoralen (8-MOP), taken as the reference compound, is similar to that of the two dimethylazapsoralens but lower than that of the tri- and tetramethyl derivatives. The ability of azapsoralens to form cross-links in DNA is lower than that of 8-MOP. However, the capacity to induce cross-links does not parallel the DNA photobinding rate; it is higher for the trimethyl derivative and lower for tetramethylazapsoralen. Azapsoralens show evident antiproliferative activity. The trimethyl derivative is the most active, followed by the tetramethyl derivative, both compounds showing activity slightly higher than that of 8-MOP. The two dimethyl derivatives are less active. The mutagenic activity of azapsoralens on E. coli WP2 TM6 is lower than that of 8-MOP under the same conditions. The new compounds do not show any phototoxicity on guinea pig skin. On the basis of its DNA photobinding, antiproliferative activity, mutagenicity and lack of skin phototoxicity, 4,4',5'-TMAP was chosen for clinical evaluation. Clinical results obtained by topical treatment of psoriatic plaques reveal evident therapeutic effectiveness, with clearing between good and moderate, although 8-MOP, used as the reference compound, is more effective.

  15. Gender differences of the morphology of the distal femur and proximal tibia in a Korean population.

    PubMed

    Lim, Hong-Chul; Bae, Ji-Hoon; Yoon, Ji-Yeol; Kim, Seung-Ju; Kim, Jae-Gyoon; Lee, Jae-Moon

    2013-01-01

    We conducted this study to determine whether the sizes of the distal femur and proximal tibia in Korean men and women differ, and to assess the suitability of the prosthesis sizes currently used in Korea. We performed morphological analysis of the proximal tibia and distal femur in 115 patients (56 male, 59 female) using MRI to investigate gender differences. Tibial mediolateral dimension (tML), tibial medial anteroposterior dimension (tMAP), tibial lateral anteroposterior dimension (tLAP), femoral mediolateral dimension (fML), femoral medial anteroposterior dimension (fMAP), and femoral lateral anteroposterior dimension (fLAP) were measured. The ratio of tMAP and tLAP to tML (plateau aspect ratio, tAP/tML×100%), and that of fMAP and fLAP to fML (condylar aspect ratio, fAP/fML×100%) were calculated. The measurements were compared with the corresponding dimensions of four total knee implants currently in use. The tML and tAP lengths showed a significant gender difference (P<0.05). The plateau aspect ratio (tMAP/tML) revealed a significant difference between males (0.74±0.05) and females (0.68±0.04, P<0.05). For the morphotype of the distal femur, males were found to have significantly larger values (P<0.05) in all parameters except fLAP. With regard to the ratio of ML width to AP length, the women showed a narrower ML width than the men. Both genders were distributed within the range of dimensions of the currently used prostheses. In this Korean population, women had smaller dimensions than their male counterparts. In both genders, relatively small prosthesis sizes matched the distal femur and proximal tibia better among the implants currently used in Korea. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. Improvement of the cost-benefit analysis algorithm for high-rise construction projects

    NASA Astrophysics Data System (ADS)

    Gafurov, Andrey; Skotarenko, Oksana; Plotnikov, Vladimir

    2018-03-01

    The specific nature of high-rise investment projects, which entail long-term construction, high risks, etc., implies a need to improve the standard algorithm of cost-benefit analysis. An improved algorithm is described in the article. For development of the improved cost-benefit analysis algorithm for high-rise construction projects, the following methods were used: weighted average cost of capital, dynamic cost-benefit analysis of investment projects, risk mapping, scenario analysis, sensitivity analysis of critical ratios, etc. This comprehensive approach helped to adapt the original algorithm to feasibility objectives in high-rise construction. The authors assembled the cost-benefit analysis algorithm for high-rise construction projects on the basis of risk mapping and sensitivity analysis of critical ratios. The suggested project risk management algorithms greatly expand the standard algorithm of cost-benefit analysis in investment projects, namely: the "Project analysis scenario" flowchart, which improves the quality and reliability of forecasting reports in investment projects; the main stages of cash-flow adjustment based on risk mapping, for better cost-benefit project analysis given the broad range of risks in high-rise construction; and analysis of dynamic cost-benefit values considering project sensitivity to crucial variables, which improves flexibility in the implementation of high-rise projects.

  17. Reduced projection angles for binary tomography with particle aggregation.

    PubMed

    Al-Rifaie, Mohammad Majid; Blackwell, Tim

    This paper extends the particle aggregate reconstruction technique (PART), a reconstruction algorithm for binary tomography based on the movement of particles. PART supposes that pixel values are particles, and that particles diffuse through the image, staying together in regions of uniform pixel value known as aggregates. In this work, a variation of this algorithm is proposed, with a focus on reducing the number of projections and on whether this impacts the reconstruction of images. The algorithm is tested on three phantoms of varying sizes and numbers of forward projections, and compared to filtered back projection, a random search algorithm, and SART, a standard algebraic reconstruction method. It is shown that the proposed algorithm outperforms the aforementioned algorithms for small numbers of projections. This potentially makes the algorithm attractive in scenarios where collecting less projection data is unavoidable.

  18. Modeling Issues and Results for Hydrogen Isotopes in NIF Materials

    NASA Astrophysics Data System (ADS)

    Grossman, Arthur A.; Doerner, R. P.; Luckhardt, S. C.; Seraydarian, R.; Sze, D.; Burnham, A.

    1998-11-01

    The TMAP4 (G. Longhurst, et al. INEL 1992) model of hydrogen isotope transport in solid materials includes a particle diffusion calculation with Fick's law modified for the Soret effect (thermal diffusion, or thermomigration), coupled to heat transport calculations, which are needed because of the strong temperature dependence of the diffusivity. These TMAP4 calculations applied to NIF show that high temperatures approaching the melting point, and strong thermal gradients of 10^6 K/cm, are reached in the first micron of wall material during the SXR pulse. These strong thermal gradients can drive hydrogen isotope migration up or down the thermal gradient depending on the sign of the heat of transport (Soret coefficient), which depends on whether the material dissolves hydrogen endothermically or exothermically. Two candidates for the NIF wall material, boron carbide and stainless steel, are compared. Boron carbide dissolves hydrogen exothermically, so it may drive Soret migration down the thermal gradient, deeper into the material, although the thermal gradient is not as large and hydrogen is not as mobile as in stainless steel. Stainless steel dissolves hydrogen endothermically, with a negative Soret coefficient that can drive hydrogen up the thermal gradient and out of the wall.
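    As a rough illustration of the transport model described above (not of TMAP4 itself), the sketch below advances a 1-D concentration profile by one explicit finite-difference step of Fick's law with a Soret term; the function name and the zero-concentration boundary treatment are assumptions.

```python
import numpy as np

def soret_diffusion_step(c, T, D, Qstar, dx, dt, R=8.314):
    """One explicit step of diffusion with a Soret (thermomigration) term.

    Flux model:  J = -D * (dc/dx + c * Qstar / (R * T**2) * dT/dx).
    A negative heat of transport Qstar (J/mol) drives atoms up the thermal
    gradient, as described for stainless steel in the abstract.
    """
    dcdx = np.gradient(c, dx)
    dTdx = np.gradient(T, dx)
    J = -D * (dcdx + c * Qstar / (R * T**2) * dTdx)
    c_new = c - dt * np.gradient(J, dx)   # continuity: dc/dt = -dJ/dx
    c_new[[0, -1]] = 0.0                  # illustrative fixed surface condition
    return c_new
```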

  19. Deuterium transport in Cu, CuCrZr, and Cu/Be

    NASA Astrophysics Data System (ADS)

    Anderl, R. A.; Hankins, M. R.; Longhurst, G. R.; Pawelko, R. J.

    This paper presents the results of deuterium implantation/permeation experiments and TMAP4 simulations for a CuCrZr alloy, for OFHC-Cu, and for a Cu/Be bi-layered structure at temperatures from 700 to 800 K. Experiments used a mass-analyzed, 3-keV D3+ ion beam with particle flux densities of 5 × 10^19 to 7 × 10^19 D/m^2·s. Effective diffusivities and surface molecular recombination coefficients were derived, giving Arrhenius pre-exponentials and activation energies for each material: CuCrZr alloy, (2.0 × 10^-2 m^2/s, 1.2 eV) for diffusivity and (2.9 × 10^-14 m^4/s, 1.92 eV) for the surface molecular recombination coefficient; OFHC Cu, (2.1 × 10^-6 m^2/s, 0.52 eV) for diffusivity and (9.1 × 10^-18 m^4/s, 0.99 eV) for the surface molecular recombination coefficient. TMAP4 simulation of permeation data measured for a Cu/Be bi-layer sample was achieved using a four-layer structure (Cu/BeO interface/Be/BeO back surface) and recommended values for diffusivity and solubility in Be, BeO, and Cu.
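    The fitted transport parameters above are Arrhenius expressions. A minimal sketch of evaluating them at a given temperature follows, using the OFHC-Cu values quoted in the abstract; the helper name is illustrative.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius(pre_exponential, e_act_eV, T):
    """Evaluate pre * exp(-E_a / (k_B * T)) for activation energies in eV."""
    return pre_exponential * np.exp(-e_act_eV / (K_B * np.asarray(T)))

# OFHC-Cu values from the abstract, evaluated at 750 K:
D_cu = arrhenius(2.1e-6, 0.52, 750.0)     # effective diffusivity, m^2/s
kr_cu = arrhenius(9.1e-18, 0.99, 750.0)   # recombination coefficient, m^4/s
```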

  20. A segmentation algorithm based on image projection for complex text layout

    NASA Astrophysics Data System (ADS)

    Zhu, Wangsheng; Chen, Qin; Wei, Chuanyi; Li, Ziyang

    2017-10-01

    Segmentation is an important part of layout analysis. Considering the efficiency advantage of the top-down approach and the particularities of the target documents, a projection-based layout segmentation algorithm is proposed. The algorithm first partitions the text image into several columns; then, by scanning and projecting each column, the text image is divided into several sub-regions through multiple projections. The experimental results show that this method inherits the fast computation of projection methods while avoiding the effect of arc-shaped image distortion on page segmentation, and that it can accurately segment text images with complex layouts.
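    A minimal sketch of the top-down projection idea described above: split on the vertical projection first (columns), then on each column's horizontal projection (lines). The binarization convention (ink = 1) and the function names are assumptions.

```python
import numpy as np

def runs(mask):
    """Index ranges [start, end) where a 1-D boolean mask is True."""
    m = np.r_[False, mask, False].astype(np.int8)
    d = np.diff(m)
    return list(zip(np.flatnonzero(d == 1), np.flatnonzero(d == -1)))

def segment_layout(binary, col_thresh=0, row_thresh=0):
    """Top-down projection segmentation: binarized page (ink = 1) is split
    into columns via the vertical projection, then each column is split
    into line regions via its horizontal projection."""
    regions = []
    for c0, c1 in runs(binary.sum(axis=0) > col_thresh):       # columns
        col = binary[:, c0:c1]
        for r0, r1 in runs(col.sum(axis=1) > row_thresh):      # lines
            regions.append((r0, r1, c0, c1))
    return regions
```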

  1. Simultaneous and semi-alternating projection algorithms for solving split equality problems.

    PubMed

    Dong, Qiao-Li; Jiang, Dan

    2018-01-01

    In this article, we first introduce two simultaneous projection algorithms for solving the split equality problem by using a new choice of the stepsize, and then propose two semi-alternating projection algorithms. The weak convergence of the proposed algorithms is analyzed under standard conditions. As applications, we extend the results to solve the split feasibility problem. Finally, a numerical example is presented to illustrate the efficiency and advantage of the proposed algorithms.
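    The sketch below shows a simultaneous projection iteration for the split equality problem (find x in C and y in Q with Ax = By) with a standard self-adaptive stepsize; the paper's new stepsize choice and its convergence analysis are not reproduced, and proj_C/proj_Q are caller-supplied projection operators.

```python
import numpy as np

def simultaneous_projection(A, B, proj_C, proj_Q, x, y, iters=500, tol=1e-12):
    """Simultaneous projection sketch for the split equality problem."""
    for _ in range(iters):
        w = A @ x - B @ y                   # residual of the coupling equation
        g_x, g_y = A.T @ w, -B.T @ w        # gradients of 0.5*||Ax - By||^2
        denom = g_x @ g_x + g_y @ g_y
        if denom < tol:
            break
        gamma = (w @ w) / denom             # one standard self-adaptive stepsize
        x = proj_C(x - gamma * g_x)         # project onto C
        y = proj_Q(y - gamma * g_y)         # project onto Q
    return x, y

# Example projectors: C = nonnegative orthant, Q = a box [0, 1]^m.
# x, y = simultaneous_projection(A, B, lambda v: np.maximum(v, 0),
#                                lambda v: np.clip(v, 0, 1), x0, y0)
```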

  2. Structure elucidation of metabolite x17299 by interpretation of mass spectrometric data.

    PubMed

    Zhang, Qibo; Ford, Lisa A; Evans, Anne M; Toal, Douglas R

    2017-01-01

    A major bottleneck in metabolomic studies is metabolite identification from accurate mass spectrometric data. Metabolite x17299 was identified in plasma as an unknown in a metabolomic study using a compound-centric approach in which the associated ion features of the compound were used to determine the true molecular mass. The aim of this work is to elucidate the chemical structure of x17299, a new compound, by de novo interpretation of mass spectrometric data. An Orbitrap Elite mass spectrometer was used for acquisition of mass spectra up to MS^4 at high resolution. Synthetic standards of N,N,N-trimethyl-L-alanyl-L-proline betaine (L,L-TMAP), a diastereomer, and an enantiomer were chemically prepared. The planar structure of x17299 was successfully proposed by de novo mechanistic interpretation of mass spectrometric data without any laborious purification or nuclear magnetic resonance spectroscopic analysis. The proposed structure was verified by deuterium-exchange mass spectrometric analysis and confirmed by comparison to a synthetic standard. The relative configuration of x17299 was determined by direct chromatographic comparison to a pair of synthetic diastereomers. The absolute configuration was assigned after derivatization of x17299 with a chiral auxiliary group, followed by chromatographic comparison to a pair of synthetic standards. The chemical structure of metabolite x17299 was determined to be L,L-TMAP.

  3. Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction

    PubMed Central

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the preconditioned alternating projection algorithm. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. PMID:23271835

  4. [Orthogonal Vector Projection Algorithm for Spectral Unmixing].

    PubMed

    Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li

    2015-12-01

    Spectral unmixing is an important part of hyperspectral technologies, and it is essential for material quantity analysis in hyperspectral imagery. Most linear unmixing algorithms require computations of matrix multiplication and matrix inversion or determinants. These are difficult to program and especially hard to realize in hardware, and the computational costs of these algorithms increase significantly as the number of endmembers grows. Here, based on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed using the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion. It first computes the final orthogonal vector via the Gram-Schmidt process for each endmember spectrum. These orthogonal vectors are then used as projection vectors for the pixel signature. The unconstrained abundance can be obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected vector length to the orthogonal vector length. Compared to the Orthogonal Subspace Projection and Least Squares Error algorithms, this method does not need matrix inversion, which is computationally costly and hard to implement in hardware. It completes the orthogonalization process by repeated vector operations, making it easy to apply in both parallel computation and hardware. The reasonableness of the algorithm is proved by its relationship to the Orthogonal Subspace Projection and Least Squares Error algorithms, and its computational complexity is the lowest of the three. Finally, experimental results on synthetic and real images are provided, giving further evidence of the effectiveness of the method.
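    A compact sketch of the projection-ratio idea described above, using a QR factorization in place of the paper's repeated Gram-Schmidt vector operations; the names are illustrative.

```python
import numpy as np

def ovp_unmix(M, r):
    """Orthogonal vector projection unmixing (sketch).

    M : (bands, p) endmember matrix; r : (bands,) pixel signature.
    For each endmember m_j, build a vector v_j orthogonal to all other
    endmembers; the unconstrained abundance is the ratio of the pixel's
    projection onto v_j to the endmember's own projection onto v_j."""
    bands, p = M.shape
    abundances = np.empty(p)
    for j in range(p):
        others = np.delete(M, j, axis=1)
        Q, _ = np.linalg.qr(others)             # orthonormal basis of the others
        v = M[:, j] - Q @ (Q.T @ M[:, j])       # component orthogonal to them
        abundances[j] = (r @ v) / (M[:, j] @ v) # ratio of projections
    return abundances
```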

  5. Project resource reallocation algorithm

    NASA Technical Reports Server (NTRS)

    Myers, J. E.

    1981-01-01

    A methodology for adjusting baseline cost estimates according to project schedule changes is described. An algorithm which performs a linear expansion or contraction of the baseline project resource distribution in proportion to the project schedule expansion or contraction is presented. Input to the algorithm consists of the deck of cards (PACE input data) prepared for the baseline project schedule as well as a specification of the nature of the baseline schedule change. Output of the algorithm is a new deck of cards with all work breakdown structure block and element of cost estimates redistributed for the new project schedule. This new deck can be processed through PACE to produce a detailed cost estimate for the new schedule.
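    The core rescaling step, stripped of the PACE card-deck bookkeeping, can be illustrated as follows: the baseline per-period resource distribution is stretched or compressed to the new schedule length while conserving the total estimate. This is an illustrative sketch of the idea, not the PACE implementation.

```python
import numpy as np

def rescale_distribution(baseline, new_len):
    """Linearly expand or contract a per-period resource distribution
    to a new schedule length, conserving the total amount."""
    old_len = len(baseline)
    # cumulative cost as a function of fractional schedule completion
    frac_old = np.linspace(0.0, 1.0, old_len + 1)
    cum_old = np.r_[0.0, np.cumsum(baseline)]
    frac_new = np.linspace(0.0, 1.0, new_len + 1)
    cum_new = np.interp(frac_new, frac_old, cum_old)
    return np.diff(cum_new)    # per-period amounts on the new schedule
```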

  6. A fast method to emulate an iterative POCS image reconstruction algorithm.

    PubMed

    Zeng, Gengsheng L

    2017-10-01

    Iterative image reconstruction algorithms are commonly used to optimize an objective function, especially when the objective function is nonquadratic. Generally speaking, iterative algorithms are computationally inefficient. This paper presents a fast algorithm that has one backprojection and no forward projection. The paper derives a new method to solve an optimization problem in which a nonquadratic constraint, for example an edge-preserving denoising constraint, is implemented as a nonlinear filter. The algorithm is derived based on the POCS (projection onto convex sets) approach. A windowed FBP (filtered backprojection) algorithm enforces the data fidelity. An iterative procedure, divided into segments, enforces edge-enhancing denoising, with each segment performing nonlinear filtering. The derived iterative algorithm is computationally efficient: it contains only one backprojection and no forward projection. Low-dose CT data are used for algorithm feasibility studies. The nonlinearity is implemented as an edge-enhancing noise-smoothing filter. The patient study results demonstrate its effectiveness in processing low-dose x-ray CT data. This fast algorithm can be used to replace many iterative algorithms. © 2017 American Association of Physicists in Medicine.

  7. Increasing Prediction the Original Final Year Project of Student Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Saragih, Rijois Iboy Erwin; Turnip, Mardi; Sitanggang, Delima; Aritonang, Mendarissan; Harianja, Eva

    2018-04-01

    The final year project is very important for the graduation of a student. Unfortunately, many students are not serious about their final projects, and many ask someone else to do the work for them. In this paper, an application of genetic algorithms to predict whether the final year project of a student is original is proposed. In the simulation, data on final projects from the last 5 years were collected. The genetic algorithm has several operators, namely population, selection, crossover, and mutation. The results suggest that the genetic algorithm predicts better than other comparable models. Experimental results showed a prediction accuracy of 70%, more accurate than previous research.
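    The abstract names the standard operators but not the encoding or fitness used, so the sketch below is a generic bit-string genetic algorithm with exactly those operators; the fitness function, population size, and rates are placeholders supplied by the caller.

```python
import numpy as np

rng = np.random.default_rng(0)

def genetic_algorithm(fitness, n_bits, pop_size=50, gens=100,
                      p_cx=0.8, p_mut=0.01):
    """Generic GA with population, selection, crossover, and mutation."""
    pop = rng.integers(0, 2, (pop_size, n_bits))
    for _ in range(gens):
        fit = np.array([fitness(ind) for ind in pop])
        # tournament selection: each slot is the fitter of two random picks
        i, j = rng.integers(0, pop_size, (2, pop_size))
        parents = pop[np.where(fit[i] > fit[j], i, j)]
        # one-point crossover on consecutive pairs
        children = parents.copy()
        for k in range(0, pop_size - 1, 2):
            if rng.random() < p_cx:
                cut = rng.integers(1, n_bits)
                children[k, cut:], children[k + 1, cut:] = (
                    parents[k + 1, cut:].copy(), parents[k, cut:].copy())
        # bit-flip mutation
        flips = rng.random(children.shape) < p_mut
        pop = np.where(flips, 1 - children, children)
    fit = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(fit)]
```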

  8. On randomized algorithms for numerical solution of applied Fredholm integral equations of the second kind

    NASA Astrophysics Data System (ADS)

    Voytishek, Anton V.; Shipilov, Nikolay M.

    2017-11-01

    In this paper, a systematization of numerical (computer-implemented) randomized functional algorithms for approximating the solution of a Fredholm integral equation of the second kind is carried out. Three types of such algorithms are distinguished: projection, mesh, and projection-mesh methods. The possibilities for using these algorithms to solve practically important problems are investigated in detail. The disadvantages of the mesh algorithms, related to the necessity of calculating the values of the kernels of the integral equations at fixed points, are identified. In practice, these kernels have integrable singularities, and calculation of their values is impossible. Thus, for applied problems involving Fredholm integral equations of the second kind, it is expedient to use not the mesh algorithms but the projection and projection-mesh randomized algorithms.

  9. Use of the Hotelling observer to optimize image reconstruction in digital breast tomosynthesis

    PubMed Central

    Sánchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan

    2015-01-01

    We propose an implementation of the Hotelling observer that can be applied to the optimization of linear image reconstruction algorithms in digital breast tomosynthesis. The method is based on considering information within a specific region of interest, and it is applied to the optimization of algorithms for detectability of microcalcifications. Several linear algorithms are considered: simple back-projection, filtered back-projection, back-projection filtration, and Λ-tomography. The optimized algorithms are then evaluated through the reconstruction of phantom data. The method appears robust across algorithms and parameters and leads to the generation of algorithm implementations which subjectively appear optimized for the task of interest. PMID:26702408

  10. Objective evaluation of linear and nonlinear tomosynthetic reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Webber, Richard L.; Hemler, Paul F.; Lavery, John E.

    2000-04-01

    This investigation objectively tests five different tomosynthetic reconstruction methods involving three different digital sensors, each used in a different radiologic application: chest, breast, and pelvis, respectively. The common task was to simulate a specific representative projection for each application by summation of appropriately shifted tomosynthetically generated slices produced by the five algorithms. These algorithms were, respectively, (1) conventional back projection, (2) iteratively deconvoluted back projection, (3) a nonlinear algorithm similar to back projection, except that the minimum value from all of the component projections for each pixel is computed instead of the average value, (4) a similar algorithm wherein the maximum value is computed instead of the minimum value, and (5) the same type of algorithm except that the median value is computed. Using these five algorithms, we obtained data from each sensor-tissue combination, yielding three factorially distributed series of contiguous tomosynthetic slices. The respective slice stacks then were aligned orthogonally and averaged to yield an approximation of a single orthogonal projection radiograph of the complete (unsliced) tissue thickness. Resulting images were histogram equalized, and actual projection control images were subtracted from their tomosynthetically synthesized counterparts. Standard deviations of the resulting histograms were recorded as inverse figures of merit (FOMs). Visual rankings of image differences by five human observers of a subset (breast data only) also were performed to determine whether their subjective observations correlated with the homologous FOMs. Nonparametric statistical analysis of these data demonstrated significant differences (P < 0.05) between reconstruction algorithms. The nonlinear minimization reconstruction method nearly always outperformed the other methods tested. Observer rankings were similar to those measured objectively.
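    The five methods differ mainly in how the shifted slice stack is collapsed. A minimal sketch of that shift-and-combine step follows; the shift model (integer pixel shifts along one axis) is a simplifying assumption.

```python
import numpy as np

def tomosynthesize(projections, shifts, reducer=np.mean):
    """Shift-and-combine tomosynthesis sketch.  Each projection is shifted
    to bring one plane into registration, and the stack is collapsed with
    `reducer`: np.mean gives conventional back projection, while np.min,
    np.max, and np.median give the nonlinear variants compared above."""
    stack = [np.roll(p, s, axis=1) for p, s in zip(projections, shifts)]
    return reducer(np.stack(stack), axis=0)
```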

  11. Fast image matching algorithm based on projection characteristics

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

    Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, this algorithm converts the two-dimensional information of the image into one-dimensional information, and then matches and identifies through one-dimensional correlation. Moreover, because normalization is applied, matching remains correct when the image brightness or signal amplitude increases proportionally. Experimental results show that the proposed projection-based image registration method greatly improves matching speed while preserving matching accuracy.
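    A brute-force sketch of the idea follows: candidate windows are compared through their normalized row and column projections rather than as full 2-D patches. The scoring by summed 1-D correlations is an assumption, and a practical implementation would precompute the window sums.

```python
import numpy as np

def _norm(v):
    """Zero-mean, unit-length profile: invariant to proportional brightness."""
    v = v - v.mean()
    n = np.linalg.norm(v)
    return v / n if n else v

def projection_match(image, template):
    """Template matching on 1-D projection profiles instead of 2-D windows."""
    th, tw = template.shape
    t_row, t_col = _norm(template.sum(1)), _norm(template.sum(0))
    best, best_pos = -np.inf, None
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            win = image[i:i + th, j:j + tw]
            score = t_row @ _norm(win.sum(1)) + t_col @ _norm(win.sum(0))
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best
```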

  12. Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction

    NASA Astrophysics Data System (ADS)

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-11-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the PAPA. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality.

  13. Projection pursuit water quality evaluation model based on chicken swarm algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Zhe

    2018-03-01

    In view of the uncertainty and ambiguity of each index in water quality evaluation, and in order to resolve the incompatibility of evaluation results across individual water quality indexes, a projection pursuit model based on the chicken swarm algorithm is proposed. A projection index function that reflects the water quality condition is constructed; the chicken swarm algorithm (CSA) is introduced to optimize the projection index function and seek its best projection direction, and the best projection value is obtained to realize the water quality evaluation. A comparison between this method and other methods shows that it is reasonable and feasible, and that it can provide a decision-making basis for water pollution control in the basin.

  14. The Inventory of Depressive Symptomatology, Clinician Rating (IDS-C) and Self-Report (IDS-SR), and the Quick Inventory of Depressive Symptomatology, Clinician Rating (QIDS-C) and Self-Report (QIDS-SR) in public sector patients with mood disorders: a psychometric evaluation.

    PubMed

    Trivedi, M H; Rush, A J; Ibrahim, H M; Carmody, T J; Biggs, M M; Suppes, T; Crismon, M L; Shores-Wilson, K; Toprac, M G; Dennehy, E B; Witte, B; Kashner, T M

    2004-01-01

    The present study provides additional data on the psychometric properties of the 30-item Inventory of Depressive Symptomatology (IDS) and of the recently developed Quick Inventory of Depressive Symptomatology (QIDS), a brief 16-item symptom severity rating scale that was derived from the longer form. Both the IDS and QIDS are available in matched clinician-rated (IDS-C30; QIDS-C16) and self-report (IDS-SR30; QIDS-SR16) formats. The patient samples included 544 out-patients with major depressive disorder (MDD) and 402 out-patients with bipolar disorder (BD) drawn from 19 regionally and ethnically diverse clinics as part of the Texas Medication Algorithm Project (TMAP). Psychometric analyses, including sensitivity to change with treatment, were conducted. Internal consistencies (Cronbach's alpha) ranged from 0.81 to 0.94 for all four scales (QIDS-C16, QIDS-SR16, IDS-C30 and IDS-SR30) in both MDD and BD patients. Sad mood, involvement, energy, concentration and self-outlook had the highest item-total correlations among patients with MDD and BD across all four scales. QIDS-SR16 and IDS-SR30 total scores were highly correlated among patients with MDD at exit (c = 0.83). QIDS-C16 and IDS-C30 total scores were also highly correlated among patients with MDD (c = 0.82) and patients with BD (c = 0.81). The IDS-SR30, IDS-C30, QIDS-SR16, and QIDS-C16 were equivalently sensitive to symptom change, indicating high concurrent validity for all four scales. High concurrent validity was also documented based on the SF-12 Mental Health Summary score for the population divided into quintiles based on their IDS or QIDS score. The QIDS-SR16 and QIDS-C16, as well as the longer 30-item versions, have highly acceptable psychometric properties and are treatment-sensitive measures of symptom severity in depression.

  15. A low complexity reweighted proportionate affine projection algorithm with memory and row action projection

    NASA Astrophysics Data System (ADS)

    Liu, Jianming; Grant, Steven L.; Benesty, Jacob

    2015-12-01

    A new reweighted proportionate affine projection algorithm (RPAPA) with memory and row action projection (MRAP) is proposed in this paper. The reweighted PAPA is derived from a family of sparseness measures, which demonstrate performance similar to mu-law and l0-norm PAPA but with lower computational complexity. The sparseness of the channel is taken into account to improve the performance for dispersive system identification. Meanwhile, the memory of the filter's coefficients is combined with row action projections (RAP) to significantly reduce computational complexity. Simulation results demonstrate that the proposed RPAPA MRAP algorithm outperforms both the affine projection algorithm (APA) and PAPA, and has performance similar to l0 PAPA and mu-law PAPA in terms of convergence speed and tracking ability. Meanwhile, the proposed RPAPA MRAP has much lower computational complexity than PAPA, mu-law PAPA, l0 PAPA, etc., which makes it very appealing for real-time implementation.

  16. X-ray Modeling of Classical Novae

    NASA Astrophysics Data System (ADS)

    Nemeth, Peter

    2010-01-01

    It has been observed and theoretically supported in the last decade that the peak of the spectral energy distribution of classical novae gradually shifts to higher energies at constant bolometric luminosity after a nova event. For this reason, comprehensive evolutionary studies require spectral analysis in multiple spectral bands. After a nova explosion, the white dwarf can maintain stable surface hydrogen burning, the duration of which strongly correlates with the white dwarf mass. During this stage the peak of the luminosity is in the soft X-ray band (15 - 60 Angstroms). By extending the modeling range of TLUSTY/SYNSPEC, I analyse the luminosity and abundance evolution of classical novae. Model atoms required for this work were built using atomic data from NIST/ASD and TOPBASE. The accurate but incomplete set of energy levels and radiative transitions in NIST were completed with calculated data from TOPBASE. Synthetic spectra were then compared to observed data to derive stellar parameters. I show the capabilities and validity of this project on the example of V4743 Sgr. This nova was observed with both Chandra and XMM-Newton observatories and has already been modeled by several scientific groups (PHOENIX, TMAP).

  17. Software for project-based learning of robot motion planning

    NASA Astrophysics Data System (ADS)

    Moll, Mark; Bordeaux, Janice; Kavraki, Lydia E.

    2013-12-01

    Motion planning is a core problem in robotics concerned with finding feasible paths for a given robot. Motion planning algorithms perform a search in the high-dimensional continuous space of robot configurations and exemplify many of the core algorithmic concepts of search algorithms and associated data structures. Motion planning algorithms can be explained in a simplified two-dimensional setting, but this masks many of the subtleties and complexities of the underlying problem. We have developed software for project-based learning of motion planning that enables deep learning. The projects that we have developed allow advanced undergraduate students and graduate students to reflect on the performance of existing textbook algorithms and their own variations on such algorithms. Formative assessment has been conducted at three institutions. The core of the software used for this teaching module is also used within the Robot Operating System, a platform widely adopted by the robotics research community. This allows for transfer of knowledge and skills to robotics research projects involving a large variety of robot hardware platforms.

  18. Stellar atmosphere modeling of extremely hot, compact stars

    NASA Astrophysics Data System (ADS)

    Rauch, Thomas; Ringat, Ellen; Werner, Klaus

    Present X-ray missions like Chandra and XMM-Newton provide excellent spectra of extremely hot white dwarfs, e.g. burst spectra of novae. Their analysis requires adequate NLTE model atmospheres. The Tuebingen Non-LTE Model-Atmosphere Package (TMAP) can calculate such model atmospheres and spectral energy distributions at a high level of sophistication. We present a new grid of models that is calculated in the parameter range of novae and supersoft X-ray sources and show examples of their application.

  19. Spectral Analysis within the Virtual Observatory: The GAVO Service TheoSSA

    NASA Astrophysics Data System (ADS)

    Ringat, E.

    2012-03-01

    In the last decade, numerous Virtual Observatory organizations were established. One of these is the German Astrophysical Virtual Observatory (GAVO) that e.g. provides access to spectral energy distributions via the service TheoSSA. In a pilot phase, these are based on the Tübingen NLTE Model-Atmosphere Package (TMAP) and suitable for hot, compact stars. We demonstrate the power of TheoSSA in an application to the sdOB primary of AA Doradus by comparison with a “classical” spectral analysis.

  20. UV spectroscopy including ISM line absorption of the exciting star of Abell 35

    NASA Astrophysics Data System (ADS)

    Ziegler, M.; Rauch, T.; Werner, K.; Kruk, J. W.

    Reliable spectral analysis based on high-resolution UV observations requires adequate, simultaneous modeling of the interstellar line absorption and reddening. In the case of the central star of the planetary nebula Abell 35, BD-22 3467, we demonstrate our current standard spectral-analysis method, which is based on the Tübingen NLTE Model-Atmosphere Package (TMAP). We present an ongoing spectral analysis of FUSE and HST/STIS observations of BD-22 3467.

  1. Novel particle tracking algorithm based on the Random Sample Consensus Model for the Active Target Time Projection Chamber (AT-TPC)

    NASA Astrophysics Data System (ADS)

    Ayyad, Yassid; Mittig, Wolfgang; Bazin, Daniel; Beceiro-Novo, Saul; Cortesi, Marco

    2018-02-01

    The three-dimensional reconstruction of particle tracks in a time projection chamber is a challenging task that requires advanced classification and fitting algorithms. In this work, we have developed and implemented a novel algorithm based on the Random Sample Consensus Model (RANSAC). The RANSAC is used to classify tracks including pile-up, to remove uncorrelated noise hits, as well as to reconstruct the vertex of the reaction. The algorithm, developed within the Active Target Time Projection Chamber (AT-TPC) framework, was tested and validated by analyzing the 4He+4He reaction. Results, performance and quality of the proposed algorithm are presented and discussed in detail.
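    A stripped-down sketch of the RANSAC step described above, fitting a single 3-D line to a cloud of hit points. Two-point sampling and a fixed distance tolerance are the usual choices; the pile-up classification, track merging, and vertex reconstruction of the AT-TPC code are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def ransac_line_3d(points, n_iters=1000, tol=2.0):
    """Fit a 3-D line with RANSAC: repeatedly propose a line through two
    random points and keep the candidate with the largest consensus set;
    uncorrelated noise hits fall outside every consensus set."""
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p0, p1 = points[rng.choice(len(points), 2, replace=False)]
        d = p1 - p0
        length = np.linalg.norm(d)
        if length == 0:
            continue
        d = d / length
        # distance of every point to the candidate line through p0 along d
        diff = points - p0
        dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```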

  2. Development of PET projection data correction algorithm

    NASA Astrophysics Data System (ADS)

    Bazhanov, P. V.; Kotina, E. D.

    2017-12-01

    Positron emission tomography is a modern nuclear medicine method used to examine metabolism and the function of internal organs, allowing diseases to be diagnosed at an early stage. Mathematical algorithms are widely used not only for image reconstruction but also for PET data correction. In this paper, implementations of random-coincidence and scatter correction algorithms are considered, as well as an algorithm for modeling PET projection data acquisition in order to verify the corrections.

  3. SU-E-T-33: A Feasibility-Seeking Algorithm Applied to Planning of Intensity Modulated Proton Therapy: A Proof of Principle Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Penfold, S; Casiraghi, M; Dou, T

    2015-06-15

    Purpose: To investigate the applicability of feasibility-seeking cyclic orthogonal projections to the field of intensity modulated proton therapy (IMPT) inverse planning. Feasibility of constraints only, as opposed to optimization of a merit function, is less demanding algorithmically and holds a promise of parallel computation capability with non-cyclic orthogonal projection algorithms such as string-averaging or block-iterative strategies. Methods: A virtual 2D geometry was designed containing a C-shaped planning target volume (PTV) surrounding an organ at risk (OAR). The geometry was pixelized into 1 mm pixels. Four beams containing a subset of proton pencil beams were simulated in Geant4 to provide the system matrix A whose elements a_ij correspond to the dose delivered to pixel i by a unit-intensity pencil beam j. A cyclic orthogonal projections algorithm was applied with the goal of finding a pencil beam intensity distribution that would meet the following dose requirements: D-OAR < 54 Gy and 57 Gy < D-PTV < 64.2 Gy. The cyclic algorithm was based on the concept of orthogonal projections onto half-spaces according to the Agmon-Motzkin-Schoenberg algorithm, also known as 'ART for inequalities'. Results: The cyclic orthogonal projections algorithm resulted in less than 5% of the PTV pixels and less than 1% of the OAR pixels violating their dose constraints, respectively. Because of the abutting OAR-PTV geometry and the realistic modelling of the pencil beam penumbra, complete satisfaction of the dose objectives was not achieved, although this would be a clinically acceptable plan for a meningioma abutting the brainstem, for example. Conclusion: The cyclic orthogonal projections algorithm was demonstrated to be an effective tool for inverse IMPT planning in the 2D test geometry described. We plan to further develop this linear algorithm to be capable of incorporating dose-volume constraints into the feasibility-seeking algorithm.
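    A minimal sketch of the Agmon-Motzkin-Schoenberg scheme named above, cycling through upper and lower dose constraints as half-space projections. The split into A_lo/A_hi, the relaxation parameter, and the nonnegativity handling are illustrative assumptions.

```python
import numpy as np

def amss_feasibility(A_lo, b_lo, A_hi, b_hi, x0, sweeps=50, relax=1.0):
    """'ART for inequalities': cyclically project onto each violated
    half-space.  Constraints are A_lo x >= b_lo (lower dose bounds) and
    A_hi x <= b_hi (upper dose bounds); pencil-beam intensities are kept
    nonnegative after every sweep."""
    x = x0.copy()
    for _ in range(sweeps):
        for a, b in zip(A_hi, b_hi):            # upper bounds: a.x <= b
            viol = a @ x - b
            if viol > 0:
                x -= relax * viol / (a @ a) * a # orthogonal projection step
        for a, b in zip(A_lo, b_lo):            # lower bounds: a.x >= b
            viol = b - a @ x
            if viol > 0:
                x += relax * viol / (a @ a) * a
        np.maximum(x, 0.0, out=x)               # intensities stay nonnegative
    return x
```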

  4. A Methodology for Projecting U.S.-Flag Commercial Tanker Capacity

    DTIC Science & Technology

    1986-03-01

    total crude supply for the total US is less than the sum of the total crude supplies of the PADDs. The algorithm generating the output shown in tables... other PADDs. Accordingly, projected receipts for PADD V are zero, and in conjunction with the values for the variables that previously were... SHIPMENTS ALGORITHM: This section presents the mathematics of the algorithm that generates the shipments projections for each PADD. The notation

  5. Derivative Free Gradient Projection Algorithms for Rotation

    ERIC Educational Resources Information Center

    Jennrich, Robert I.

    2004-01-01

    A simple modification substantially simplifies the use of the gradient projection (GP) rotation algorithms of Jennrich (2001, 2002). These algorithms require subroutines to compute the value and gradient of any specific rotation criterion of interest. The gradient can be difficult to derive and program. It is shown that using numerical gradients…
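    The substitution the abstract describes, replacing the analytic gradient of a rotation criterion with a numerical one, can be sketched as a central-difference routine over the matrix argument; the names are illustrative.

```python
import numpy as np

def numerical_gradient(f, L, h=1e-6):
    """Central-difference gradient of a scalar rotation criterion f at the
    matrix L; this is the substitution that makes gradient projection
    rotation algorithms derivative free."""
    G = np.zeros_like(L, dtype=float)
    for idx in np.ndindex(L.shape):
        E = np.zeros_like(L, dtype=float)
        E[idx] = h
        G[idx] = (f(L + E) - f(L - E)) / (2 * h)
    return G
```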

  6. Access Restoration Project Task 1.2 Report 2 (of 2) Algorithms for Debris Volume and Water Depth Computation : Appendix A

    DOT National Transportation Integrated Search

    0000-01-01

    In the Access Restoration Project Task 1.2 Report 1, the algorithms for detecting roadway debris piles and flooded areas were described in detail. Those algorithms take CRS data as input and automatically detect the roadway obstructions. Although the ...

  7. Dual signal subspace projection (DSSP): a novel algorithm for removing large interference in biomagnetic measurements

    NASA Astrophysics Data System (ADS)

    Sekihara, Kensuke; Kawabata, Yuya; Ushio, Shuta; Sumiya, Satoshi; Kawabata, Shigenori; Adachi, Yoshiaki; Nagarajan, Srikantan S.

    2016-06-01

    Objective. In functional electrophysiological imaging, signals are often contaminated by interference that can be of considerable magnitude compared to the signals of interest. This paper proposes a novel algorithm for removing such interferences that does not require separate noise measurements. Approach. The algorithm is based on a dual definition of the signal subspace in the spatial- and time-domains. Since the algorithm makes use of this duality, it is named the dual signal subspace projection (DSSP). The DSSP algorithm first projects the columns of the measured data matrix onto the inside and outside of the spatial-domain signal subspace, creating a set of two preprocessed data matrices. The intersection of the row spans of these two matrices is estimated as the time-domain interference subspace. The original data matrix is projected onto the subspace that is orthogonal to this interference subspace. Main results. The DSSP algorithm is validated by using the computer simulation, and using two sets of real biomagnetic data: spinal cord evoked field data measured from a healthy volunteer and magnetoencephalography data from a patient with a vagus nerve stimulator. Significance. The proposed DSSP algorithm is effective for removing overlapped interference in a wide variety of biomagnetic measurements.

  8. Compressed sensing with gradient total variation for low-dose CBCT reconstruction

    NASA Astrophysics Data System (ADS)

    Seo, Chang-Woo; Cha, Bo Kyung; Jeon, Seongchae; Huh, Young; Park, Justin C.; Lee, Byeonghun; Baek, Junghee; Kim, Eunyoung

    2015-06-01

    This paper describes the improvement of convergence speed achieved with a gradient total variation (GTV) term in compressed sensing (CS) for low-dose cone-beam computed tomography (CBCT) reconstruction. We derive a fast algorithm for constrained total variation (TV)-based reconstruction from a minimum number of noisy projections. To achieve this, we combine the GTV with a TV-norm regularization term to promote sparsity in the X-ray attenuation characteristics of the human body. The GTV is derived from the TV, is computationally more efficient, and converges faster to a desired solution. The numerical algorithm is simple and converges relatively quickly. We apply a gradient projection algorithm that seeks a solution iteratively in the direction of the projected gradient while enforcing non-negativity of the solution. In comparison with the Feldkamp, Davis, and Kress (FDK) and conventional TV algorithms, the proposed GTV algorithm converged in ≤18 iterations, whereas the original TV algorithm needed at least 34 iterations to reconstruct the chest phantom images when the number of projections was reduced by 50% relative to the FDK algorithm. Future investigation includes improving imaging quality, particularly regarding X-ray cone-beam scatter and motion artifacts in CBCT reconstruction.

  9. High Resolution Image Reconstruction from Projection of Low Resolution Images DIffering in Subpixel Shifts

    NASA Technical Reports Server (NTRS)

    Mareboyana, Manohar; Le Moigne-Stewart, Jacqueline; Bennett, Jerome

    2016-01-01

    In this paper, we demonstrate a simple algorithm that projects low resolution (LR) images differing in subpixel shifts onto a high resolution (HR), also called super resolution (SR), grid. The algorithm is very effective in accuracy as well as time efficiency. A number of spatial interpolation techniques used in the projection, such as nearest neighbor, inverse-distance weighted averages, and Radial Basis Functions (RBF), yield comparable results. For best accuracy, reconstructing an SR image at twice the resolution requires four LR images differing in four independent subpixel shifts. The algorithm has two steps: (i) registration of the low resolution images, and (ii) shifting the low resolution images to align with the reference image and projecting them onto the high resolution grid, based on the shifts of each low resolution image, using different interpolation techniques. Experiments were conducted by simulating low resolution images through subpixel shifts and subsampling of an original high resolution image, and then reconstructing the high resolution image from the simulated low resolution images. Reconstruction accuracy was compared using the mean squared error between the original high resolution image and the reconstructed image. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP), Maximum Likelihood (ML), and Maximum a Posteriori (MAP) algorithms. The algorithm is robust and is not overly sensitive to registration inaccuracies.

  10. Mechanistic aspects of the photodynamic inactivation of Candida albicans induced by cationic porphyrin derivatives.

    PubMed

    Quiroga, Ezequiel D; Cormick, M Paula; Pons, Patricia; Alvarez, M Gabriela; Durantini, Edgardo N

    2012-12-01

    Photodynamic inactivation of Candida albicans produced by 5-(4-trifluorophenyl)-10,15,20-tris(4-N,N,N-trimethylammoniumphenyl)porphyrin (TFAP(3+)), 5,10,15,20-tetrakis(4-N,N,N-trimethylammoniumphenyl)porphyrin (TMAP(4+)) and 5,10,15,20-tetrakis(4-N-methylpyridyl)porphyrin (TMPyP(4+)) was investigated to obtain insight into the mechanism of cellular damage. In solution, absorption spectroscopic studies showed that these cationic porphyrins interact strongly with calf thymus DNA. The electrophoretic analysis indicated that photocleavage of DNA induced by TFAP(3+) took place only after long irradiation periods (>5 h). In contrast, TMAP(4+) produced a marked reduction in the DNA band after 1 h of irradiation. In C. albicans, these cationic porphyrins produced a ∼3.5 log decrease in survival when the cell suspensions (10(7) cells/mL) were incubated with 5 μM photosensitizer and irradiated for 30 min with visible light (fluence 162 J/cm(2)). After this treatment, modifications of genomic DNA isolated from C. albicans cells were not found by electrophoresis. Furthermore, transmission electron microscopy showed structural changes, with the appearance of low-density areas within the cells and irregularities in the cell barriers. However, the photodamage to the cell envelope was insufficient to cause the release of intracellular biopolymers. Therefore, modifications of cytoplasmic biomolecules and alterations in the cell barriers could be mainly involved in C. albicans photoinactivation. Copyright © 2012 Elsevier Masson SAS. All rights reserved.

  11. Volumetric visualization algorithm development for an FPGA-based custom computing machine

    NASA Astrophysics Data System (ADS)

    Sallinen, Sami J.; Alakuijala, Jyrki; Helminen, Hannu; Laitinen, Joakim

    1998-05-01

    Rendering volumetric medical images is a burdensome computational task for contemporary computers due to the large size of the data sets. Custom designed reconfigurable hardware could considerably speed up volume visualization if an algorithm suitable for the platform is used. We present an algorithm and speedup techniques for visualizing volumetric medical CT and MR images with a custom-computing machine based on a Field Programmable Gate Array (FPGA). We also present simulated performance results of the proposed algorithm calculated with a software implementation running on a desktop PC. Our algorithm is capable of generating perspective projection renderings of single and multiple isosurfaces with transparency, simulated X-ray images, and Maximum Intensity Projections (MIP). Although more speedup techniques exist for parallel projection than for perspective projection, we have constrained ourselves to perspective viewing, because of its importance in the field of radiotherapy. The algorithm we have developed is based on ray casting, and the rendering is sped up by three different methods: shading speedup by gradient precalculation, a new generalized version of Ray-Acceleration by Distance Coding (RADC), and background ray elimination by speculative ray selection.

  12. Imaging reconstruction based on improved wavelet denoising combined with parallel-beam filtered back-projection algorithm

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2012-11-01

    Image reconstruction is a key step in medical imaging (MI), and the performance of the reconstruction algorithm determines the quality and resolution of the reconstructed image. Although other algorithms have been used, the filtered back-projection (FBP) algorithm is still the classical and most commonly used algorithm in clinical MI. In the FBP algorithm, filtering of the original projection data is a key step for overcoming artifacts in the reconstructed image. Since simple use of classical filters such as the Shepp-Logan (SL) and Ram-Lak (RL) filters has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noise, an improved wavelet denoising combined with the parallel-beam FBP algorithm is used to enhance the quality of the reconstructed image in this paper. In the experiments, the reconstruction results of the improved wavelet denoising were compared with those of other methods (direct FBP, mean filter combined with FBP, and median filter combined with FBP). To determine the optimum reconstruction, different algorithms and different wavelet bases combined with three filters were each tested. Experimental results show that the reconstruction quality of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms using two evaluation standards, mean-square error (MSE) and peak signal-to-noise ratio (PSNR), it was found that the reconstruction of the improved FBP based on the db2 wavelet and the Hanning filter at decomposition scale 2 was best: its MSE was lower and its PSNR higher than the others. Therefore, this improved FBP algorithm has potential value in medical imaging.

  13. A low-complexity 2-point step size gradient projection method with selective function evaluations for smoothed total variation based CBCT reconstructions

    NASA Astrophysics Data System (ADS)

    Song, Bongyong; Park, Justin C.; Song, William Y.

    2014-11-01

    The Barzilai-Borwein (BB) 2-point step size gradient method is receiving attention for accelerating Total Variation (TV) based CBCT reconstructions. In order to become truly viable for clinical applications, however, its convergence property needs to be properly addressed. We propose a novel fast converging gradient projection BB method that requires ‘at most one function evaluation’ in each iterative step. This Selective Function Evaluation method, referred to as GPBB-SFE in this paper, exhibits the desired convergence property when it is combined with a ‘smoothed TV’ or any other differentiable prior. This way, the proposed GPBB-SFE algorithm offers fast and guaranteed convergence to the desired 3DCBCT image with minimal computational complexity. We first applied this algorithm to a Shepp-Logan numerical phantom. We then applied to a CatPhan 600 physical phantom (The Phantom Laboratory, Salem, NY) and a clinically-treated head-and-neck patient, both acquired from the TrueBeam™ system (Varian Medical Systems, Palo Alto, CA). Furthermore, we accelerated the reconstruction by implementing the algorithm on NVIDIA GTX 480 GPU card. We first compared GPBB-SFE with three recently proposed BB-based CBCT reconstruction methods available in the literature using Shepp-Logan numerical phantom with 40 projections. It is found that GPBB-SFE shows either faster convergence speed/time or superior convergence property compared to existing BB-based algorithms. With the CatPhan 600 physical phantom, the GPBB-SFE algorithm requires only 3 function evaluations in 30 iterations and reconstructs the standard, 364-projection FDK reconstruction quality image using only 60 projections. We then applied the algorithm to a clinically-treated head-and-neck patient. It was observed that the GPBB-SFE algorithm requires only 18 function evaluations in 30 iterations. Compared with the FDK algorithm with 364 projections, the GPBB-SFE algorithm produces visibly equivalent quality CBCT image for the head-and-neck patient with only 180 projections, in 131.7 s, further supporting its clinical applicability.

  14. A low-complexity 2-point step size gradient projection method with selective function evaluations for smoothed total variation based CBCT reconstructions.

    PubMed

    Song, Bongyong; Park, Justin C; Song, William Y

    2014-11-07

    The Barzilai-Borwein (BB) 2-point step size gradient method is receiving attention for accelerating Total Variation (TV) based CBCT reconstructions. In order to become truly viable for clinical applications, however, its convergence property needs to be properly addressed. We propose a novel fast converging gradient projection BB method that requires 'at most one function evaluation' in each iterative step. This Selective Function Evaluation method, referred to as GPBB-SFE in this paper, exhibits the desired convergence property when it is combined with a 'smoothed TV' or any other differentiable prior. This way, the proposed GPBB-SFE algorithm offers fast and guaranteed convergence to the desired 3DCBCT image with minimal computational complexity. We first applied this algorithm to a Shepp-Logan numerical phantom. We then applied to a CatPhan 600 physical phantom (The Phantom Laboratory, Salem, NY) and a clinically-treated head-and-neck patient, both acquired from the TrueBeam™ system (Varian Medical Systems, Palo Alto, CA). Furthermore, we accelerated the reconstruction by implementing the algorithm on NVIDIA GTX 480 GPU card. We first compared GPBB-SFE with three recently proposed BB-based CBCT reconstruction methods available in the literature using Shepp-Logan numerical phantom with 40 projections. It is found that GPBB-SFE shows either faster convergence speed/time or superior convergence property compared to existing BB-based algorithms. With the CatPhan 600 physical phantom, the GPBB-SFE algorithm requires only 3 function evaluations in 30 iterations and reconstructs the standard, 364-projection FDK reconstruction quality image using only 60 projections. We then applied the algorithm to a clinically-treated head-and-neck patient. It was observed that the GPBB-SFE algorithm requires only 18 function evaluations in 30 iterations. Compared with the FDK algorithm with 364 projections, the GPBB-SFE algorithm produces visibly equivalent quality CBCT image for the head-and-neck patient with only 180 projections, in 131.7 s, further supporting its clinical applicability.
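    The core of the method above is the Barzilai-Borwein step inside a gradient projection loop. The sketch below shows that skeleton only; the smoothed-TV objective, the GPU implementation, and the selective function evaluation safeguard that gives GPBB-SFE its convergence guarantee are omitted, and the names are illustrative.

```python
import numpy as np

def gpbb(grad, project, x0, iters=100, alpha0=1e-2):
    """Gradient projection with the Barzilai-Borwein 2-point step size.

    grad : callable returning the gradient of the (differentiable) objective;
    project : callable enforcing the constraint set, e.g. nonnegativity."""
    x_prev, g_prev = x0, grad(x0)
    x = project(x_prev - alpha0 * g_prev)
    for _ in range(iters):
        g = grad(x)
        s, y = x - x_prev, g - g_prev
        sy = s @ y
        alpha = (s @ s) / sy if sy > 0 else alpha0   # BB1 step size
        x_prev, g_prev = x, g
        x = project(x - alpha * g)
    return x

# Example projector for nonnegative image values:
# x = gpbb(grad_smoothed_tv_objective, lambda z: np.maximum(z, 0.0), x0)
```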

  15. Collaborative filtering recommendation model based on fuzzy clustering algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Ye; Zhang, Yunhua

    2018-05-01

    As one of the most widely used algorithms in recommender systems, the collaborative filtering algorithm faces two serious problems: data sparsity and poor recommendation quality in big data environments. In traditional clustering analysis, each object is strictly assigned to a single class, and the boundary of this division is very clear. However, for most objects in real life, there is no strict definition of the form and attributes of their class. To address these problems, this paper improves the traditional collaborative filtering model through a hybrid optimization of a latent semantic algorithm and a fuzzy clustering algorithm, working in cooperation with the collaborative filtering algorithm. The fuzzy clustering algorithm is applied to item attribute information, so that each item belongs to different item categories with different membership degrees. This increases the density of the data, effectively reduces its sparsity, and mitigates the low accuracy that results from inaccurate similarity calculation. Finally, this paper carries out an empirical analysis on the MovieLens dataset and compares the proposed method with the traditional user-based collaborative filtering algorithm. The proposed algorithm greatly improves recommendation accuracy.
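
    As an illustration of the fuzzy clustering step described above, here is a minimal sketch using standard fuzzy c-means (not the authors' implementation; the toy item-attribute matrix, cluster count, and fuzzifier m are all illustrative assumptions):

      import numpy as np

      def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
          # Soft memberships U (n_items x n_clusters): each item can belong to
          # several categories with different membership degrees.
          rng = np.random.default_rng(seed)
          U = rng.random((X.shape[0], n_clusters))
          U /= U.sum(axis=1, keepdims=True)
          for _ in range(n_iter):
              Um = U ** m
              centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
              d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
              d2 = np.maximum(d2, 1e-12)
              inv = d2 ** (-1.0 / (m - 1.0))
              U = inv / inv.sum(axis=1, keepdims=True)
          return U, centers

      # Toy item-attribute matrix: rows are items, columns are binary genre flags.
      items = np.array([[1, 0, 0], [1, 1, 0], [0, 1, 1], [0, 0, 1]], dtype=float)
      U, _ = fuzzy_c_means(items, n_clusters=2)
      print(U)  # rows sum to 1; boundary items receive split memberships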

  16. An efficient variable projection formulation for separable nonlinear least squares problems.

    PubMed

    Gan, Min; Li, Han-Xiong

    2014-05-01

    We consider in this paper a class of nonlinear least squares problems in which the model can be represented as a linear combination of nonlinear functions. The variable projection algorithm projects the linear parameters out of the problem, leaving a nonlinear least squares problem involving only the nonlinear parameters. To implement the variable projection algorithm more efficiently, we propose a new variable projection functional based on matrix decomposition. The advantage of the proposed formulation is that the size of the decomposed matrix may be much smaller than in previous formulations. The Levenberg-Marquardt algorithm with a finite difference approximation of the Jacobian is then applied to minimize the new criterion. Numerical results show that the proposed approach achieves a significant reduction in computing time.
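
    The elimination of the linear parameters can be sketched in a few lines. This follows the classical Golub-Pereyra variable projection idea with an off-the-shelf solver, not the paper's decomposition-based functional; the exponential model and starting values are illustrative:

      import numpy as np
      from scipy.optimize import least_squares

      # Separable model y(t) = c1*exp(-a1*t) + c2*exp(-a2*t): for fixed decay
      # rates a, the optimal linear coefficients c solve a *linear* least
      # squares problem, so the outer solver searches over a only.
      rng = np.random.default_rng(1)
      t = np.linspace(0.0, 4.0, 200)
      y = 2.0 * np.exp(-1.0 * t) + 0.5 * np.exp(-3.0 * t)
      y += 0.01 * rng.standard_normal(t.size)

      def residual(alpha):
          Phi = np.exp(-np.outer(t, alpha))            # one basis column per rate
          c, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # linear params projected out
          return Phi @ c - y

      sol = least_squares(residual, x0=[0.5, 5.0], method="lm")  # LM with finite differences
      print(np.sort(sol.x))  # close to the true rates (1.0, 3.0)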

  17. Iterative projection algorithms for ab initio phasing in virus crystallography.

    PubMed

    Lo, Victor L; Kingston, Richard L; Millane, Rick P

    2016-12-01

    Iterative projection algorithms are proposed as a tool for ab initio phasing in virus crystallography. The good global convergence properties of these algorithms, coupled with the spherical shape and high structural redundancy of icosahedral viruses, allow high resolution phases to be determined with no initial phase information. This approach is demonstrated by determining the electron density of a virus crystal with 5-fold non-crystallographic symmetry, starting with only a spherical shell envelope. The electron density obtained is sufficiently accurate for model building. The results indicate that iterative projection algorithms should be routinely applicable in virus crystallography, without the need for ancillary phase information. Copyright © 2016 Elsevier Inc. All rights reserved.
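
    A minimal sketch of the alternating-projection idea on a toy 2-D problem (plain error reduction with a disc support standing in for the spherical envelope; the actual work likely uses more sophisticated variants, and error reduction alone can stagnate):

      import numpy as np

      rng = np.random.default_rng(0)
      n = 64
      yy, xx = np.mgrid[:n, :n]
      support = (xx - n / 2) ** 2 + (yy - n / 2) ** 2 < (n / 4) ** 2

      truth = rng.random((n, n)) * support        # unknown toy density
      magnitudes = np.abs(np.fft.fft2(truth))     # the only "measured" data

      x = rng.random((n, n)) * support            # random starting guess
      for _ in range(500):
          F = np.fft.fft2(x)
          F = magnitudes * np.exp(1j * np.angle(F))    # Fourier-magnitude projection
          x = np.real(np.fft.ifft2(F))
          x = np.where(support & (x > 0), x, 0.0)      # support + positivity projection

      # Fourier-magnitude misfit shrinks as the two projections come into agreement.
      err = np.abs(np.fft.fft2(x)) - magnitudes
      print(np.linalg.norm(err) / np.linalg.norm(magnitudes))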

  18. Handling of Atomic Data

    NASA Astrophysics Data System (ADS)

    Rauch, T.; Deetjen, J. L.

    2003-01-01

    State-of-the-art NLTE model atmosphere codes have arrived at a high level of "numerical" sophistication and are an adequate tool to analyze the available high-quality spectra from the infrared to the X-ray wavelength range. Present computational capacities allow calculations that include all elements from hydrogen up to the iron group, and the lack of reliable atomic data has become a crucial problem for further progress. We summarize briefly the available sources of atomic data and how these are implemented in the Tübingen Model Atmosphere Package (TMAP).

  19. A simplified model for tritium permeation transient predictions when trapping is active

    NASA Astrophysics Data System (ADS)

    Longhurst, G. R.

    1994-09-01

    This report describes a simplified one-dimensional tritium permeation and retention model. The model makes use of the same physical mechanisms as more sophisticated, time-transient codes: implantation, recombination, diffusion, trapping, and thermal gradient effects. It takes advantage of a number of simplifications and approximations to solve the steady-state problem and then provides interpolating functions to estimate intermediate states based on the steady-state solution. Comparison calculations with the verified and validated TMAP4 transient code show good agreement.
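
    The kind of transient such codes compute can be sketched with a diffusion-only explicit finite-difference model (no trapping or recombination; all parameters are illustrative, not taken from the report):

      import numpy as np

      D, d, c0 = 1e-9, 1e-3, 1.0       # diffusivity (m^2/s), thickness (m), upstream conc.
      nx = 101
      dx = d / (nx - 1)
      dt = 0.4 * dx ** 2 / D           # explicit-scheme stability limit
      c = np.zeros(nx)
      c[0] = c0                        # fixed upstream concentration, zero downstream

      for _ in range(20000):
          c[1:-1] += D * dt / dx ** 2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])

      flux = D * (c[-2] - c[-1]) / dx  # downstream permeation flux
      print(flux, D * c0 / d)          # transient approaches the steady-state value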

  20. Region of Interest Imaging for a General Trajectory with the Rebinned BPF Algorithm

    PubMed Central

    Bian, Junguo; Xia, Dan; Sidky, Emil Y; Pan, Xiaochuan

    2010-01-01

    The back-projection-filtration (BPF) algorithm has been applied to image reconstruction for cone-beam configurations with general source trajectories. The BPF algorithm can reconstruct 3-D region-of-interest (ROI) images from data containing truncations. However, like many other existing algorithms for cone-beam configurations, the BPF algorithm involves a back-projection with a spatially varying weighting factor, which can result in non-uniform noise levels in reconstructed images and increased computation time. In this work, we propose a BPF algorithm that eliminates the spatially varying weighting factor by using a rebinned geometry for a general scanning trajectory. The proposed BPF algorithm has improved noise properties, while retaining the advantages of the original BPF algorithm, such as its minimal data requirements. PMID:20617122

  1. Region of Interest Imaging for a General Trajectory with the Rebinned BPF Algorithm.

    PubMed

    Bian, Junguo; Xia, Dan; Sidky, Emil Y; Pan, Xiaochuan

    2010-02-01

    The back-projection-filtration (BPF) algorithm has been applied to image reconstruction for cone-beam configurations with general source trajectories. The BPF algorithm can reconstruct 3-D region-of-interest (ROI) images from data containing truncations. However, like many other existing algorithms for cone-beam configurations, the BPF algorithm involves a back-projection with a spatially varying weighting factor, which can result in non-uniform noise levels in reconstructed images and increased computation time. In this work, we propose a BPF algorithm that eliminates the spatially varying weighting factor by using a rebinned geometry for a general scanning trajectory. The proposed BPF algorithm has improved noise properties, while retaining the advantages of the original BPF algorithm, such as its minimal data requirements.

  2. Affine Projection Algorithm with Improved Data-Selective Method Using the Condition Number

    NASA Astrophysics Data System (ADS)

    Ban, Sung Jun; Lee, Chang Woo; Kim, Sang Woo

    Recently, a data-selective method has been proposed to achieve low misalignment in the affine projection algorithm (APA) by keeping the condition number of the input data matrix small. We present an improved method, and a complexity reduction algorithm, for the APA with the data-selective method. Experimental results show that the proposed algorithm achieves lower misalignment and a lower condition number of the input data matrix than both the conventional APA and the APA with the previous data-selective method.
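
    A sketch of a basic APA update with a naive condition-number gate in the spirit of the data-selective method (the threshold, step size, and regularization below are illustrative assumptions, not the paper's exact selection rule):

      import numpy as np

      def apa_update(w, X, d, mu=0.5, delta=1e-3, cond_max=1e3):
          # X: (L, P) matrix whose columns are the P most recent input vectors;
          # d: length-P desired outputs. Skip the update when the input data
          # matrix is ill-conditioned (the data-selective idea, simplified).
          if np.linalg.cond(X) > cond_max:
              return w
          e = d - X.T @ w                            # a-priori error vector
          G = X.T @ X + delta * np.eye(X.shape[1])   # regularized Gram matrix
          return w + mu * X @ np.linalg.solve(G, e)

      rng = np.random.default_rng(0)
      h = np.array([0.8, -0.4, 0.2, 0.1])            # unknown system to identify
      u = rng.standard_normal(2000)
      L, P = 4, 3
      w = np.zeros(L)
      for k in range(L + P, u.size):
          X = np.column_stack([u[k - j - L:k - j][::-1] for j in range(P)])
          d = X.T @ h                                # noiseless desired signal
          w = apa_update(w, X, d)
      print(w)                                       # converges toward h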

  3. GPU-based Branchless Distance-Driven Projection and Backprojection

    PubMed Central

    Liu, Rui; Fu, Lin; De Man, Bruno; Yu, Hengyong

    2017-01-01

    Projection and backprojection operations are essential in a variety of image reconstruction and physical correction algorithms in CT. The distance-driven (DD) projection and backprojection are widely used for their highly sequential memory access pattern and low arithmetic cost. However, a typical DD implementation has an inner loop that adjusts the calculation depending on the relative position between voxel and detector cell boundaries. The irregularity of the branch behavior makes it inefficient to implement on massively parallel computing devices such as graphics processing units (GPUs). Such irregular branch behaviors can be eliminated by factorizing the DD operation into three branchless steps: integration, linear interpolation, and differentiation, all of which are highly amenable to massive vectorization. In this paper, we implement and evaluate a highly parallel branchless DD algorithm for 3D cone beam CT. The algorithm utilizes the texture memory and hardware interpolation on GPUs to achieve fast computational speed. The developed branchless DD algorithm achieved a 137-fold speedup for forward projection and a 188-fold speedup for backprojection relative to a single-thread CPU implementation. Compared with a state-of-the-art 32-thread CPU implementation, the proposed branchless DD achieved an 8-fold acceleration for forward projection and a 10-fold acceleration for backprojection. The GPU-based branchless DD method was evaluated using iterative reconstruction algorithms with both simulated and real datasets. It produced images visually identical to those of the CPU reference algorithm. PMID:29333480

  4. GPU-based Branchless Distance-Driven Projection and Backprojection.

    PubMed

    Liu, Rui; Fu, Lin; De Man, Bruno; Yu, Hengyong

    2017-12-01

    Projection and backprojection operations are essential in a variety of image reconstruction and physical correction algorithms in CT. The distance-driven (DD) projection and backprojection are widely used for their highly sequential memory access pattern and low arithmetic cost. However, a typical DD implementation has an inner loop that adjusts the calculation depending on the relative position between voxel and detector cell boundaries. The irregularity of the branch behavior makes it inefficient to implement on massively parallel computing devices such as graphics processing units (GPUs). Such irregular branch behaviors can be eliminated by factorizing the DD operation into three branchless steps: integration, linear interpolation, and differentiation, all of which are highly amenable to massive vectorization. In this paper, we implement and evaluate a highly parallel branchless DD algorithm for 3D cone beam CT. The algorithm utilizes the texture memory and hardware interpolation on GPUs to achieve fast computational speed. The developed branchless DD algorithm achieved a 137-fold speedup for forward projection and a 188-fold speedup for backprojection relative to a single-thread CPU implementation. Compared with a state-of-the-art 32-thread CPU implementation, the proposed branchless DD achieved an 8-fold acceleration for forward projection and a 10-fold acceleration for backprojection. The GPU-based branchless DD method was evaluated using iterative reconstruction algorithms with both simulated and real datasets. It produced images visually identical to those of the CPU reference algorithm.
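
    The three-step factorization is easy to illustrate in one dimension (a sketch of the integrate/interpolate/differentiate pattern, not the production implementation; the profile and cell boundaries are toy values):

      import numpy as np

      src = np.array([1.0, 3.0, 2.0, 5.0, 4.0])     # values on unit-width source cells
      src_edges = np.arange(src.size + 1, dtype=float)
      dst_edges = np.linspace(0.0, 5.0, 4)          # three wider destination cells

      integral = np.concatenate([[0.0], np.cumsum(src)])  # step 1: integrate
      I = np.interp(dst_edges, src_edges, integral)       # step 2: interpolate at boundaries
      dst = np.diff(I) / np.diff(dst_edges)               # step 3: differentiate

      print(dst)   # area-weighted averages of src over each destination cell

    Because every step is a plain vector operation, no per-boundary branching is needed, which is what makes the factorization GPU-friendly.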

  5. GPU-based fast cone beam CT reconstruction from undersampled and noisy projection data via total variation.

    PubMed

    Jia, Xun; Lou, Yifei; Li, Ruijiang; Song, William Y; Jiang, Steve B

    2010-04-01

    Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated to be about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables the CBCT to be reconstructed under a scanning protocol with as little as 0.1 mA s per projection. Compared with the currently widely used full-fan head and neck scanning protocol of approximately 360 projections at 0.4 mA s per projection, an overall 36-72 times dose reduction is estimated for this fast CBCT reconstruction algorithm. This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering imaging dose considerably. The high computational efficiency of this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.
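
    The forward-backward splitting pattern can be sketched on a small synthetic problem. Here an l1 regularizer, whose proximal step has a closed form, stands in for the TV term used in the paper; matrix sizes and the regularization weight are illustrative:

      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.standard_normal((80, 200))
      x_true = np.zeros(200)
      x_true[rng.choice(200, size=10, replace=False)] = 1.0
      b = A @ x_true

      lam = 0.1
      step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L for the fidelity gradient
      x = np.zeros(200)
      for _ in range(500):
          z = x - step * (A.T @ (A @ x - b))                        # forward (gradient) step
          x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # backward (prox) step
      print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))   # small recovery error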

  6. A blind transform based approach for the detection of isolated astrophysical pulses

    NASA Astrophysics Data System (ADS)

    Alkhweldi, Marwan; Schmid, Natalia A.; Prestage, Richard M.

    2017-06-01

    This paper presents a blind algorithm for the automatic detection of isolated astrophysical pulses. The detection algorithm is applied to spectrograms (also known as "filter bank data" or "the (t,f) plane"). The detection algorithm comprises a sequence of three steps: (1) a Radon transform is applied to the spectrogram; (2) a Fourier transform is applied to each projection parametrized by an angle, and the total power in each projection is calculated; and (3) the total power of all projections above 90° is compared with the total power of all projections below 90°, and a decision is made in favor of an astrophysical pulse being present or absent. Once a pulse is detected, its Dispersion Measure (DM) is estimated by fitting an analytically developed expression for a transformed spectrogram containing a pulse, with varying values of DM, to the actual data. The performance of the proposed algorithm is analyzed numerically.

  7. Algorithm of choosing type of mechanical assembly production of instrument making enterprises of Industry 4.0

    NASA Astrophysics Data System (ADS)

    Zakoldaev, D. A.; Shukalov, A. V.; Zharinov, I. O.; Zharinov, O. O.

    2018-05-01

    The task of choosing the type of mechanical assembly production for instrument making enterprises of Industry 4.0 is studied, and two project algorithms, for Industry 3.0 and for Industry 4.0, are compared. The algorithm for choosing the type of mechanical assembly production for an Industry 4.0 instrument making enterprise is based on analysis of the technological route of the manufacturing process in a company equipped with cyber-physical systems. The algorithm may yield project solutions selected from either the primary or the auxiliary part of production. Its decision rules are based on an optimality criterion.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreiner, S.; Paschal, C.B.; Galloway, R.L.

    Four methods of producing maximum intensity projection (MIP) images were studied and compared. Three of the projection methods differ in the interpolation kernel used for ray tracing. The interpolation kernels include nearest neighbor interpolation, linear interpolation, and cubic convolution interpolation. The fourth projection method is a voxel projection method that is not explicitly a ray-tracing technique. The four algorithms' performance was evaluated using a computer-generated model of a vessel and using real MR angiography data. The evaluation centered around how well an algorithm transferred an object's width to the projection plane. The voxel projection algorithm does not suffer from artifacts associated with the nearest neighbor algorithm. Also, a speed-up in the calculation of the projection is seen with the voxel projection method. Linear interpolation dramatically improves the transfer of width information from the 3D MRA data set over both nearest neighbor and voxel projection methods. Even though the cubic convolution interpolation kernel is theoretically superior to the linear kernel, it did not project widths more accurately than linear interpolation. A possible advantage to the nearest neighbor interpolation is that the size of small vessels tends to be exaggerated in the projection plane, thereby increasing their visibility. The results confirm that the way in which an MIP image is constructed has a dramatic effect on information contained in the projection. The construction method must be chosen with the knowledge that the clinical information in the 2D projections in general will be different from that contained in the original 3D data volume.
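
    The difference between the projection methods can be sketched with an axis-aligned voxel-projection MIP and a single oblique ray sampled at two interpolation orders (a toy volume; scipy.ndimage.map_coordinates supplies nearest-neighbor and linear sampling):

      import numpy as np
      from scipy.ndimage import map_coordinates

      vol = np.zeros((32, 32, 32))
      vol[10:14, 16, 16] = 100.0                    # tiny synthetic "vessel"

      mip_axial = vol.max(axis=0)                   # voxel projection along one axis

      # One oblique ray through the vessel, sampled at two interpolation orders:
      t = np.linspace(0.0, 31.0, 200)
      ray = np.stack([t, 0.5 * t + 10.0, 0.5 * t + 10.0])   # (z, y, x) along the ray
      for order, name in [(0, "nearest"), (1, "linear")]:
          samples = map_coordinates(vol, ray, order=order)
          print(name, samples.max())   # nearest tends to exaggerate, linear to smooth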

  9. Clinical application and validation of an iterative forward projection matching algorithm for permanent brachytherapy seed localization from conebeam-CT x-ray projections.

    PubMed

    Pokhrel, Damodar; Murphy, Martin J; Todor, Dorin A; Weiss, Elisabeth; Williamson, Jeffrey F

    2010-09-01

    To experimentally validate a new algorithm for reconstructing the 3D positions of implanted brachytherapy seeds from postoperatively acquired 2D conebeam-CT (CBCT) projection images. The iterative forward projection matching (IFPM) algorithm finds the 3D seed geometry that minimizes the sum of the squared intensity differences between computed projections of an initial estimate of the seed configuration and radiographic projections of the implant. In-house machined phantoms, containing arrays of 12 and 72 seeds, respectively, are used to validate this method. Also, four 103Pd postimplant patients are scanned using an ACUITY digital simulator. Three to ten x-ray images are selected from the CBCT projection set and processed to create binary seed-only images. To quantify IFPM accuracy, the reconstructed seed positions are forward projected and overlaid on the measured seed images to find the nearest-neighbor distance between measured and computed seed positions for each image pair. Also, the estimated 3D seed coordinates are compared to known seed positions in the phantom and clinically obtained VariSeed planning coordinates for the patient data. For the phantom study, seed localization error is (0.58 ± 0.33) mm. For all four patient cases, the mean registration error is better than 1 mm when compared against the measured seed projections. IFPM converges in 20-28 iterations, with a computation time of about 1.9-2.8 min/iteration on a 1 GHz processor. The IFPM algorithm avoids the need to match corresponding seeds in each projection as required by standard back-projection methods. The authors' results demonstrate approximately 1 mm accuracy in reconstructing the 3D positions of brachytherapy seeds from the measured 2D projections. This algorithm also successfully localizes overlapping clustered and highly migrated seeds in the implant.
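
    The cost evaluated inside such a forward-projection-matching loop can be sketched with an idealized point-source geometry (the SAD/SDD values, angles, and seed coordinates are illustrative; the full IFPM method additionally perturbs the 3D configuration to minimize this cost):

      import numpy as np
      from scipy.spatial import cKDTree

      SAD, SDD = 1000.0, 1500.0          # source-axis / source-detector distances (mm)

      def project(seeds, angle):
          # Rotate into the view frame, then apply an idealized point-source projection.
          ca, sa = np.cos(angle), np.sin(angle)
          x = ca * seeds[:, 0] + sa * seeds[:, 1]
          y = -sa * seeds[:, 0] + ca * seeds[:, 1]
          mag = SDD / (SAD - y)          # per-seed magnification
          return np.column_stack([x * mag, seeds[:, 2] * mag])

      def cost(candidate, measured, angles):
          # Mean nearest-neighbor distance between computed and detected seeds.
          return sum(cKDTree(m).query(project(candidate, a))[0].mean()
                     for m, a in zip(measured, angles))

      truth = np.random.default_rng(0).uniform(-20.0, 20.0, (12, 3))
      angles = np.deg2rad([0.0, 40.0, 80.0])
      measured = [project(truth, a) for a in angles]
      print(cost(truth + 0.5, measured, angles))   # small but nonzero residual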

  10. Clinical application and validation of an iterative forward projection matching algorithm for permanent brachytherapy seed localization from conebeam-CT x-ray projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pokhrel, Damodar; Murphy, Martin J.; Todor, Dorin A.

    2010-09-15

    Purpose: To experimentally validate a new algorithm for reconstructing the 3D positions of implanted brachytherapy seeds from postoperatively acquired 2D conebeam-CT (CBCT) projection images. Methods: The iterative forward projection matching (IFPM) algorithm finds the 3D seed geometry that minimizes the sum of the squared intensity differences between computed projections of an initial estimate of the seed configuration and radiographic projections of the implant. In-house machined phantoms, containing arrays of 12 and 72 seeds, respectively, are used to validate this method. Also, four 103Pd postimplant patients are scanned using an ACUITY digital simulator. Three to ten x-ray images are selected from the CBCT projection set and processed to create binary seed-only images. To quantify IFPM accuracy, the reconstructed seed positions are forward projected and overlaid on the measured seed images to find the nearest-neighbor distance between measured and computed seed positions for each image pair. Also, the estimated 3D seed coordinates are compared to known seed positions in the phantom and clinically obtained VariSeed planning coordinates for the patient data. Results: For the phantom study, seed localization error is (0.58 ± 0.33) mm. For all four patient cases, the mean registration error is better than 1 mm when compared against the measured seed projections. IFPM converges in 20-28 iterations, with a computation time of about 1.9-2.8 min/iteration on a 1 GHz processor. Conclusions: The IFPM algorithm avoids the need to match corresponding seeds in each projection as required by standard back-projection methods. The authors' results demonstrate approximately 1 mm accuracy in reconstructing the 3D positions of brachytherapy seeds from the measured 2D projections. This algorithm also successfully localizes overlapping clustered and highly migrated seeds in the implant.

  11. Event-driven management algorithm of an Engineering documents circulation system

    NASA Astrophysics Data System (ADS)

    Kuzenkov, V.; Zebzeev, A.; Gromakov, E.

    2015-04-01

    A development methodology for an engineering document circulation system in a design company is reviewed. Discrete event-driven automaton models that use algorithmic descriptions of project management are proposed, and the use of Petri nets for the dynamic design of projects is suggested.

  12. Rapid, Standardized Method for Determination of Mycobacterium tuberculosis Drug Susceptibility by Use of Mycolic Acid Analysis▿

    PubMed Central

    Parrish, Nicole; Osterhout, Gerard; Dionne, Kim; Sweeney, Amy; Kwiatkowski, Nicole; Carroll, Karen; Jost, Kenneth C.; Dick, James

    2007-01-01

    Multidrug-resistant (MDR) Mycobacterium tuberculosis and extensively drug-resistant (XDR) M. tuberculosis are emerging public health threats, whose threat is compounded by the fact that current techniques for testing the susceptibility of M. tuberculosis require several days to weeks to complete. We investigated the use of high-performance liquid chromatography (HPLC)-based quantitation of mycolic acids as a means of rapidly determining drug resistance and susceptibility in M. tuberculosis. Standard susceptibility testing and determination of the MICs of drug-susceptible (n = 26) and drug-resistant M. tuberculosis strains, including MDR M. tuberculosis strains (n = 34), were performed by using the Bactec radiometric growth system as the reference method. The HPLC-based susceptibilities to the current first-line drugs, isoniazid (INH), rifampin (RIF), ethambutol (EMB), and pyrazinamide (PZA), were determined. The vials were incubated for 72 h, and aliquots were removed for HPLC analysis by using the Sherlock mycobacterial identification system. HPLC quantitation of total mycolic acid peaks (TMAPs) was performed with treated and untreated cultures. At 72 h, the levels of agreement of the HPLC method with the reference method were 99.5% for INH, EMB, and PZA and 98.7% for RIF. The inter- and intra-assay reproducibilities varied by drug, with an average precision of 13.4%. In summary, quantitation of TMAPs is a rapid, sensitive, and accurate method for antibiotic susceptibility testing of all first-line drugs currently used against M. tuberculosis and offers the potential of providing susceptibility testing results within hours, rather than days or weeks, for clinical M. tuberculosis isolates. PMID:17913928

  13. [Significance and mechanism of MSCT perfusion scan on differentiation of NSCLC].

    PubMed

    Liu, Jin-Kang; Hu, Cheng-Ping; Zhou, Mo-Ling; Zhou, Hui; Xiong, Zeng; Xia, Yu; Chen, Wei

    2009-06-01

    To determine the significance of MSCT perfusion scanning for the differentiation of NSCLC and to investigate its possible mechanisms. Forty-four NSCLC patients underwent CT perfusion scanning by MSCT. Among them, 22 cases were selected to detect the two-dimensional tumor microvascular architecture phenotype (2D-TMAP). The relationships between CT perfusion parameters (BF, BV, PEI, TIP) and the differentiation of NSCLC were analysed using correlation analysis and trend tests. Spearman correlation analysis was used to study the relationships between CT perfusion parameters, differentiation, and 2D-TMAP. The total BF, BV and PEI decreased with decreasing differentiation of NSCLC (P<0.05). The total PEI showed a positive correlation with the total MVD (P<0.05). There were negative correlations between the surrounding-area BF, the total BF, BV, and PEI, the incomplete-lumen MVD of the surrounding area, and expression of PCNA, respectively (P<0.05). There were positive correlations between the degree of differentiation and the incomplete-lumen MVD of the surrounding area (P<0.05), and likewise between the degree of differentiation and the expression of PCNA and VEGF. There were positive correlations between the incomplete-lumen MVD of the surrounding area and the expression of VEGF, ephrinB2, EphB4, and PCNA, respectively (P<0.05). Perfusion parameters reflect differences in the density of vessels with mature functional lumens. Careful evaluation of the differences in blood flow patterns in pulmonary space-occupying lesions by MSCT perfusion scanning can be used to identify the degree of NSCLC differentiation.

  14. A simple HPLC method for simultaneous determination of lopinavir, ritonavir and efavirenz.

    PubMed

    Usami, Yoshiko; Oki, Tsuyoshi; Nakai, Masahiko; Sagisaka, Masafumi; Kaneda, Tsuguhiro

    2003-06-01

    We developed a simple HPLC method for the simultaneous determination of lopinavir (LPV), ritonavir (RTV) and efavirenz (EFV) to evaluate the efficiency of co-administration of LPV/RTV and EFV in Japanese patients enrolled in a clinical study. Monitoring the LPV plasma concentration is important because co-administration of LPV/RTV with EFV sometimes decreases the plasma concentration of LPV, owing to EFV activation of cytochrome P-450 3A. A solution of acetonitrile, methanol and tetramethylammonium perchlorate (TMAP) in dilute aqueous trifluoroacetic acid (TFA) has been used as the mobile phase in an HPLC method to elute LPV and RTV. We found that a solvent ratio of 45:5:50 (v/v/v) of acetonitrile/methanol/0.02 M TMAP in 0.2% TFA optimized the separation of LPV, RTV and EFV. A column temperature of 30 °C was necessary for the reproducibility of the analyses. Standard curves were linear in the range 0.060 to 24.06 µg/ml for LPV, 0.010 to 4.16 µg/ml for RTV, and 0.047 to 37.44 µg/ml for EFV. Coefficients of variation (CVs) of LPV, RTV and EFV in intraday and interday assays ranged from 1.5 to 4.0%, 2.5 to 16.8% and 1.0 to 7.7%, respectively. Accuracies ranged from 100 to 110%, 101 to 116% and 99 to 106% for LPV, RTV and EFV, respectively. The extraction recoveries were 77-87%, 77-83% and 81-91% for LPV, RTV and EFV, respectively.

  15. Statistical method to compare massive parallel sequencing pipelines.

    PubMed

    Elsensohn, M H; Leblay, N; Dimassi, S; Campan-Fournier, A; Labalme, A; Roucher-Boulez, F; Sanlaville, D; Lesca, G; Bardel, C; Roy, P

    2017-03-01

    Today, sequencing is frequently carried out by Massive Parallel Sequencing (MPS), which drastically cuts sequencing time and expense. Nevertheless, Sanger sequencing remains the main validation method to confirm the presence of variants. The analysis of MPS data involves the development of several bioinformatic tools, academic or commercial. We present here a statistical method to compare MPS pipelines and test it in a comparison between an academic pipeline (BWA-GATK) and a commercial pipeline (TMAP-NextGENe®), with and without reference to a gold standard (here, Sanger sequencing), on a panel of 41 genes in 43 epileptic patients. This method used the number of variants to fit log-linear models for pairwise agreements between pipelines. To assess the heterogeneity of the margins and the odds ratios of agreement, four log-linear models were used: a full model, a homogeneous-margin model, a model with a single odds ratio for all patients, and a model with a single intercept. Then a log-linear mixed model was fitted, treating the biological variability as a random effect. Among the 390,339 base pairs sequenced, TMAP-NextGENe® and BWA-GATK found, on average, 2253.49 and 1857.14 variants (single nucleotide variants and indels), respectively. Against the gold standard, the pipelines had similar sensitivities (63.47% vs. 63.42%) and close but significantly different specificities (99.57% vs. 99.65%; p < 0.001). Similar trends were obtained when only single nucleotide variants were considered (99.98% specificity and 76.81% sensitivity for both pipelines). The method thus allows pipeline comparison and selection. It is generalizable to all types of MPS data and all pipelines.

  16. Hydrogen isotope transport across tungsten surfaces exposed to a fusion relevant He ion fluence

    NASA Astrophysics Data System (ADS)

    Baldwin, M. J.; Doerner, R. P.

    2017-07-01

    Tungsten targets are exposed to controlled sequences of D2 then He, and He then D2, plasma in the Pisces-A linear plasma device, with a view to studying the outward and inward transport of D across a He-implanted surface using thermal desorption mass spectrometry. Differences in transport are interpreted from changes in the peak desorption temperature and amplitude for D2 release, compared against those of control targets exposed to D2 plasma only. Desorption data are modeled with Tmap-7 to infer the mechanism by which He leads to the 'reduced inventory' effect for H isotope uptake. A dual-segment (surface, 0-30 nm; bulk) W Tmap-7 model is developed that simulates both plasma exposure and thermal desorption. Good agreement between desorption data and model is found for D2 release from control targets, provided that the implanted flux is reduced, similar to reductions reported by others. For He-affected release, the H isotope transport properties of the surface segment are adjusted away from the control-target bulk values during the computation. Modeling that examines outward D transport through the He-implanted layer suggests that a permeation barrier is active, but bubble-induced porosity is insufficient to fully explain the barrier strength. A moderately increased diffusional migration energy in the model over the He-affected region, however, gives a barrier strength consistent with experiment. The same model, applied to inward transport, predicts the reduced inventory effect, but a further reduction in the implanted D flux is necessary for precise agreement.

  17. Optimization-based image reconstruction from sparse-view data in offset-detector CBCT

    NASA Astrophysics Data System (ADS)

    Bian, Junguo; Wang, Jiong; Han, Xiao; Sidky, Emil Y.; Shao, Lingxiong; Pan, Xiaochuan

    2013-01-01

    The field of view (FOV) of a cone-beam computed tomography (CBCT) unit in a single-photon emission computed tomography (SPECT)/CBCT system can be increased by offsetting the CBCT detector. Analytic-based algorithms have been developed for image reconstruction from data collected at a large number of densely sampled views in offset-detector CBCT. However, the radiation dose involved in a large number of projections can be of health concern to the imaged subject. CBCT imaging dose can be reduced by lowering the number of projections. As analytic-based algorithms are unlikely to reconstruct accurate images from sparse-view data, we investigate and characterize in this work optimization-based algorithms, including an adaptive steepest descent-weighted projection onto convex sets (ASD-WPOCS) algorithm, for image reconstruction from sparse-view data collected in offset-detector CBCT. Using simulated data and real data collected from a physical pelvis phantom and a patient, we verify and characterize properties of the algorithms under study. Results of our study suggest that optimization-based algorithms such as ASD-WPOCS may be developed to yield images of potential utility from a number of projections substantially smaller than those used currently in clinical SPECT/CBCT imaging, thus leading to a dose reduction in CBCT imaging.

  18. Three-Dimensional Weighting in Cone Beam FBP Reconstruction and Its Transformation Over Geometries.

    PubMed

    Tang, Shaojie; Huang, Kuidong; Cheng, Yunyong; Niu, Tianye; Tang, Xiangyang

    2018-06-01

    With the substantially increased number of detector rows in multidetector CT (MDCT), axial scanning with projection data acquired along a circular source trajectory has become the method of choice in an increasing number of clinical applications. Recognizing the practical relevance of image reconstruction directly from the projection data acquired in the native cone beam (CB) geometry, especially in scenarios wherein the highest achievable in-plane resolution is desirable, we present a three-dimensional (3-D) weighted CB-FBP algorithm in that geometry in this paper. We start the algorithm's derivation in the cone-parallel geometry. Via a change of variables, taking the Jacobian into account and making heuristic and empirical assumptions, we arrive at the formulas for 3-D weighted image reconstruction in the native CB geometry. Using projection data simulated by computer and acquired by an MDCT scanner, we evaluate and verify the performance of the proposed algorithm for image reconstruction directly from projection data acquired in the native CB geometry. The preliminary data show that the proposed algorithm performs as well as the 3-D weighted CB-FBP algorithm in the cone-parallel geometry. The proposed algorithm is anticipated to find utility in extensive clinical and preclinical applications wherein the reconstruction of images in the native CB geometry, i.e., the geometry of data acquisition, is of relevance.

  19. Object-Image Correspondence for Algebraic Curves under Projections

    NASA Astrophysics Data System (ADS)

    Burdis, Joseph M.; Kogan, Irina A.; Hong, Hoon

    2013-03-01

    We present a novel algorithm for deciding whether a given planar curve is an image of a given spatial curve, obtained by a central or a parallel projection with unknown parameters. The motivation comes from the problem of establishing a correspondence between an object and an image taken by a camera with unknown position and parameters. A straightforward approach to this problem consists of setting up a system of conditions on the projection parameters and then checking whether or not this system has a solution. The computational advantage of the algorithm presented here, in comparison to algorithms based on the straightforward approach, lies in a significant reduction of the number of real parameters that need to be eliminated in order to establish the existence or non-existence of a projection that maps a given spatial curve to a given planar curve. Our algorithm is based on projection criteria that reduce the projection problem to a certain modification of the equivalence problem of planar curves under affine and projective transformations. To solve the latter problem we make an algebraic adaptation of the signature construction that has been used to solve equivalence problems for smooth curves. We introduce the notion of a classifying set of rational differential invariants and produce explicit formulas for such invariants for the actions of the projective and the affine groups on the plane.

  20. Privacy Preservation in Distributed Subgradient Optimization Algorithms.

    PubMed

    Lou, Youcheng; Yu, Lean; Wang, Shouyang; Yi, Peng

    2017-07-31

    In this paper, some privacy-preserving features for distributed subgradient optimization algorithms are considered. Most of the existing distributed algorithms focus mainly on algorithm design and convergence analysis, but not on the protection of agents' privacy. Privacy is becoming an increasingly important issue in applications involving sensitive information. In this paper, we first show that the distributed subgradient synchronous homogeneous-stepsize algorithm is not privacy preserving, in the sense that a malicious agent can asymptotically discover other agents' subgradients by transmitting untrue estimates to its neighbors. Then a distributed subgradient asynchronous heterogeneous-stepsize projection algorithm is proposed, and its convergence and optimality are established. In contrast to the synchronous homogeneous-stepsize algorithm, in the new algorithm agents make their optimization updates asynchronously with heterogeneous stepsizes. The two introduced mechanisms, the projection operation and asynchronous heterogeneous-stepsize optimization, guarantee that agents' privacy can be effectively protected.
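
    A minimal sketch of the two mechanisms highlighted above, projection and heterogeneous stepsizes, on a toy consensus problem (the ring graph, targets, and stepsize sequences are illustrative, and the updates here are synchronous for brevity, unlike the asynchronous protocol of the paper):

      import numpy as np

      n = 5
      a = np.array([0.0, 1.0, 2.0, 3.0, 10.0])     # private targets, one per agent
      W = (np.eye(n) + np.roll(np.eye(n), 1, axis=0)
           + np.roll(np.eye(n), -1, axis=0)) / 3.0  # doubly stochastic ring graph
      lo, hi = -5.0, 5.0                            # common constraint set [lo, hi]
      x = np.zeros(n)

      for k in range(1, 3000):
          x = W @ x                                 # consensus (mixing) step
          g = np.sign(x - a)                        # subgradient of |x - a_i|
          steps = 1.0 / (k + np.arange(n))          # heterogeneous diminishing stepsizes
          x = np.clip(x - steps * g, lo, hi)        # projection onto the constraint set

      print(x)   # agents approach the median of a (the constrained optimum, 2.0)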

  1. 3-D shape estimation of DNA molecules from stereo cryo-electron micro-graphs using a projection-steerable snake.

    PubMed

    Jacob, Mathews; Blu, Thierry; Vaillant, Cedric; Maddocks, John H; Unser, Michael

    2006-01-01

    We introduce a three-dimensional (3-D) parametric active contour algorithm for the shape estimation of DNA molecules from stereo cryo-electron micrographs. We estimate the shape by matching the projections of a 3-D global shape model with the micrographs; we choose the global model as a 3-D filament with a B-spline skeleton and a specified radial profile. The active contour algorithm iteratively updates the B-spline coefficients, which requires us to evaluate the projections and match them with the micrographs at every iteration. Since the evaluation of the projections of the global model is computationally expensive, we propose a fast algorithm based on locally approximating it by elongated blob-like templates. We introduce the concept of projection-steerability and derive a projection-steerable elongated template. Since the two-dimensional projections of such a blob at any 3-D orientation can be expressed as a linear combination of a few basis functions, matching the projections of such a 3-D template involves evaluating a weighted sum of inner products between the basis functions and the micrographs. The weights are simple functions of the 3-D orientation and the inner-products are evaluated efficiently by separable filtering. We choose an internal energy term that penalizes the average curvature magnitude. Since the exact length of the DNA molecule is known a priori, we introduce a constraint energy term that forces the curve to have this specified length. The sum of these energies along with the image energy derived from the matching process is minimized using the conjugate gradients algorithm. We validate the algorithm using real, as well as simulated, data and show that it performs well.

  2. Arsenic bioaccumulation and biotransformation in deep-sea hydrothermal vent organisms from the PACMANUS hydrothermal field, Manus Basin, PNG

    NASA Astrophysics Data System (ADS)

    Price, Roy E.; Breuer, Christian; Reeves, Eoghan; Bach, Wolfgang; Pichler, Thomas

    2016-11-01

    Hydrothermal vents are often enriched in arsenic, and organisms living in these environments may accumulate high concentrations of this and other trace elements. However, very little research to date has focused on understanding arsenic bioaccumulation and biotransformation in marine organisms at deep-sea vent areas, and none to date has focused on organisms from back-arc spreading centers. We present for the first time concentration and speciation data for As in vent biota from several hydrothermal vent fields in the eastern Manus Basin, a back-arc basin vent field located in the Bismarck Sea, western Pacific Ocean. The gastropods Alviniconcha hessleri and Ifremeria nautilei, and the mussel Bathymodiolus manusensis were collected from diffuse venting areas where pH was slightly lower (6.2-6.8), and temperature (26.8-10.5 °C) and arsenic concentrations (169.5-44.0 nM) were higher than in seawater. In the tissues of these organisms, the highest total measured As concentrations were in the gills of A. hessleri (5580 mg kg-1), with 721 mg kg-1 and 43 mg kg-1 in the digestive gland and muscle, respectively. I. nautilei contained 118 mg kg-1 in the gill, 108 mg kg-1 in the digestive gland and 22 mg kg-1 in the muscle. B. manusensis contained 15.7 mg kg-1 in the digestive gland, followed by 9.8 mg kg-1 and 4.5 mg kg-1 in its gill and muscle tissue, respectively. We interpret the decreasing overall total concentrations in each organism as a function of distance from the source of hydrothermally derived As. The high concentration of arsenic in A. hessleri gills may be associated with elemental sulfur known to occur in this organism as a result of symbiotic microorganisms. Arsenic extracted from freeze-dried A. hessleri tissue was dominated by AsIII and AsV in the digestive gland (82% and 16%, respectively) and gills (97% AsIII, 2.3% AsV), with only 1.8% and 0.2% arsenobetaine (As-Bet) in the digestive gland and gills, respectively. However, the muscle contained substantial amounts of As-Bet (42% As-Bet compared to 48% AsIII and 10% AsV), suggesting As-Bet is a metabolite. Trace arsenosugar (SO4-sug) was observed in the digestive gland and gills only. The other snail, I. nautilei, was also dominated by AsIII and AsV in the digestive gland (82%, 10%) and gills (80%, 10%), with 6-9% As-Bet, but its muscle contained 62% As-Bet and 32% AsIII, with 7% trimethylarsoniopropionate (TMAP). Trace dimethylarsinic acid (DMAV) was observed in its gills, and trace TMAP and arsenocholine (AC) were observed in digestive glands. The mussel B. manusensis was dominated by As-Bet in all three tissue types. The digestive gland and gills contained 22% AsIII, 5-10% AsV, and 20-25% DMAV, along with some TMAP and tetramethylarsonium ion (TETRA). The muscle, however, contained significantly more As-Bet (91.6%), the only other species being AsIII (8.4%). Unfortunately, as is often the case in bioaccumulation and biotransformation studies, extraction efficiencies were low, limiting any rigorous interpretation of arsenic biotransformation patterns. Through a process of elimination, we suggest that arsenosugars may be synthesized by H2S-oxidizing chemotrophic microbial mats, ultimately leading to the synthesis of As-Bet within vent organisms. However, because As-sugs rarely occur in deep-sea vent organisms, As-Bet, as well as TMAP, AC, and TETRA, could also potentially be synthesized directly by the "Edmonds" pathway, the proposed arseno-analog of amino acid formation, without the necessity for arsenosugar formation as an intermediate. Future research should aim for more comprehensive extraction of organoarsenicals.

  3. Detection with Enhanced Energy Windowing Phase I Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bass, David A.; Enders, Alexander L.

    2016-12-01

    This document reviews the progress of Phase I of the Detection with Enhanced Energy Windowing (DEEW) project. The DEEW project is the implementation of software incorporating an algorithm which reviews data generated by radiation portal monitors and utilizes advanced and novel techniques for detecting radiological and fissile material while not alarming on Naturally Occurring Radioactive Material. Independent testing indicated that the Enhanced Energy Windowing algorithm showed promise at reducing the probability of alarm in the stream of commerce compared to existing algorithms and other developmental algorithms, while still maintaining adequate sensitivity to threats. This document contains a brief description of the project, instructions for setting up and running the applications, and guidance to help make reviewing the output files and source code easier.

  4. Model-based sphere localization (MBSL) in x-ray projections

    NASA Astrophysics Data System (ADS)

    Sawall, Stefan; Maier, Joscha; Leinweber, Carsten; Funck, Carsten; Kuntz, Jan; Kachelrieß, Marc

    2017-08-01

    The detection of spherical markers in x-ray projections is an important task in a variety of applications, e.g. geometric calibration and detector distortion correction. Therein, the projection of the sphere center onto the detector is of particular interest, as the spherical beads used are not ideal point-like objects. Only a few methods have been proposed to estimate this position on the detector with sufficient accuracy, and surrogate positions, e.g. the center of gravity, are often used, impairing the results of subsequent algorithms. We propose to estimate the projection of the sphere center on the detector using a simulation-based method that matches an artificial projection to the actual measurement. The proposed algorithm intrinsically corrects for all polychromatic effects included in the measurement and absent in the simulation by a polynomial which is estimated simultaneously. Furthermore, neither the acquisition geometry nor any object properties, besides the fact that the object is of spherical shape, need to be known to find the center of the bead. It is shown by simulations that the algorithm estimates the center projection with an error of less than 1% of the detector pixel size at realistic noise levels and that the method is robust to the sphere material, sphere size, and acquisition parameters. A comparison to three reference methods using simulations and measurements indicates that the proposed method is an order of magnitude more accurate than these algorithms. The proposed method is thus an accurate algorithm for estimating the center of spherical markers in CT projections in the presence of polychromatic effects and noise.

  5. Interior tomography in microscopic CT with image reconstruction constrained by full field of view scan at low spatial resolution

    NASA Astrophysics Data System (ADS)

    Luo, Shouhua; Shen, Tao; Sun, Yi; Li, Jing; Li, Guang; Tang, Xiangyang

    2018-04-01

    In high resolution (microscopic) CT applications, the scan field of view should cover the entire specimen or sample to allow complete data acquisition and image reconstruction. However, truncation may occur in the projection data, resulting in artifacts in reconstructed images. In this study, we propose a low resolution image constrained reconstruction algorithm (LRICR) for interior tomography in microscopic CT at high resolution. In general, multi-resolution acquisition based methods can solve the data truncation problem if the projection data acquired at low resolution are utilized to fill in the truncated projection data acquired at high resolution. However, most existing methods place quite strict restrictions on the data acquisition geometry, which greatly limits their utility in practice. In the proposed LRICR algorithm, full and partial data acquisitions (scans) at low and high resolutions, respectively, are carried out. Using the image reconstructed from the sparse projection data acquired at low resolution as the prior, a microscopic image at high resolution is reconstructed from the truncated projection data acquired at high resolution. Two synthesized digital phantoms, a raw bamboo culm, and a specimen of mouse femur were utilized to evaluate and verify the performance of the proposed LRICR algorithm. Compared with the conventional TV minimization based algorithm and the multi-resolution scout-reconstruction algorithm, the proposed LRICR algorithm shows significant improvement in reducing the artifacts caused by data truncation, providing a practical solution for high quality and reliable interior tomography in microscopic CT applications.

  6. Fast Constrained Spectral Clustering and Cluster Ensemble with Random Projection

    PubMed Central

    Liu, Wenfen

    2017-01-01

    The constrained spectral clustering (CSC) method can greatly improve clustering accuracy by incorporating constraint information into spectral clustering, and it has therefore received wide academic attention. In this paper, we propose a fast CSC algorithm that encodes landmark-based graph construction into a new CSC model and applies random sampling to decrease the data size after spectral embedding. Compared with the original model, the new algorithm yields similar results as its model size increases asymptotically; compared with the most efficient CSC algorithm known, the new algorithm runs faster and suits a wider range of data sets. Meanwhile, a scalable semisupervised cluster ensemble algorithm is also proposed via the combination of our fast CSC algorithm and dimensionality reduction with random projection in the process of spectral ensemble clustering. We demonstrate, through theoretical analysis and empirical results, that the new cluster ensemble algorithm has advantages in terms of efficiency and effectiveness. Furthermore, the approximate preservation of clustering accuracy under random projection, proved in the consensus clustering stage, also holds for weighted k-means clustering and thus gives a theoretical guarantee for this special kind of k-means clustering, where each point has its corresponding weight. PMID:29312447
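
    The role of the random projection can be sketched directly: a Gaussian Johnson-Lindenstrauss-style map approximately preserves pairwise distances, which is what allows the ensemble stage to cluster the projected data instead of the original (the dimensions below are illustrative):

      import numpy as np

      rng = np.random.default_rng(0)
      n, d, k = 200, 2000, 100
      X = rng.standard_normal((n, d))
      R = rng.standard_normal((d, k)) / np.sqrt(k)   # scaling preserves norms in expectation
      Y = X @ R                                      # reduced-dimension data for clustering

      i, j = 3, 17
      orig = np.linalg.norm(X[i] - X[j])
      proj = np.linalg.norm(Y[i] - Y[j])
      print(orig, proj, proj / orig)                 # ratio close to 1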

  7. Multi-period project portfolio selection under risk considerations and stochastic income

    NASA Astrophysics Data System (ADS)

    Tofighian, Ali Asghar; Moezzi, Hamid; Khakzar Barfuei, Morteza; Shafiee, Mahmood

    2018-02-01

    This paper deals with the multi-period project portfolio selection problem. In this problem, the available budget is invested in the best portfolio of projects in each period such that the net profit is maximized. We also consider more realistic assumptions to cover a wider range of applications than those reported in previous studies. A novel mathematical model is presented to solve the problem, considering risks, stochastic incomes, and the possibility of investing extra budget in each time period. Due to the complexity of the problem, an effective meta-heuristic method hybridized with a local search procedure is presented to solve it. The algorithm is based on the genetic algorithm (GA), a prominent method for this type of problem. The GA is enhanced by a new solution representation and well-chosen operators, and is hybridized with a local search mechanism to obtain better solutions in shorter time. The performance of the proposed algorithm is then compared with well-known algorithms, such as the basic genetic algorithm (GA), particle swarm optimization (PSO), and the electromagnetism-like algorithm (EM-like), by means of several prominent indicators. The computational results show the superiority of the proposed algorithm in terms of accuracy, robustness, and computation time. Finally, the proposed algorithm is combined with PSO to improve the computing time considerably.

  8. A fast optimization algorithm for multicriteria intensity modulated proton therapy planning.

    PubMed

    Chen, Wei; Craft, David; Madden, Thomas M; Zhang, Kewu; Kooy, Hanne M; Herman, Gabor T

    2010-09-01

    To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. The authors apply the algorithm to three clinical cases: a pancreas case, an esophagus case, and a tumor along the rib cage. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general-purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization.

  9. Smooth Approximation l0-Norm Constrained Affine Projection Algorithm and Its Applications in Sparse Channel Estimation

    PubMed Central

    2014-01-01

    We propose a smooth approximation l0-norm constrained affine projection algorithm (SL0-APA) to improve the convergence speed and the steady-state error of the affine projection algorithm (APA) for sparse channel estimation. The proposed algorithm ensures improved performance in terms of convergence speed and steady-state error by incorporating a smooth approximation l0-norm (SL0) penalty on the coefficients into the standard APA cost function, which gives rise to a zero attractor that promotes the sparsity of the channel taps in the channel estimation and hence accelerates the convergence speed and reduces the steady-state error when the channel is sparse. The simulation results demonstrate that our proposed SL0-APA is superior to the standard APA and its sparsity-aware algorithms in terms of both convergence speed and steady-state behavior in a designated sparse channel. Furthermore, SL0-APA is shown to have a smaller steady-state error than previously proposed sparsity-aware algorithms when the number of nonzero taps in the sparse channel increases. PMID:24790588

  10. Inherent smoothness of intensity patterns for intensity modulated radiation therapy generated by simultaneous projection algorithms

    NASA Astrophysics Data System (ADS)

    Xiao, Ying; Michalski, Darek; Censor, Yair; Galvin, James M.

    2004-07-01

    The efficient delivery of intensity modulated radiation therapy (IMRT) depends on finding optimized beam intensity patterns that produce dose distributions meeting given constraints for the tumour as well as any critical organs to be spared. Many optimization algorithms that are used for beamlet-based inverse planning are susceptible to large variations between neighbouring intensities. Accurately delivering an intensity pattern with a large number of extrema can prove impossible given the mechanical limitations of standard multileaf collimator (MLC) delivery systems. In this study, we apply Cimmino's simultaneous projection algorithm to the beamlet-based inverse planning problem, modelled mathematically as a system of linear inequalities. We show that using this method allows us to arrive at a smoother intensity pattern. From our experimental observations, including nonlinear terms in the simultaneous projection algorithm to handle dose-volume histogram (DVH) constraints does not compromise this property. The smoothness properties are compared with those of other optimization algorithms, including simulated annealing and the gradient descent method. The simultaneous nature of these algorithms is ideally suited to parallel computing technologies.
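
    Cimmino's simultaneous projection step for a system of linear inequalities Ax <= b is compact (a sketch on a toy feasibility problem; the relaxation parameter and constraints are illustrative, not a beamlet-scale planning system):

      import numpy as np

      def cimmino(A, b, n_iter=500, relax=1.0):
          # Find x with A x <= b by simultaneously projecting onto every
          # violated half-space and averaging the projection moves.
          x = np.zeros(A.shape[1])
          row_norm2 = (A * A).sum(axis=1)
          for _ in range(n_iter):
              viol = np.maximum(A @ x - b, 0.0)      # violation of each inequality
              if not viol.any():
                  break
              x -= relax * (A.T @ (viol / row_norm2)) / A.shape[0]
          return x

      # Toy system: x + y <= 1, x >= 0.2, y >= 0.2 (written as A x <= b).
      A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
      b = np.array([1.0, -0.2, -0.2])
      print(cimmino(A, b))                           # a feasible point near (0.2, 0.2)

    Because every constraint acts in each iteration and the moves are averaged, the updates tend to smooth out rather than chase individual constraints, which is consistent with the smoothness behavior reported above.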

  11. An affine projection algorithm using grouping selection of input vectors

    NASA Astrophysics Data System (ADS)

    Shin, JaeWook; Kong, NamWoong; Park, PooGyeon

    2011-10-01

    This paper presents an affine projection algorithm (APA) using grouping-based selection of input vectors. To improve the performance of the conventional APA, the proposed algorithm adjusts the number of input vectors using two procedures: a grouping procedure and a selection procedure. In the grouping procedure, input vectors that carry overlapping information for the update are grouped using the normalized inner product. Then, in the selection procedure, the few input vectors that carry enough information for the coefficient update are selected using the steady-state mean square error (MSE). Finally, the filter coefficients are updated using the selected input vectors. The experimental results show that the proposed algorithm has smaller steady-state estimation errors compared with existing algorithms.

  12. Algorithm for evaluating the effectiveness of a high-rise development project based on current yield

    NASA Astrophysics Data System (ADS)

    Soboleva, Elena

    2018-03-01

    The article addresses the operational evaluation of development project efficiency in high-rise construction under the current economic conditions in Russia. The author considers the following issues: problems of implementing development projects; the influence of the quality of operational evaluation of high-rise construction projects on overall efficiency; the influence of a project's external environment on the effectiveness of project activities under crisis conditions; and the quality of project management. The article proposes an algorithm and a methodological approach to quality management of developer project efficiency based on operational evaluation of current yield. The methodology for calculating the current efficiency of a high-rise construction development project has been updated.

  13. Transmasseteric antero-parotid approach for open reduction and internal fixation of condylar fractures.

    PubMed

    Wilson, A W; Ethunandan, M; Brennan, P A

    2005-02-01

    The morbidity that results from surgical approaches to the condylar neck and the time-consuming nature of the operation inhibit many surgeons from using open reduction and internal fixation for the treatment of condylar fractures. The many approaches that have been described stand testimony to the disadvantages of the individual techniques. The most common problems are limited access and injury to the facial nerve. We describe the transmasseteric antero-parotid (TMAP) technique, which offers swift access to the condylar neck while substantially reducing the risk to the facial nerve and eliminating the complications associated with transparotid approaches.

  14. Model and Algorithm for Substantiating Solutions for Organization of High-Rise Construction Project

    NASA Astrophysics Data System (ADS)

    Anisimov, Vladimir; Anisimov, Evgeniy; Chernysh, Anatoliy

    2018-03-01

    In this paper, models and an algorithm are developed for forming an optimal plan for organizing the material and logistical processes of a high-rise construction project and their financial support. The model represents the optimization procedure as a non-linear discrete programming problem: minimizing the execution time of a set of interrelated works performed by a limited number of partially interchangeable performers, subject to a limit on the total cost of performing the work. The proposed model and algorithm are the basis for creating specific organization management methodologies for high-rise construction projects.

  15. An Algorithm for the Weighted Earliness-Tardiness Unconstrained Project Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Afshar Nadjafi, Behrouz; Shadrokh, Shahram

    This research considers a project scheduling problem with the objective of minimizing weighted earliness-tardiness penalty costs, taking into account a deadline for the project and precedence relations among the activities. An exact recursive method has been proposed for solving the basic form of this problem. We present a new depth-first branch-and-bound algorithm for an extended form of the problem in which the time value of money is taken into account by discounting the cash flows. The algorithm is equipped with two bounding rules that reduce the size of the branch-and-bound tree. Finally, some test problems are solved and computational results are reported.
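
    The objective can be made concrete with a small evaluation routine; the sketch below scores a given schedule under the weighted earliness-tardiness criterion, with an assumed continuous discount rate alpha standing in for the discounted cash flows of the extended problem (alpha = 0 recovers the basic form).

      import numpy as np

      def wet_cost(finish, due, w_early, w_tardy, alpha=0.0):
          """Weighted earliness-tardiness cost of a schedule (sketch)."""
          finish = np.asarray(finish, dtype=float)
          early = np.maximum(due - finish, 0.0)   # completed before the due date
          tardy = np.maximum(finish - due, 0.0)   # completed after the due date
          cost = w_early * early + w_tardy * tardy
          # discount each activity's penalty back to time zero
          return float(np.sum(cost * np.exp(-alpha * finish)))

      # toy usage: three activities, unit/double weights, 5% discount rate
      print(wet_cost([4, 9, 12], np.array([5.0, 8.0, 12.0]), 1.0, 2.0, alpha=0.05))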

  16. Feature selection and back-projection algorithms for nonline-of-sight laser-gated viewing

    NASA Astrophysics Data System (ADS)

    Laurenzis, Martin; Velten, Andreas

    2014-11-01

    We discuss new approaches to analyzing laser-gated viewing data for non-line-of-sight vision, using frame-to-frame back-projection as well as feature selection algorithms. While earlier back-projection approaches use time transients for each pixel, our method calculates the projection of the imaging data onto the voxel space for each frame. Further, different data analysis algorithms and their sequential application were studied with the aim of identifying and selecting signals from different target positions. A slight modification of commonly used filters leads to a powerful selection of local maximum values. It is demonstrated that the choice of filter has an impact on the selectivity, i.e., multiple-target detection, as well as on the localization precision.

  17. Motion and positional error correction for cone beam 3D-reconstruction with mobile C-arms.

    PubMed

    Bodensteiner, C; Darolti, C; Schumacher, H; Matthäus, L; Schweikard, A

    2007-01-01

    CT-images acquired by mobile C-arm devices can contain artefacts caused by positioning errors. We propose a data driven method based on iterative 3D-reconstruction and 2D/3D-registration to correct projection data inconsistencies. With a 2D/3D-registration algorithm, transformations are computed to align the acquired projection images to a previously reconstructed volume. In an iterative procedure, the reconstruction algorithm uses the results of the registration step. This algorithm also reduces small motion artefacts within 3D-reconstructions. Experiments with simulated projections from real patient data show the feasibility of the proposed method. In addition, experiments with real projection data acquired with an experimental robotised C-arm device have been performed with promising results.

  18. Bayesian reconstruction of projection reconstruction NMR (PR-NMR).

    PubMed

    Yoon, Ji Won

    2014-11-01

    Projection reconstruction nuclear magnetic resonance (PR-NMR) is a technique for generating multidimensional NMR spectra. A small number of projections from lower-dimensional NMR spectra are used to reconstruct the multidimensional NMR spectra. In our previous work, it was shown that multidimensional NMR spectra are efficiently reconstructed using a peak-by-peak reversible jump Markov chain Monte Carlo (RJMCMC) algorithm. We propose an extended and generalized RJMCMC algorithm that replaces the simple linear model with a linear mixed model to reconstruct close NMR spectra into true spectra. This statistical method generates samples in a Bayesian scheme. Our proposed algorithm is tested on a set of six projections derived from the three-dimensional 700 MHz HNCO spectrum of the protein HasA. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. A Turn-Projected State-Based Conflict Resolution Algorithm

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Lewis, Timothy A.

    2013-01-01

    State-based conflict detection and resolution (CD&R) algorithms detect conflicts and resolve them on the basis of current state information, without the use of additional intent information from aircraft flight plans. The prediction of an aircraft's trajectory is therefore based solely upon the position and velocity vectors of the traffic aircraft. Most CD&R algorithms project the traffic state using only the current state vectors. However, past state vectors can be used to make a better prediction of the future trajectory of the traffic aircraft. This paper explores the idea of using past state vectors to detect traffic turns and to resolve conflicts caused by these turns using a non-linear projection of the traffic state. A new algorithm based on this idea is presented and validated using a fast-time simulator developed for this study.
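
    One way to realize such a non-linear projection, assuming a constant turn rate estimated from a single past velocity vector, is sketched below; the names and parameters are illustrative and do not reproduce the paper's algorithm.

      import numpy as np

      def project_with_turn(p, v_prev, v_curr, dt, t_ahead):
          """Project a 2-D traffic state t_ahead seconds forward (sketch)."""
          psi_prev = np.arctan2(v_prev[1], v_prev[0])
          psi = np.arctan2(v_curr[1], v_curr[0])
          # wrapped heading change over the last dt seconds
          dpsi = np.arctan2(np.sin(psi - psi_prev), np.cos(psi - psi_prev))
          omega = dpsi / dt                    # estimated turn rate (rad/s)
          speed = np.hypot(v_curr[0], v_curr[1])
          if abs(omega) < 1e-9:                # straight-line (state-only) fallback
              return np.asarray(p) + np.asarray(v_curr) * t_ahead
          # constant-speed circular arc of radius speed / omega
          dx = (speed / omega) * (np.sin(psi + omega * t_ahead) - np.sin(psi))
          dy = (speed / omega) * (np.cos(psi) - np.cos(psi + omega * t_ahead))
          return np.asarray(p) + np.array([dx, dy])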

  20. Three-dimensional volume containing multiple two-dimensional information patterns

    NASA Astrophysics Data System (ADS)

    Nakayama, Hirotaka; Shiraki, Atsushi; Hirayama, Ryuji; Masuda, Nobuyuki; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2013-06-01

    We have developed an algorithm for recording multiple gradated two-dimensional projection patterns in a single three-dimensional object. When a single pattern is observed, information from the other patterns can be treated as background noise. The proposed algorithm has two important features: the number of patterns that can be recorded is theoretically infinite and no meaningful information can be seen outside of the projection directions. We confirmed the effectiveness of the proposed algorithm by performing numerical simulations of two laser crystals: an octagonal prism that contained four patterns in four projection directions and a dodecahedron that contained six patterns in six directions. We also fabricated and demonstrated an actual prototype laser crystal from a glass cube engraved by a laser beam. This algorithm has applications in various fields, including media art, digital signage, and encryption technology.

  1. Undergraduate computational physics projects on quantum computing

    NASA Astrophysics Data System (ADS)

    Candela, D.

    2015-08-01

    Computational projects on quantum computing suitable for students in a junior-level quantum mechanics course are described. In these projects students write their own programs to simulate quantum computers. Knowledge is assumed of introductory quantum mechanics through the properties of spin 1/2. Initial, more easily programmed projects treat the basics of quantum computation, quantum gates, and Grover's quantum search algorithm. These are followed by more advanced projects to increase the number of qubits and implement Shor's quantum factoring algorithm. The projects can be run on a typical laptop or desktop computer, using most programming languages. Supplementing resources available elsewhere, the projects are presented here in a self-contained format especially suitable for a short computational module for physics students.
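
    As a taste of the first project stage, Grover's algorithm really can be simulated in a handful of lines on the full state vector; the sketch below is a generic illustration, not material from the article.

      import numpy as np

      def grover_search(n_qubits, marked, n_iter=None):
          """Simulate Grover's search on a dense state vector (sketch)."""
          N = 2 ** n_qubits
          if n_iter is None:                       # ~ (pi/4) sqrt(N) iterations
              n_iter = int(np.floor(np.pi / 4 * np.sqrt(N)))
          psi = np.full(N, 1 / np.sqrt(N))         # uniform superposition
          for _ in range(n_iter):
              psi[marked] *= -1                    # oracle: phase-flip the target
              psi = 2 * psi.mean() - psi           # diffusion: invert about the mean
          return int(np.argmax(np.abs(psi) ** 2))  # most probable measurement

      print(grover_search(6, marked=42))           # prints 42 with high probability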

  2. A family of variable step-size affine projection adaptive filter algorithms using statistics of channel impulse response

    NASA Astrophysics Data System (ADS)

    Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar

    2011-12-01

    This paper extends the recently introduced variable step-size (VSS) approach to the family of adaptive filter algorithms. The method uses prior knowledge of the channel impulse response statistics. Accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU algorithms the filter coefficients are partially updated, which reduces the computational complexity. In VSS-SR-APA, an optimal selection of input regressors is performed during the adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.

  3. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
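
    For orientation, a minimal generational genetic algorithm is sketched below; tournament selection, one-point crossover and bit-flip mutation are generic textbook operators, and none of this is the software tool described in the abstract.

      import random

      def genetic_algorithm(fitness, n_bits=20, pop_size=50, n_gen=100,
                            p_mut=0.02, p_cross=0.9):
          """Minimal generational GA with tournament selection (sketch)."""
          pop = [[random.randint(0, 1) for _ in range(n_bits)]
                 for _ in range(pop_size)]
          for _ in range(n_gen):
              def tournament():
                  a, b = random.sample(pop, 2)
                  return a if fitness(a) >= fitness(b) else b
              children = []
              while len(children) < pop_size:
                  p1, p2 = tournament(), tournament()
                  if random.random() < p_cross:        # one-point crossover
                      cut = random.randrange(1, n_bits)
                      p1 = p1[:cut] + p2[cut:]
                  # independent bit-flip mutation
                  children.append([bit ^ (random.random() < p_mut) for bit in p1])
              pop = children
          return max(pop, key=fitness)

      # toy usage: maximize the number of ones ("one-max")
      best = genetic_algorithm(sum)
      print(best, sum(best))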

  4. Fraction Reduction through Continued Fractions

    ERIC Educational Resources Information Center

    Carley, Holly

    2011-01-01

    This article presents a method of reducing fractions without factoring. The ideas presented may be useful as a project for motivated students in an undergraduate number theory course. The discussion is related to the Euclidean Algorithm, and its variations may lead to projects or early examples involving the efficiency of an algorithm.
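
    The core idea fits in a few lines: the Euclidean algorithm produces both the continued-fraction expansion of a/b and the gcd needed to reduce the fraction, with no factoring anywhere (a generic sketch, not the article's presentation).

      def continued_fraction(a, b):
          """Continued-fraction terms of a/b via the Euclidean algorithm."""
          terms = []
          while b:
              q, r = divmod(a, b)
              terms.append(q)
              a, b = b, r
          return terms

      def reduce_fraction(a, b):
          """Reduce a/b without factoring either number."""
          g, r = a, b
          while r:
              g, r = r, g % r          # g ends up as gcd(a, b)
          return a // g, b // g

      print(continued_fraction(1071, 462))   # [2, 3, 7]
      print(reduce_fraction(1071, 462))      # (51, 22)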

  5. Advanced processing for high-bandwidth sensor systems

    NASA Astrophysics Data System (ADS)

    Szymanski, John J.; Blain, Phil C.; Bloch, Jeffrey J.; Brislawn, Christopher M.; Brumby, Steven P.; Cafferty, Maureen M.; Dunham, Mark E.; Frigo, Janette R.; Gokhale, Maya; Harvey, Neal R.; Kenyon, Garrett; Kim, Won-Ha; Layne, J.; Lavenier, Dominique D.; McCabe, Kevin P.; Mitchell, Melanie; Moore, Kurt R.; Perkins, Simon J.; Porter, Reid B.; Robinson, S.; Salazar, Alfonso; Theiler, James P.; Young, Aaron C.

    2000-11-01

    Compute performance and algorithm design are key problems of image processing and scientific computing in general. For example, imaging spectrometers are capable of producing data in hundreds of spectral bands with millions of pixels. These data sets show great promise for remote sensing applications, but require new and computationally intensive processing. The goal of the Deployable Adaptive Processing Systems (DAPS) project at Los Alamos National Laboratory is to develop advanced processing hardware and algorithms for high-bandwidth sensor applications. The project has produced electronics for processing multi- and hyper-spectral sensor data, as well as LIDAR data, while employing processing elements using a variety of technologies. The project team is currently working on reconfigurable computing technology and advanced feature extraction techniques, with an emphasis on their application to image and RF signal processing. This paper presents reconfigurable computing technology and advanced feature extraction algorithm work and their application to multi- and hyperspectral image processing. Related projects on genetic algorithms as applied to image processing will be introduced, as will the collaboration between the DAPS project and the DARPA Adaptive Computing Systems program. Further details are presented in other talks during this conference and in other conferences taking place during this symposium.

  6. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    PubMed Central

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Kuncic, Zdenka; Keall, Paul J.

    2014-01-01

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR values were found to increase with decreasing RMSE values of projection angular gaps with strong correlations (r ≈ −0.7) regardless of the reconstruction algorithm used. Conclusions: Based on the authors’ results, displacement-based binning methods, better reconstruction algorithms, and the acquisition of even projection angular views are the most important factors to consider for improving thoracic 4D-CBCT image quality. In view of the practical issues with displacement-based binning and the fact that projection angular spacing is not currently directly controllable, development of better reconstruction algorithms represents the most effective strategy for improving image quality in thoracic 4D-CBCT for IGRT applications at the current stage. PMID:24694143

  7. Unsupervised Cryo-EM Data Clustering through Adaptively Constrained K-Means Algorithm

    PubMed Central

    Xu, Yaofang; Wu, Jiayi; Yin, Chang-Cheng; Mao, Youdong

    2016-01-01

    In single-particle cryo-electron microscopy (cryo-EM), the K-means clustering algorithm is widely used in unsupervised 2D classification of projection images of biological macromolecules. 3D ab initio reconstruction requires accurate unsupervised classification in order to separate molecular projections of distinct orientations. Due to background noise in single-particle images and uncertainty of molecular orientations, the traditional K-means clustering algorithm may classify images into wrong classes and produce classes with a large variation in membership. Overcoming these limitations requires further development of clustering algorithms for cryo-EM data analysis. We propose a novel unsupervised data clustering method building upon the traditional K-means algorithm. By introducing an adaptive constraint term in the objective function, our algorithm not only avoids a large variation in class sizes but also produces more accurate data clustering. Applications of this approach to both simulated and experimental cryo-EM data demonstrate that our algorithm is a significantly improved alternative to the traditional K-means algorithm in single-particle cryo-EM analysis. PMID:27959895

  8. Unsupervised Cryo-EM Data Clustering through Adaptively Constrained K-Means Algorithm.

    PubMed

    Xu, Yaofang; Wu, Jiayi; Yin, Chang-Cheng; Mao, Youdong

    2016-01-01

    In single-particle cryo-electron microscopy (cryo-EM), the K-means clustering algorithm is widely used in unsupervised 2D classification of projection images of biological macromolecules. 3D ab initio reconstruction requires accurate unsupervised classification in order to separate molecular projections of distinct orientations. Due to background noise in single-particle images and uncertainty of molecular orientations, the traditional K-means clustering algorithm may classify images into wrong classes and produce classes with a large variation in membership. Overcoming these limitations requires further development of clustering algorithms for cryo-EM data analysis. We propose a novel unsupervised data clustering method building upon the traditional K-means algorithm. By introducing an adaptive constraint term in the objective function, our algorithm not only avoids a large variation in class sizes but also produces more accurate data clustering. Applications of this approach to both simulated and experimental cryo-EM data demonstrate that our algorithm is a significantly improved alternative to the traditional K-means algorithm in single-particle cryo-EM analysis.
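
    The paper's exact constraint term is not reproduced here, but the flavor of a size-penalized assignment step can be sketched as follows; lam, which controls how strongly large clusters are discouraged, and the sequential randomized assignment order are assumptions of this illustration.

      import numpy as np

      def constrained_kmeans(X, k, lam=1.0, n_iter=50, seed=0):
          """K-means with a size-penalized assignment step (sketch)."""
          rng = np.random.default_rng(seed)
          centers = X[rng.choice(len(X), k, replace=False)].astype(float)
          labels = np.zeros(len(X), dtype=int)
          for _ in range(n_iter):
              sizes = np.zeros(k)
              for i in rng.permutation(len(X)):
                  # squared distance plus a penalty on the running cluster size
                  cost = ((X[i] - centers) ** 2).sum(axis=1) + lam * sizes
                  labels[i] = int(np.argmin(cost))
                  sizes[labels[i]] += 1
              for j in range(k):                 # standard center update
                  if np.any(labels == j):
                      centers[j] = X[labels == j].mean(axis=0)
          return labels, centers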

  9. Approximated affine projection algorithm for feedback cancellation in hearing aids.

    PubMed

    Lee, Sangmin; Kim, In-Young; Park, Young-Cheol

    2007-09-01

    We propose an approximated affine projection (AP) algorithm for feedback cancellation in hearing aids. It is based on the conventional approach using the Gauss-Seidel (GS) iteration, but provides more stable convergence behaviour even with small step sizes. In the proposed algorithm, a residue of the weighted error vector, instead of the current error sample, is used to provide stable convergence. A new learning rate control scheme is also applied to the proposed algorithm to prevent signal cancellation and system instability. The new scheme determines step size in proportion to the prediction factor of the input, so that adaptation is inhibited whenever tone-like signals are present in the input. Simulation results verified the efficiency of the proposed algorithm.

  10. A Survey of the Use of Iterative Reconstruction Algorithms in Electron Microscopy

    PubMed Central

    Otón, J.; Vilas, J. L.; Kazemi, M.; Melero, R.; del Caño, L.; Cuenca, J.; Conesa, P.; Gómez-Blanco, J.; Marabini, R.; Carazo, J. M.

    2017-01-01

    One of the key steps in Electron Microscopy is the tomographic reconstruction of a three-dimensional (3D) map of the specimen being studied from a set of two-dimensional (2D) projections acquired at the microscope. This tomographic reconstruction may be performed with different reconstruction algorithms that can be grouped into several large families: direct Fourier inversion methods, back-projection methods, Radon methods, or iterative algorithms. In this review, we focus on the latter family of algorithms, explaining the mathematical rationale behind the different algorithms in this family as they have been introduced in the field of Electron Microscopy. We cover their use in Single Particle Analysis (SPA) as well as in Electron Tomography (ET). PMID:29312997

  11. Exact and approximate Fourier rebinning algorithms for the solution of the data truncation problem in 3-D PET.

    PubMed

    Bouallègue, Fayçal Ben; Crouzet, Jean-François; Comtat, Claude; Fourcade, Marjolaine; Mohammadi, Bijan; Mariano-Goulart, Denis

    2007-07-01

    This paper presents an extended 3-D exact rebinning formula in the Fourier space that leads to an iterative reprojection algorithm (iterative FOREPROJ), which enables the estimation of unmeasured oblique projection data on the basis of the whole set of measured data. In first approximation, this analytical formula also leads to an extended Fourier rebinning equation that is the basis for an approximate reprojection algorithm (extended FORE). These algorithms were evaluated on numerically simulated 3-D positron emission tomography (PET) data for the solution of the truncation problem, i.e., the estimation of the missing portions in the oblique projection data, before the application of algorithms that require complete projection data such as some rebinning methods (FOREX) or 3-D reconstruction algorithms (3DRP or direct Fourier methods). By taking advantage of all the 3-D data statistics, the iterative FOREPROJ reprojection provides a reliable alternative to the classical FOREPROJ method, which only exploits the low-statistics nonoblique data. It significantly improves the quality of the external reconstructed slices without loss of spatial resolution. As for the approximate extended FORE algorithm, it clearly exhibits limitations due to axial interpolations, but will require clinical studies with more realistic measured data in order to decide on its pertinence.

  12. Reconstruction of internal density distributions in porous bodies from laser ultrasonic data

    NASA Technical Reports Server (NTRS)

    Lu, Yichi; Goldman, Jeffrey A.; Wadley, Haydn N. G.

    1992-01-01

    It is presently shown that, for density-reconstruction problems in which information about the inhomogeneity is known a priori, the nonlinear least-squares algorithm yields satisfactory results on the basis of limited projection data. The back-projection algorithm, which obviates assumptions about the objective function to be reconstructed, does not recover the boundary of the inhomogeneity when the number of projections is limited and ray-bending is ignored.

  13. Construction project selection with the use of fuzzy preference relation

    NASA Astrophysics Data System (ADS)

    Ibadov, Nabi

    2016-06-01

    In this article, the author describes the problem of selecting a construction project variant during the pre-investment phase. As a solution, an algorithm based on the fuzzy preference relation is presented. The article provides an example of the algorithm used to select the best variant for a construction project. The choice is made based on criteria such as net present value (NPV), level of technological difficulty, financing possibilities, and level of organizational difficulty.

  14. The impact of oxytocin administration on brain activity: a systematic review and meta-analysis protocol.

    PubMed

    Quintana, Daniel S; Outhred, Tim; Westlye, Lars T; Malhi, Gin S; Andreassen, Ole A

    2016-11-29

    Converging evidence demonstrates the important role of the neuropeptide hormone oxytocin (OT) in human behaviour and cognition. Intranasal OT administration has been shown to improve several aspects of social communication, such as the theory of mind performance and gaze to the eye region, and reduce anxiety and related negative cognitive appraisals. While this early research has demonstrated the potential for intranasal OT to treat psychiatric illnesses characterized by social impairments, the neurobiological mechanisms are not well known. Researchers have used functional magnetic resonance imaging (fMRI) to examine the neural correlates of OT response; however, results have been variable and moderating factors are poorly understood. The aim of this meta-analysis is to synthesize data examining the impact of intranasal OT administration on neural activity. Studies that report fMRI data after intranasal OT administration will be identified. PubMed, Embase, PsycINFO, and Google Scholar databases will be searched as well as the citation lists of retrieved articles. Eligible articles written in English from 2005 onwards will be included in the meta-analysis, and corresponding authors of these papers will be invited to contribute t-maps. Data will be collected from eligible studies for synthesis using Seed-based d Mapping (SDM) or Multi-Level Kernel Density Analysis (MKDA), depending on the number of usable t-maps received. Additionally, publication bias and risk of bias will be assessed. This systematic review and meta-analysis will be the first pre-registered synthesis of data to identify the neural correlates of OT nasal spray response. The identification of brain regions underlying OT's observed effects will help guide future research and better identify treatment targets. PROSPERO CRD42016038781.

  15. Preauricular transmasseteric anteroparotid approach for extracorporeal fixation of mandibular condyle fractures.

    PubMed

    Gali, Rajasekhar; Devireddy, Sathya Kumar; Venkata, Kishore Kumar Rayadurgam; Kanubaddy, Sridhar Reddy; Nemaly, Chaithanyaa; Dasari, Mallikarjuna

    2016-01-01

    Free grafting or extracorporeal fixation of traumatically displaced mandibular condyles is sometimes required in patients with severe anteromedial displacement of the condylar head. The majority of published studies report the use of submandibular, retromandibular or preauricular incisions for access, which have the demerits of limited visibility and access and the potential to cause damage to the facial nerve and other parotid gland-related complications. This retrospective clinical case record study was done to evaluate the preauricular transmasseteric anteroparotid (P-TMAP) approach for open reduction and extracorporeal fixation of displaced and dislocated high condylar fractures of the mandible. This retrospective study involved a search of the clinical case records of seven patients with displaced and dislocated high condylar fractures treated by open reduction and extracorporeal fixation over a 3-year period. The parameters assessed were as follows: a) the ease of access for retrieval, reimplantation and fixation of the proximal segment; b) the postoperative approach-related complications; c) the adequacy of anatomical reduction and stability of fixation; d) the occlusal changes; and e) the TMJ function and radiological changes. Accessibility and visibility were good. Accurate anatomical reduction and fixation were achieved in all the patients. The recorded complications were minimal and transient. Facial nerve (buccal branch) palsy was noted in one patient, with spontaneous resolution within 3 months. No cases of sialocele or Frey's syndrome were seen. The P-TMAP approach provides good access for open reduction and extracorporeal fixation of severely displaced condylar fractures. It facilitates retrieval, transplantation, repositioning and fixing of the condyle, and also reduces the likelihood that a vertical ramus osteotomy will be required. It gives straight-line access to the condylar head and ramus, thereby permitting perpendicular placement of screws with minimal risk of damage to the facial nerve.

  16. Vibrational and electronic circular dichroism study of the interactions of cationic porphyrins with (dG-dC)10 and (dA-dT)10.

    PubMed

    Nový, Jakub; Urbanová, Marie

    2007-03-01

    The interactions of two different porphyrins with (dG-dC)10 and (dA-dT)10 were studied: 5,10,15,20-tetrakis(1-methylpyridinium-4-yl)porphyrin-Cu(II) tetrachloride (Cu(II)TMPyP), which lacks axial ligands, and 5,10,15,20-tetrakis(N,N,N-trimethylanilinium-4-yl)porphyrin tetrachloride (TMAP), which has bulky meso substituents. The study combined vibrational circular dichroism (VCD) and electronic circular dichroism (ECD) spectroscopy at different [oligonucleotide]/[porphyrin] ratios, where [oligonucleotide] and [porphyrin] are the concentrations of oligonucleotide per base pair and of porphyrin, respectively. The combination of VCD and ECD spectroscopy enables us to identify the types of interactions and to specify the sites of interactions: the intercalative binding mode of Cu(II)TMPyP with (dG-dC)10, which has been well described, was characterized by a new VCD "marker", and it was shown that the interaction of Cu(II)TMPyP with (dA-dT)10 via external binding to the phosphate backbone and major-groove binding caused a transition from the B to a non-B conformer. TMAP interacted with the major groove of (dG-dC)10, was semi-intercalated into (dA-dT)10, and caused significant variation in the structure of both oligonucleotides at the higher porphyrin concentration. The spectroscopic techniques used in this study revealed that porphyrin binding to AT sequences caused substantial variation of the DNA structure. It was shown that VCD spectroscopy is an effective tool for conformational studies of nucleic acid-porphyrin complexes in solution. (c) 2007 Wiley Periodicals, Inc.

  17. Preauricular transmasseteric anteroparotid approach for extracorporeal fixation of mandibular condyle fractures

    PubMed Central

    Gali, Rajasekhar; Devireddy, Sathya Kumar; Venkata, Kishore Kumar Rayadurgam; Kanubaddy, Sridhar Reddy; Nemaly, Chaithanyaa; Dasari, Mallikarjuna

    2016-01-01

    Introduction: Free grafting or extracorporeal fixation of traumatically displaced mandibular condyles is sometimes required in patients with severe anteromedial displacement of the condylar head. The majority of published studies report the use of submandibular, retromandibular or preauricular incisions for access, which have the demerits of limited visibility and access and the potential to cause damage to the facial nerve and other parotid gland-related complications. Purpose: This retrospective clinical case record study was done to evaluate the preauricular transmasseteric anteroparotid (P-TMAP) approach for open reduction and extracorporeal fixation of displaced and dislocated high condylar fractures of the mandible. Patients and Methods: This retrospective study involved a search of the clinical case records of seven patients with displaced and dislocated high condylar fractures treated by open reduction and extracorporeal fixation over a 3-year period. The parameters assessed were as follows: a) the ease of access for retrieval, reimplantation and fixation of the proximal segment; b) the postoperative approach-related complications; c) the adequacy of anatomical reduction and stability of fixation; d) the occlusal changes; and e) the TMJ function and radiological changes. Results: Accessibility and visibility were good. Accurate anatomical reduction and fixation were achieved in all the patients. The recorded complications were minimal and transient. Facial nerve (buccal branch) palsy was noted in one patient, with spontaneous resolution within 3 months. No cases of sialocele or Frey's syndrome were seen. Conclusion: The P-TMAP approach provides good access for open reduction and extracorporeal fixation of severely displaced condylar fractures. It facilitates retrieval, transplantation, repositioning and fixing of the condyle, and also reduces the likelihood that a vertical ramus osteotomy will be required. It gives straight-line access to the condylar head and ramus, thereby permitting perpendicular placement of screws with minimal risk of damage to the facial nerve. PMID:27274123

  18. Multiple R&D projects scheduling optimization with improved particle swarm algorithm.

    PubMed

    Liu, Mengqi; Shan, Miyuan; Wu, Juan

    2014-01-01

    For most enterprises, a key step toward winning the initiative in fierce market competition is to improve their R&D ability so as to meet the various demands of customers more timely and less costly. This paper discusses the features of multiple R&D environments in large make-to-order enterprises under constrained human resources and budget, and puts forward a multi-project scheduling model for a given period. Furthermore, we make some improvements to the existing particle swarm algorithm and apply the improved algorithm to the resource-constrained multi-project scheduling model in a simulation experiment. The feasibility of the model and the validity of the algorithm are demonstrated in the experiment.
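
    For readers unfamiliar with the technique, a minimal particle swarm optimizer is sketched below; the inertia and acceleration coefficients are common textbook defaults, and the scheduling-specific encoding and improvements of the paper are not reproduced.

      import numpy as np

      def pso(f, lo, hi, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
          """Minimal particle swarm optimizer for a box-constrained objective."""
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(lo, float), np.asarray(hi, float)
          x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
          v = np.zeros_like(x)
          pbest = x.copy()
          pbest_val = np.array([f(p) for p in x])
          g = pbest[np.argmin(pbest_val)].copy()      # global best position
          for _ in range(n_iter):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              vals = np.array([f(p) for p in x])
              better = vals < pbest_val               # update personal bests
              pbest[better], pbest_val[better] = x[better], vals[better]
              g = pbest[np.argmin(pbest_val)].copy()
          return g, float(pbest_val.min())

      # toy usage: minimize the 5-dimensional sphere function
      print(pso(lambda p: float((p ** 2).sum()), [-5] * 5, [5] * 5))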

  19. Investigation of frame-to-frame back projection and feature selection algorithms for non-line-of-sight laser gated viewing

    NASA Astrophysics Data System (ADS)

    Laurenzis, Martin; Velten, Andreas

    2014-10-01

    In the present paper, we discuss new approaches to analyzing laser-gated viewing data for non-line-of-sight vision, using a novel frame-to-frame back-projection as well as feature selection algorithms. While earlier back-projection approaches use time transients for each pixel, our new method calculates the projection of the imaging data onto the obscured voxel space for each frame. Further, four different data analysis algorithms were studied with the aim of identifying and selecting signals from different target positions. A slight modification of commonly used filters leads to a powerful selection of local maximum values. It is demonstrated that the choice of filter has an impact on the selectivity, i.e., multiple-target detection, as well as on the localization precision.

  20. Refraction law and Fermat principle: a project using the ant colony optimization algorithm for undergraduate students in physics

    NASA Astrophysics Data System (ADS)

    Vuong, Q. L.; Rigaut, C.; Gossuin, Y.

    2018-07-01

    A programming project for undergraduate students in physics is proposed in this work. Its goal is to check the Snell–Descartes law of refraction using the Fermat principle and the ant colony optimization algorithm. The project involves basic mathematics and physics and is adapted to students with basic programming skills. More advanced tools, such as parallelization or object-oriented programming, can be used but are not mandatory, which makes the project also suitable for more experienced students. We propose two tests to validate the program. Our algorithm is able to find solutions which are close to the theoretical predictions. Two quantities are defined to study its convergence and the quality of the solutions. It is also shown that the choice of the values of the simulation parameters is important for obtaining precise results efficiently.
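
    The heart of such a project can be compressed into a short script: a simplified ant colony lays pheromone over a discretized crossing point on the interface, and deposits proportional to inverse travel time steer the colony toward the Fermat path. All constants below are illustrative assumptions, not the article's settings.

      import numpy as np

      # Light travels from A (above the interface y = 0, speed v1) to B
      # (below it, speed v2); ants choose the crossing point x on a grid.
      A, B = np.array([0.0, 1.0]), np.array([2.0, -1.0])
      v1, v2 = 1.0, 0.5
      xs = np.linspace(0.0, 2.0, 201)            # candidate crossing points
      tau = np.ones_like(xs)                     # pheromone levels

      def travel_time(x):
          return (np.hypot(x - A[0], A[1]) / v1
                  + np.hypot(B[0] - x, B[1]) / v2)

      rng = np.random.default_rng(0)
      for _ in range(300):                       # colony iterations
          ants = rng.choice(len(xs), size=20, p=tau / tau.sum())
          tau *= 0.95                            # pheromone evaporation
          for i in ants:
              tau[i] += 1.0 / travel_time(xs[i]) # deposit on the used point

      x_best = xs[np.argmax(tau)]
      # check against Snell's law: sin(theta1)/v1 should equal sin(theta2)/v2
      s1 = (x_best - A[0]) / np.hypot(x_best - A[0], A[1])
      s2 = (B[0] - x_best) / np.hypot(B[0] - x_best, B[1])
      print(x_best, s1 / v1, s2 / v2)            # the two ratios should be close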

  1. Information mining in weighted complex networks with nonlinear rating projection

    NASA Astrophysics Data System (ADS)

    Liao, Hao; Zeng, An; Zhou, Mingyang; Mao, Rui; Wang, Bing-Hong

    2017-10-01

    Weighted rating networks are commonly used by e-commerce providers nowadays. In order to generate an objective ranking of online items' quality according to users' ratings, many sophisticated algorithms have been proposed in the complex networks domain. In this paper, instead of proposing new algorithms we focus on a more fundamental problem: the nonlinear rating projection. The basic idea is that even though the rating values given by users are linearly spaced, users' real preferences between the different given values are nonlinear. We thus design an approach to project the original ratings of users onto more representative values. This approach can be regarded as a data pretreatment method. Simulations in both artificial and real networks show that the performance of the ranking algorithms can be improved when the projected ratings are used.

  2. DEGAS: Dynamic Exascale Global Address Space Programming Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demmel, James

    The Dynamic, Exascale Global Address Space programming environment (DEGAS) project will develop the next generation of programming models and runtime systems to meet the challenges of Exascale computing. The Berkeley part of the project concentrated on communication-optimal code generation to optimize speed and energy efficiency by reducing data movement. Our work developed communication lower bounds and/or communication-avoiding algorithms (that either meet the lower bound or do much less communication than their conventional counterparts) for a variety of algorithms, including linear algebra, machine learning and genomics.

  3. Entropy-aware projected Landweber reconstruction for quantized block compressive sensing of aerial imagery

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Li, Kangda; Wang, Bing; Tang, Hainie; Gong, Xiaohui

    2017-01-01

    A quantized block compressive sensing (QBCS) framework, which incorporates the universal measurement, quantization/inverse quantization, entropy coder/decoder, and iterative projected Landweber reconstruction, is summarized. Under the QBCS framework, this paper presents an improved reconstruction algorithm for aerial imagery, QBCS with entropy-aware projected Landweber (QBCS-EPL), which leverages the full-image sparse transform without a Wiener filter and an entropy-aware thresholding model for wavelet-domain image denoising. By analyzing the functional relation between the soft-thresholding factors and entropy-based bitrates for different quantization methods, the proposed model can effectively remove wavelet-domain noise of bivariate shrinkage and achieve better image reconstruction quality. For the overall performance of QBCS reconstruction, experimental results demonstrate that the proposed QBCS-EPL algorithm significantly outperforms several existing algorithms. With the experiment-driven methodology, the QBCS-EPL algorithm obtains better reconstruction quality at a relatively moderate computational cost, which makes it more desirable for aerial imagery applications.
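
    The iterative core of such a reconstruction can be sketched as a Landweber gradient step followed by a soft-threshold shrinkage, with the shrinkage standing in for the entropy-aware thresholding model (which is not reproduced here); Phi, gamma and thresh are assumed names and parameters.

      import numpy as np

      def projected_landweber(y, Phi, n_iter=100, gamma=None, thresh=0.05):
          """Projected Landweber with soft thresholding (sketch).

          Approximately solves y = Phi @ x for a sparse x.
          """
          if gamma is None:                      # step size from the spectral norm
              gamma = 1.0 / np.linalg.norm(Phi, 2) ** 2
          x = Phi.T @ y
          for _ in range(n_iter):
              x = x + gamma * Phi.T @ (y - Phi @ x)                 # Landweber step
              x = np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)  # shrinkage
          return x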

  4. The Texas Children's Medication Algorithm Project: Revision of the Algorithm for Pharmacotherapy of Attention-Deficit/Hyperactivity Disorder

    ERIC Educational Resources Information Center

    Pliszka, Steven R.; Crismon, M. Lynn; Hughes, Carroll W.; Conners, C. Keith; Emslie, Graham J.; Jensen, Peter S.; McCracken, James T.; Swanson, James M.; Lopez, Molly

    2006-01-01

    Objective: In 1998, the Texas Department of Mental Health and Mental Retardation developed algorithms for medication treatment of attention-deficit/hyperactivity disorder (ADHD). Advances in the psychopharmacology of ADHD and results of a feasibility study of algorithm use in community mental health centers caused the algorithm to be modified and…

  5. Managing and learning with multiple models: Objectives and optimization algorithms

    USGS Publications Warehouse

    Probert, William J. M.; Hauser, C.E.; McDonald-Madden, E.; Runge, M.C.; Baxter, P.W.J.; Possingham, H.P.

    2011-01-01

    The quality of environmental decisions should be gauged according to managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management action in combination with a monitoring system. Furthermore, actions can be chosen strategically to obtain specific kinds of information. Formal decision making tools can choose actions to favor such learning in two ways: implicitly via the optimization algorithm that is used when there is a management objective (for instance, when using adaptive management), or explicitly by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives - a pure management objective, a pure learning objective, and an objective that is a weighted mixture of these two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision making tools in conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision making tools can be improved. © 2010 Elsevier Ltd.

  6. A reconstruction method for cone-beam differential x-ray phase-contrast computed tomography.

    PubMed

    Fu, Jian; Velroyen, Astrid; Tan, Renbo; Zhang, Junwei; Chen, Liyuan; Tapfer, Arne; Bech, Martin; Pfeiffer, Franz

    2012-09-10

    Most existing differential phase-contrast computed tomography (DPC-CT) approaches are based on three kinds of scanning geometries: parallel-beam, fan-beam and cone-beam. Due to the potential of compact imaging systems with magnified spatial resolution, cone-beam DPC-CT has attracted significant interest. In this paper, we report a reconstruction method based on a back-projection filtration (BPF) algorithm for cone-beam DPC-CT. Due to the differential nature of phase-contrast projections, the algorithm refrains from differentiation of the projection data prior to back-projection, unlike BPF algorithms commonly used for absorption-based CT data. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured with a three-grating interferometer and a micro-focus x-ray tube source. Moreover, the numerical simulation and experimental results demonstrate that the proposed method can deal with several classes of truncated cone-beam datasets. We believe that this feature is of particular interest for future medical cone-beam phase-contrast CT imaging applications.

  7. A New Method of Synthetic Aperture Radar Image Reconstruction Using Modified Convolution Back-Projection Algorithm.

    DTIC Science & Technology

    1986-08-01

    The Convolution Back-Projection (CBP) algorithm is a widely used technique in Computer Aided Tomography (CAT). In this work, a modified CBP algorithm is applied to reconstruct a synthetic aperture radar image from a set of projections. (Thesis, University of Illinois at Urbana-Champaign, 1985; approved for public release.)

  8. SU-D-17A-02: Four-Dimensional CBCT Using Conventional CBCT Dataset and Iterative Subtraction Algorithm of a Lung Patient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, E; Lasio, G; Yi, B

    2014-06-01

    Purpose: The Iterative Subtraction Algorithm (ISA) method retrospectively generates a pre-selected motion-phase cone-beam CT image from the full-motion cone-beam CT acquired at standard rotation speed. This work evaluates the ISA method with real lung patient data. Methods: The goal of the ISA algorithm is to extract motion and no-motion components from the full-reconstruction CBCT. The workflow consists of subtracting from the full CBCT all of the undesired motion phases to obtain a motion-deblurred single-phase CBCT image, followed by iteration of this subtraction process. ISA is realized as follows: 1) The projections are sorted into phases, and a full reconstruction from all phases generates an image CTM. 2) Forward projections of CTM are generated at the desired phase's projection angles; reconstructing the difference between the measured projections and these forward projections yields CTSub1, in which the desired phase component is diminished. 3) Adding CTSub1 back to CTM yields a no-motion CBCT, CTS1. 4) CTS1 still contains a residual motion component. 5) This residual can be further reduced by iteration. The ISA 4D-CBCT technique was implemented using the Varian Trilogy accelerator OBI system. To evaluate the method, a lung patient CBCT dataset was used. The reconstruction algorithm is FDK. Results: The single-phase CBCT reconstruction generated via ISA successfully isolates the desired motion phase from the full-motion CBCT, effectively reducing motion blur. It also shows improved image quality, with reduced streak artifacts with respect to reconstructions from unprocessed phase-sorted projections only. Conclusion: A CBCT motion-deblurring algorithm, ISA, has been developed and evaluated with lung patient data. The algorithm allows improved visualization of a single motion phase extracted from a standard CBCT dataset. This study was supported by the National Institutes of Health through R01CA133539.

  9. A sparsity-based iterative algorithm for reconstruction of micro-CT images from highly undersampled projection datasets obtained with a synchrotron X-ray source

    NASA Astrophysics Data System (ADS)

    Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.

    2016-12-01

    Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images leading to significantly high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without reduction of reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative-based performance assessment of a synthetic head phantom and a femoral cortical bone sample imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source demonstrates that the proposed algorithm is superior to the existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time which improves in vivo imaging protocols.

  10. Extended volume coverage in helical cone-beam CT by using PI-line based BPF algorithm

    NASA Astrophysics Data System (ADS)

    Cho, Seungryong; Pan, Xiaochuan

    2007-03-01

    We compared the data requirements of filtered-backprojection (FBP) and backprojection-filtration (BPF) algorithms based on PI-lines in helical cone-beam CT. Since the filtration process in the FBP algorithm needs all the projection data of the PI-lines for each view, the required detector size should be bigger than the size that covers the Tam-Danielsson (T-D) window in order to avoid data truncation. The BPF algorithm, however, requires the projection data only within the T-D window, which means that a smaller detector size can be used to reconstruct the same image than in FBP. In other words, a longer helical pitch can be obtained by using the BPF algorithm without any truncation artifacts when a fixed detector size is given. The purpose of this work is to demonstrate numerically that extended volume coverage in helical cone-beam CT can be achieved by using the PI-line-based BPF algorithm.

  11. The importance of ray pathlengths when measuring objects in maximum intensity projection images.

    PubMed

    Schreiner, S; Dawant, B M; Paschal, C B; Galloway, R L

    1996-01-01

    It is important to understand any process that affects medical data. Once the data have changed from their original form, one must consider the possibility that the information contained in the data has also changed. In general, false negative and false positive diagnoses caused by this post-processing must be minimized. Medical imaging is one area in which post-processing is commonly performed, but there is often little or no discussion of how these algorithms affect the data. This study uncovers some interesting properties of maximum intensity projection (MIP) algorithms, which are commonly used in the post-processing of magnetic resonance (MR) and computed tomography (CT) angiographic data. The appearance of the width of vessels and the extent of malformations such as aneurysms is of interest to clinicians. This study shows how MIP algorithms interact with the shape of the object being projected. MIPs can make objects appear thinner in the projection than in the original data set and can also alter the shape of the object's profile seen in the original data. These effects have consequences for width-measuring algorithms, which will be discussed. Each projected intensity depends upon the pathlength of the ray from which the projected pixel arises. The morphology (shape and intensity profile) of an object changes the pathlength that each ray experiences. This is termed the pathlength effect. In order to demonstrate the pathlength effect, simple computer models of an imaged vessel were created. Additionally, a static MR phantom verified that the derived equation for the projection-plane probability density function (pdf) predicts the projection-plane intensities well (R^2 = 0.96). Finally, examples of projections through in vivo MR angiography and CT angiography data are presented.
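
    The projection itself is a one-liner, which makes the pathlength effect easy to explore numerically; the hedged demo below places a low-contrast ball in Gaussian noise, projects it, and counts the apparent width at a fixed threshold (all values are illustrative, and the output varies with the noise seed).

      import numpy as np

      rng = np.random.default_rng(0)
      vol = rng.normal(0.0, 1.0, (64, 64, 64))     # background noise, sigma = 1
      z, y, x = np.ogrid[:64, :64, :64]
      ball = (z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2 <= 10 ** 2
      vol[ball] += 3.0                             # low-contrast object (~3 sigma)

      mip = vol.max(axis=0)                        # maximum intensity projection
      # Near the rim a ray crosses only a few object voxels, so the noise
      # maximum along the full pathlength can win; at a fixed threshold the
      # object then tends to appear narrower than its true 20-voxel diameter.
      profile = mip[32]                            # central row of the projection
      print((profile > 3.5).sum(), "apparent width vs true diameter 20")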

  12. US-VISIT Identity Matching Algorithm Evaluation Program: ADIS Algorithm Evaluation Project Plan Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grant, C W; Lenderman, J S; Gansemer, J D

    This document is an update to the 'ADIS Algorithm Evaluation Project Plan' specified in the Statement of Work for the US-VISIT Identity Matching Algorithm Evaluation Program, as deliverable II.D.1. The original plan was delivered in August 2010. This document modifies the plan to reflect modified deliverables reflecting delays in obtaining a database refresh. This document describes the revised schedule of the program deliverables. The detailed description of the processes used, the statistical analysis processes and the results of the statistical analysis will be described fully in the program deliverables. The US-VISIT Identity Matching Algorithm Evaluation Program is work performed by Lawrence Livermore National Laboratory (LLNL) under IAA HSHQVT-07-X-00002 P00004 from the Department of Homeland Security (DHS).

  13. Fisher's method of scoring in statistical image reconstruction: comparison of Jacobi and Gauss-Seidel iterative schemes.

    PubMed

    Hudson, H M; Ma, J; Green, P

    1994-01-01

    Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
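
    The difference between the two schemes is only whether freshly updated components are used within a sweep; a generic linear-system sketch (not the tomographic implementation itself):

      import numpy as np

      def jacobi_step(A, b, x):
          """Jacobi: every component is updated from the previous iterate."""
          D = np.diag(A)
          return (b - A @ x + D * x) / D

      def gauss_seidel_step(A, b, x):
          """Gauss-Seidel: each component uses the freshest values in turn."""
          x = x.copy()
          for i in range(len(b)):
              x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
          return x

      # diagonally dominant toy system: both converge, Gauss-Seidel faster
      A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
      b = np.array([1.0, 2.0, 3.0])
      x_j, x_gs = np.zeros(3), np.zeros(3)
      for _ in range(25):
          x_j, x_gs = jacobi_step(A, b, x_j), gauss_seidel_step(A, b, x_gs)
      print(x_j, x_gs, np.linalg.solve(A, b))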

  14. A Novel Latin Hypercube Algorithm via Translational Propagation

    PubMed Central

    Pan, Guang; Ye, Pengcheng

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is directly related to the experimental designs used. Optimal Latin hypercube designs are frequently used and have been shown to have good space-filling and projective properties. However, the high cost of constructing them limits their use. In this paper, a methodology for creating novel Latin hypercube designs via translational propagation and successive local enumeration (TPSLE) is developed without using formal optimization. The TPSLE algorithm is based on the insight that a near-optimal Latin hypercube design can be constructed by using a simple initial block with a few points, generated by the SLE algorithm, as a building block. In effect, the TPSLE algorithm offers a balanced trade-off between efficiency and sampling performance. The proposed algorithm is compared to two existing algorithms and is found to be much more efficient in terms of computation time while having acceptable space-filling and projective properties. PMID:25276844
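
    For orientation, the baseline that TPSLE improves on, a plain random Latin hypercube in the unit cube, can be generated in a few lines (generic sketch, not the TPSLE construction):

      import numpy as np

      def latin_hypercube(n, d, seed=0):
          """Plain random Latin hypercube: one point per row/column stratum."""
          rng = np.random.default_rng(seed)
          # one jittered point per bin along each axis
          u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
          # independent column permutations decouple the dimensions
          return np.column_stack([rng.permutation(u[:, j]) for j in range(d)])

      X = latin_hypercube(8, 2)
      # each column visits each of the 8 equal bins exactly once
      print(np.sort((X * 8).astype(int), axis=0).T)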

  15. Improving image quality in laboratory x-ray phase-contrast imaging

    NASA Astrophysics Data System (ADS)

    De Marco, F.; Marschner, M.; Birnbacher, L.; Viermetz, M.; Noël, P.; Herzen, J.; Pfeiffer, F.

    2017-03-01

    Grating-based X-ray phase-contrast (gbPC) imaging is known to provide significant benefits for biomedical imaging. To investigate these benefits, a high-sensitivity gbPC micro-CT setup for small (≈ 5 cm) biological samples has been constructed. Unfortunately, high differential-phase sensitivity leads to an increased magnitude of data processing artifacts, limiting the quality of tomographic reconstructions. Most importantly, processing of phase-stepping data with incorrect stepping positions can introduce artifacts resembling Moiré fringes to the projections. Additionally, the focal spot size of the X-ray source limits the resolution of tomograms. Here we present a set of algorithms to minimize artifacts, increase resolution, and improve the visual impression of projections and tomograms from the examined setup. We assessed two algorithms for artifact reduction. Firstly, a correction algorithm exploiting correlations between the artifacts and the differential-phase data was developed and tested; artifacts were reliably removed without compromising image data. Secondly, we implemented a new algorithm for flat-field selection, which was shown to exclude flat-fields with strong artifacts. Both procedures successfully improved the image quality of projections and tomograms. Deconvolution of all projections of a CT scan can minimize the blurring introduced by the finite size of the X-ray source focal spot. Application of the Richardson-Lucy deconvolution algorithm to gbPC-CT projections resulted in an improved resolution of phase-contrast tomograms. Additionally, we found that nearest-neighbor interpolation of projections can improve the visual impression of very small features in phase-contrast tomograms. In conclusion, we achieved an increase in image resolution and quality for the investigated setup, which may lead to improved detection of very small sample features, thereby maximizing the setup's utility.
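
    The deconvolution step can be sketched with a plain Richardson-Lucy loop; the PSF standing in for the focal spot, the iteration count, and the use of fftconvolve are assumptions of this illustration, not the setup's actual processing chain.

      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(observed, psf, n_iter=30):
          """Plain Richardson-Lucy deconvolution of a 2-D image (sketch)."""
          psf = psf / psf.sum()
          psf_mirror = psf[::-1, ::-1]
          estimate = np.full(observed.shape, float(observed.mean()))
          for _ in range(n_iter):
              blurred = fftconvolve(estimate, psf, mode="same")
              ratio = observed / np.maximum(blurred, 1e-12)
              # multiplicative update preserves non-negativity
              estimate *= fftconvolve(ratio, psf_mirror, mode="same")
          return estimate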

  16. Incomplete projection reconstruction of computed tomography based on the modified discrete algebraic reconstruction technique

    NASA Astrophysics Data System (ADS)

    Yang, Fuqiang; Zhang, Dinghua; Huang, Kuidong; Gao, Zongzhao; Yang, YaFei

    2018-02-01

    Based on the discrete algebraic reconstruction technique (DART), this study proposes and tests an improved algorithm for incomplete projection data that generates high-quality reconstructed images by reducing artifacts and noise in computed tomography. For the incomplete projections, an augmented Lagrangian method based on compressed sensing is first used in the initial reconstruction for the segmentation step of DART, yielding higher contrast between boundary and non-boundary pixels. Then, a block-matching 3D filtering operator is used to suppress noise and improve the gray-level distribution of the reconstructed image. Finally, simulation studies on a polychromatic spectrum were performed to test the performance of the new algorithm. The results show a significant improvement in the signal-to-noise ratios (SNRs) and average gradients (AGs) of images reconstructed from incomplete data: the SNRs and AGs of images reconstructed by DART-ALBM were on average 30%-40% and 10% higher, respectively, than those of images reconstructed by the DART algorithm. Since the improved DART-ALBM algorithm is more robust in limited-view reconstruction, making image edges clearer and improving the gray-level distribution of non-boundary pixels, it has the potential to improve image quality from incomplete or sparse projections.
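
    The abstract does not spell out its SNR and AG formulas; the sketch below uses common definitions of both metrics on synthetic test data, so treat it as one plausible reading rather than the paper's exact evaluation code:

    ```python
    import numpy as np

    def snr_db(reference, reconstruction):
        """SNR relative to a ground-truth image (one common definition)."""
        noise = reconstruction - reference
        return 10 * np.log10(np.sum(reference**2) / np.sum(noise**2))

    def average_gradient(img):
        """Mean magnitude of the discrete intensity gradient; larger values
        indicate sharper edges."""
        gy, gx = np.gradient(img.astype(float))
        return np.mean(np.sqrt((gx**2 + gy**2) / 2.0))

    phantom = np.zeros((32, 32)); phantom[8:24, 8:24] = 1.0
    noisy = phantom + 0.05 * np.random.default_rng(1).standard_normal(phantom.shape)
    print(snr_db(phantom, noisy), average_gradient(noisy))
    ```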

  17. Model based LV-reconstruction in bi-plane x-ray angiography

    NASA Astrophysics Data System (ADS)

    Backfrieder, Werner; Carpella, Martin; Swoboda, Roland; Steinwender, Clemens; Gabriel, Christian; Leisch, Franz

    2005-04-01

    Interventional x-ray angiography is state of the art in the diagnosis and therapy of severe diseases of the cardiovascular system. Diagnosis is based on contrast-enhanced dynamic projection images of the left ventricle. A new model-based algorithm for three-dimensional reconstruction of the left ventricle from bi-planar angiograms was developed. Parametric superellipses are deformed until their projection profiles optimally fit the measured ventricular projections. Deformation is controlled by a simplex optimization procedure, and the resulting optimized parameter set serves as the initial guess for neighboring slices. A three-dimensional surface model of the ventricle is built from the stacked contours. The accuracy of the algorithm has been tested with mathematical phantom data and clinical data. Results show agreement with the provided projection data, and the high convergence speed makes the algorithm suitable for clinical application. Fully three-dimensional reconstruction of the left ventricle has high potential to improve clinical findings in interventional cardiology.
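
    A toy version of the fitting loop, assuming a superellipse |x/a|^n + |y/b|^n = 1 and matching its chord-width profile to a "measured" one with Nelder-Mead (the simplex method named in the abstract); the profile model and data are illustrative:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    y = np.linspace(-0.99, 0.99, 101)

    def chord_width(params, y):
        """Width of the superellipse |x/a|^n + |y/b|^n = 1 at height y."""
        a, b, n = params
        inside = np.clip(1.0 - np.abs(y / b) ** n, 0.0, None)
        return 2.0 * a * inside ** (1.0 / n)

    true_params = (1.5, 1.0, 2.5)
    measured = chord_width(true_params, y)       # stand-in "angiogram" profile

    def ssd(params):
        if min(params) <= 0:                     # keep parameters physical
            return np.inf
        return np.sum((chord_width(params, y) - measured) ** 2)

    fit = minimize(ssd, x0=[1.0, 1.2, 2.0], method="Nelder-Mead")
    print(fit.x)   # recovers approximately (1.5, 1.0, 2.5)
    ```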

  18. Ultra-high resolution computed tomography imaging

    DOEpatents

    Paulus, Michael J.; Sari-Sarraf, Hamed; Tobin, Jr., Kenneth William; Gleason, Shaun S.; Thomas, Jr., Clarence E.

    2002-01-01

    A method for ultra-high resolution computed tomography imaging, comprising the steps of: focusing a high-energy particle beam, for example x-rays or gamma-rays, onto a target object; acquiring a 2-dimensional projection data set representative of the target object; generating a corrected projection data set by applying a deconvolution algorithm, having an experimentally determined transfer function, to the 2-dimensional data set; storing the corrected projection data set; incrementally rotating the target object through an angle of approximately 180°, and after each incremental rotation, repeating the radiating, acquiring, generating and storing steps; and, after the rotating step, applying a cone-beam algorithm, for example a modified tomographic reconstruction algorithm, to the corrected projection data sets to generate a 3-dimensional image. The size of the spot focus of the beam is reduced to not greater than approximately 1 micron, and even to not greater than approximately 0.5 micron.

  19. A new algorithm for stand table projection models.

    Treesearch

    Quang V. Cao; V. Clark Baldwin

    1999-01-01

    The constrained least squares method is proposed as an algorithm for projecting stand tables through time. This method consists of three steps: (1) predict survival in each diameter class, (2) predict diameter growth, and (3) use the least squares approach to adjust the stand table to satisfy the constraints of future survival, average diameter, and stand basal area....
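
    A hedged sketch of step (3), phrased as an equality-constrained least squares adjustment solved via its KKT system; the constraint rows (total trees and stand basal area) and all numbers are invented for illustration:

    ```python
    import numpy as np

    # Adjust a predicted stand table x0 (trees per diameter class) as little
    # as possible while forcing it to honour aggregate constraints A x = b.
    x0 = np.array([120.0, 90.0, 60.0, 30.0])      # predicted trees/ha by class
    dbh = np.array([10.0, 20.0, 30.0, 40.0])      # class midpoints (cm)
    ba_per_tree = np.pi * (dbh / 200.0) ** 2      # basal area (m^2) per tree

    A = np.vstack([np.ones(4), ba_per_tree])      # constraint rows
    b = np.array([295.0, 14.0])                   # target trees/ha and BA (m^2/ha)

    # KKT system for: minimize ||x - x0||^2 subject to A x = b
    n, m = len(x0), len(b)
    K = np.block([[np.eye(n), A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([x0, b])
    x = np.linalg.solve(K, rhs)[:n]

    print(x, A @ x)   # adjusted table satisfies both constraints
    ```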

  20. Ion-driven deuterium permeation through tungsten at high temperatures

    NASA Astrophysics Data System (ADS)

    Gasparyan, Yu. M.; Golubeva, A. V.; Mayer, M.; Pisarev, A. A.; Roth, J.

    2009-06-01

    The ion-driven permeation (IDP) through 50 μm thick pure tungsten foils was measured in the temperature range of 823-923 K during irradiation by a 200 eV/D⁺ ion beam with a flux of 10¹⁷-10¹⁸ D/(m²s). Gas-driven permeation (GDP) from the deuterium background gas was observed as well. Calculations using both the analytical formula for the diffusion-limited regime (DLR) and the TMAP 7 code gave good agreement with the experimental data. Defects with a detrapping energy of (2.05 ± 0.15) eV were found to limit the permeation lag time in our experimental conditions.

  1. A statistical method (cross-validation) for bone loss region detection after spaceflight

    PubMed Central

    Zhao, Qian; Li, Wenjun; Li, Caixia; Chu, Philip W.; Kornak, John; Lang, Thomas F.

    2010-01-01

    Astronauts experience bone loss after long spaceflight missions. Identifying the specific regions that undergo the greatest losses (e.g., the proximal femur) could reveal information about the processes of bone loss in disuse and disease. Detecting such regions, however, remains an open problem. This paper focuses on statistical methods to detect such regions. We perform statistical parametric mapping to obtain t-maps of changes in images, and propose a new cross-validation method to select an optimum suprathreshold for forming clusters of pixels. Once these candidate clusters are formed, we use permutation testing of longitudinal labels to derive significant changes. PMID:20632144

  2. Development of a new metal artifact reduction algorithm by using an edge preserving method for CBCT imaging

    NASA Astrophysics Data System (ADS)

    Kim, Juhye; Nam, Haewon; Lee, Rena

    2015-07-01

    In CT (computed tomography) images, metal materials such as dental implants or surgical clips can cause metal artifacts and degrade image quality. In severe cases, this may lead to misdiagnosis. In this research, we developed a new MAR (metal artifact reduction) algorithm by using an edge-preserving filter and the MATLAB program (MathWorks, version R2012a). The proposed algorithm consists of six steps: image reconstruction from projection data, metal segmentation, forward projection, interpolation, application of an edge-preserving smoothing filter, and final image reconstruction. For an evaluation of the proposed algorithm, we obtained both numerical simulation data and data for a Rando phantom. In the numerical simulation data, four metal regions were added to the Shepp-Logan phantom to create metal artifacts. The projection data of the metal-inserted Rando phantom were obtained by using a prototype CBCT scanner manufactured by the medical engineering and medical physics (MEMP) laboratory research group in medical science at Ewha Womans University. The proposed algorithm was then applied, and the results were compared with the original image (with metal artifacts, without correction) and with a corrected image based on linear interpolation. Both visual and quantitative evaluations were performed. Compared with the original image with metal artifacts and with the image corrected by linear interpolation, both the numerical and the experimental phantom data demonstrated that the proposed algorithm reduced the metal artifact. In conclusion, the evaluation in this research showed that the proposed algorithm outperformed the interpolation-based MAR algorithm. If an optimization and a stability evaluation of the proposed algorithm can be performed, the developed algorithm is expected to be an effective tool for eliminating metal artifacts even in commercial CT systems.
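
    The linear-interpolation baseline the paper compares against can be sketched directly on the sinogram: samples flagged as metal trace are replaced by interpolation from unaffected detector bins. The sinogram and mask below are synthetic stand-ins:

    ```python
    import numpy as np

    def interpolate_metal_trace(sinogram, metal_mask):
        """For each projection angle, replace metal-trace detector samples by
        1D linear interpolation from their unaffected neighbours."""
        corrected = sinogram.copy()
        bins = np.arange(sinogram.shape[1])
        for row, m in zip(corrected, metal_mask):   # rows are views: edits stick
            if m.any():
                row[m] = np.interp(bins[m], bins[~m], row[~m])
        return corrected

    angles, dets = 180, 128
    sino = np.random.default_rng(2).random((angles, dets))
    mask = np.zeros((angles, dets), dtype=bool)
    mask[:, 60:68] = True                           # pretend metal shadow
    fixed = interpolate_metal_trace(sino, mask)
    ```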

  3. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR values were found to increase with decreasing RMSE values of projection angular gaps with strong correlations (r ≈ −0.7) regardless of the reconstruction algorithm used. Conclusions: Based on the authors’ results, displacement-based binning methods, better reconstruction algorithms, and the acquisition of even projection angular views are the most important factors to consider for improving thoracic 4D-CBCT image quality. In view of the practical issues with displacement-based binning and the fact that projection angular spacing is not currently directly controllable, development of better reconstruction algorithms represents the most effective strategy for improving image quality in thoracic 4D-CBCT for IGRT applications at the current stage.

  4. Development of a Tool for an Efficient Calibration of CORSIM Models

    DOT National Transportation Integrated Search

    2014-08-01

    This project proposes a Memetic Algorithm (MA) for the calibration of microscopic traffic flow simulation models. The proposed MA includes a combination of genetic and simulated annealing algorithms. The genetic algorithm performs the exploration of ...

  5. Axial 3D region of interest reconstruction using weighted cone beam BPF/DBPF algorithm cascaded with adequately oriented orthogonal butterfly filtering

    NASA Astrophysics Data System (ADS)

    Tang, Shaojie; Tang, Xiangyang

    2016-03-01

    Axial cone-beam (CB) computed tomography (CT) reconstruction is still the most desirable in clinical applications. As potential candidates with analytic form for the task, the backprojection-filtration (BPF) and derivative backprojection filtered (DBPF) algorithms, which share Hilbert filtering as their common algorithmic feature, were originally derived for exact helical and axial reconstruction from CB and fan-beam projection data, respectively. These two algorithms have been heuristically extended for axial CB reconstruction via the adoption of virtual PI-line segments. Unfortunately, streak artifacts are induced along the Hilbert filtering direction, since these algorithms are no longer accurate on the virtual PI-line segments. We have proposed to cascade the extended BPF/DBPF algorithm with orthogonal butterfly filtering for image reconstruction (namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering), in which the orientation-specific artifacts caused by the post-BP Hilbert transform can be eliminated, at the possible expense of losing the BPF/DBPF's capability of dealing with projection data truncation. Our preliminary results have shown that this is not the case in practice. Hence, in this work, we carry out an algorithmic analysis and experimental study to investigate the performance of the axial CB-BPF/DBPF cascaded with adequately oriented orthogonal butterfly filtering for three-dimensional (3D) reconstruction in a region of interest (ROI).

  6. Viewing-zone control of integral imaging display using a directional projection and elemental image resizing method.

    PubMed

    Alam, Md Ashraful; Piao, Mei-Lan; Bang, Le Thanh; Kim, Nam

    2013-10-01

    Viewing-zone control of integral imaging (II) displays using a directional projection and elemental image (EI) resizing method is proposed. Directional projection of EIs with the same size as the microlens pitch causes an EI mismatch at the EI plane. In the proposed method, EIs are generated computationally using a newly introduced algorithm, the directional elemental image generation and resizing algorithm, which considers the directional projection geometry of each pixel and resizes the EIs to prevent the mismatch. The generated EIs are projected as a collimated beam with a predefined directional angle, either horizontally or vertically. The proposed II display system allows reconstruction of a 3D image within a predefined viewing zone that is determined by the directional projection angle.

  7. Optimizing 4DCBCT projection allocation to respiratory bins.

    PubMed

    O'Brien, Ricky T; Kipritidis, John; Shieh, Chun-Chien; Keall, Paul J

    2014-10-07

    4D cone beam computed tomography (4DCBCT) is an emerging image guidance strategy used in radiotherapy where projections acquired during a scan are sorted into respiratory bins based on the respiratory phase or displacement. 4DCBCT reduces the motion blur caused by respiratory motion but increases streaking artefacts due to projection under-sampling as a result of the irregular nature of patient breathing and the binning algorithms used. For displacement binning the streak artefacts are so severe that displacement binning is rarely used clinically. The purpose of this study is to investigate if sharing projections between respiratory bins and adjusting the location of respiratory bins in an optimal manner can reduce or eliminate streak artefacts in 4DCBCT images. We introduce a mathematical optimization framework and a heuristic solution method, which we will call the optimized projection allocation algorithm, to determine where to position the respiratory bins and which projections to source from neighbouring respiratory bins. Five 4DCBCT datasets from three patients were used to reconstruct 4DCBCT images. Projections were sorted into respiratory bins using equispaced, equal density and optimized projection allocation. The standard deviation of the angular separation between projections was used to assess streaking and the consistency of the segmented volume of a fiducial gold marker was used to assess motion blur. The standard deviation of the angular separation between projections using displacement binning and optimized projection allocation was 30%-50% smaller than conventional phase based binning and 59%-76% smaller than conventional displacement binning indicating more uniformly spaced projections and fewer streaking artefacts. The standard deviation in the marker volume was 20%-90% smaller when using optimized projection allocation than using conventional phase based binning suggesting more uniform marker segmentation and less motion blur. Images reconstructed using displacement binning and the optimized projection allocation algorithm were clearer, contained visibly fewer streak artefacts and produced more consistent marker segmentation than those reconstructed with either equispaced or equal-density binning. The optimized projection allocation algorithm significantly improves image quality in 4DCBCT images and provides, for the first time, a method to consistently generate high quality displacement binned 4DCBCT images in clinical applications.

  8. Local ROI Reconstruction via Generalized FBP and BPF Algorithms along More Flexible Curves.

    PubMed

    Yu, Hengyong; Ye, Yangbo; Zhao, Shiying; Wang, Ge

    2006-01-01

    We study the local region-of-interest (ROI) reconstruction problem, also referred to as the local CT problem. Our scheme includes two steps: (a) the local truncated normal-dose projections are extended to a global dataset by combining a few global low-dose projections; (b) the ROI is reconstructed by either the generalized filtered backprojection (FBP) or backprojection-filtration (BPF) algorithm. The simulation results show that both the FBP and BPF algorithms can reconstruct satisfactory results, with image quality in the ROI comparable to that of the corresponding global CT reconstruction.

  9. Software for Project-Based Learning of Robot Motion Planning

    ERIC Educational Resources Information Center

    Moll, Mark; Bordeaux, Janice; Kavraki, Lydia E.

    2013-01-01

    Motion planning is a core problem in robotics concerned with finding feasible paths for a given robot. Motion planning algorithms perform a search in the high-dimensional continuous space of robot configurations and exemplify many of the core algorithmic concepts of search algorithms and associated data structures. Motion planning algorithms can…

  10. A projected preconditioned conjugate gradient algorithm for computing many extreme eigenpairs of a Hermitian matrix [A projected preconditioned conjugate gradient algorithm for computing a large eigenspace of a Hermitian matrix

    DOE PAGES

    Vecharynski, Eugene; Yang, Chao; Pask, John E.

    2015-02-25

    Here, we present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh–Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.
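
    The paper's own solver is not in standard libraries, but one of the baselines it names, LOBPCG, ships with SciPy and exercises the same task; a small usage sketch on a 1D Laplacian test matrix:

    ```python
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import lobpcg

    # Block iteration for the algebraically smallest eigenpairs of a large
    # sparse Hermitian matrix; here a 1D Laplacian serves as test data.
    n, k = 500, 10
    A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
    X = np.random.default_rng(0).standard_normal((n, k))   # initial block

    eigvals, eigvecs = lobpcg(A, X, largest=False, tol=1e-8, maxiter=500)
    print(np.sort(eigvals))   # approximates the 10 smallest eigenvalues
    ```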

  11. Research on cross - Project software defect prediction based on transfer learning

    NASA Astrophysics Data System (ADS)

    Chen, Ya; Ding, Xiaoming

    2018-04-01

    To address two challenges in cross-project software defect prediction, namely the distribution differences between the source-project and target-project datasets and the class imbalance in the datasets, we propose a cross-project software defect prediction method based on transfer learning, named NTrA. Firstly, the class imbalance of the source-project data is resolved using the Augmented Neighborhood Cleaning Algorithm. Secondly, the data gravitation method is used to assign different weights on the basis of the attribute similarity of the source-project and target-project data. Finally, a defect prediction model is constructed using the TrAdaBoost algorithm. Experiments were conducted using data from NASA and SOFTLAB, taken from the published PROMISE dataset. The results show that the method achieved good values of recall and F-measure, and good prediction results overall.

  12. Common-mask guided image reconstruction (c-MGIR) for enhanced 4D cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Park, Justin C.; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Li, Jonathan G.; Liu, Chihray; Lu, Bo

    2015-12-01

    Compared to 3D cone-beam computed tomography (3D CBCT), the image quality of commercially available four-dimensional (4D) CBCT is severely impaired due to the insufficient amount of projection data available for each phase. Since the traditional Feldkamp-Davis-Kress (FDK)-based algorithm is infeasible for reconstructing high-quality 4D CBCT images with limited projections, investigators have developed several compressed-sensing (CS) based algorithms to improve image quality. The aim of this study is to develop a novel algorithm which can provide better image quality than the FDK and other CS-based algorithms with limited projections. We named this algorithm ‘the common-mask guided image reconstruction’ (c-MGIR). In c-MGIR, the unknown CBCT volume is mathematically modeled as a combination of phase-specific motion vectors and phase-independent static vectors. The common-mask matrix, which is the key concept behind the c-MGIR algorithm, separates the common static part across all phase images from the possible moving part in each phase image. The moving part and the static part of the volumes are then alternately updated by solving two sub-minimization problems iteratively. As the novel mathematical transformation allows the static volume and moving volumes to be updated (during each iteration) with global projections and the well-solved static volume, respectively, the algorithm is able to reduce noise and under-sampling artifacts (an issue faced by other algorithms) to the maximum extent. To evaluate the performance of the proposed c-MGIR, we utilized imaging data from both numerical phantoms and a lung cancer patient. The quality of the images reconstructed with c-MGIR was compared with that of (1) the standard FDK algorithm, (2) a conventional total variation (CTV) based algorithm, (3) the prior image constrained compressed sensing (PICCS) algorithm, and (4) the motion-map constrained image reconstruction (MCIR) algorithm. To improve the efficiency of the algorithm, the code was implemented on a graphics processing unit for parallel processing. The root mean square error (RMSE) between the ground truth and the reconstructed volumes of the numerical phantom decreased in the order FDK, CTV, PICCS, MCIR, and c-MGIR for all phases. Specifically, the means and standard deviations of the RMSE of FDK, CTV, PICCS, MCIR, and c-MGIR over all phases were 42.64 ± 6.5%, 3.63 ± 0.83%, 1.31 ± 0.09%, 0.86 ± 0.11%, and 0.52 ± 0.02%, respectively. The image quality of the patient case also indicated the superiority of c-MGIR over the other algorithms. The results indicated that clinically viable 4D CBCT images can be reconstructed while requiring no more projection data than a typical clinical 3D CBCT scan. This makes c-MGIR a potential online reconstruction algorithm for 4D CBCT, which can provide much better image quality than other available algorithms while requiring less dose and potentially less scanning time.

  13. Common-mask guided image reconstruction (c-MGIR) for enhanced 4D cone-beam computed tomography.

    PubMed

    Park, Justin C; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Li, Jonathan G; Liu, Chihray; Lu, Bo

    2015-12-07

    Compared to 3D cone-beam computed tomography (3D CBCT), the image quality of commercially available four-dimensional (4D) CBCT is severely impaired due to the insufficient amount of projection data available for each phase. Since the traditional Feldkamp-Davis-Kress (FDK)-based algorithm is infeasible for reconstructing high-quality 4D CBCT images with limited projections, investigators have developed several compressed-sensing (CS) based algorithms to improve image quality. The aim of this study is to develop a novel algorithm which can provide better image quality than the FDK and other CS-based algorithms with limited projections. We named this algorithm 'the common-mask guided image reconstruction' (c-MGIR). In c-MGIR, the unknown CBCT volume is mathematically modeled as a combination of phase-specific motion vectors and phase-independent static vectors. The common-mask matrix, which is the key concept behind the c-MGIR algorithm, separates the common static part across all phase images from the possible moving part in each phase image. The moving part and the static part of the volumes are then alternately updated by solving two sub-minimization problems iteratively. As the novel mathematical transformation allows the static volume and moving volumes to be updated (during each iteration) with global projections and the well-solved static volume, respectively, the algorithm is able to reduce noise and under-sampling artifacts (an issue faced by other algorithms) to the maximum extent. To evaluate the performance of the proposed c-MGIR, we utilized imaging data from both numerical phantoms and a lung cancer patient. The quality of the images reconstructed with c-MGIR was compared with that of (1) the standard FDK algorithm, (2) a conventional total variation (CTV) based algorithm, (3) the prior image constrained compressed sensing (PICCS) algorithm, and (4) the motion-map constrained image reconstruction (MCIR) algorithm. To improve the efficiency of the algorithm, the code was implemented on a graphics processing unit for parallel processing. The root mean square error (RMSE) between the ground truth and the reconstructed volumes of the numerical phantom decreased in the order FDK, CTV, PICCS, MCIR, and c-MGIR for all phases. Specifically, the means and standard deviations of the RMSE of FDK, CTV, PICCS, MCIR, and c-MGIR over all phases were 42.64 ± 6.5%, 3.63 ± 0.83%, 1.31 ± 0.09%, 0.86 ± 0.11%, and 0.52 ± 0.02%, respectively. The image quality of the patient case also indicated the superiority of c-MGIR over the other algorithms. The results indicated that clinically viable 4D CBCT images can be reconstructed while requiring no more projection data than a typical clinical 3D CBCT scan. This makes c-MGIR a potential online reconstruction algorithm for 4D CBCT, which can provide much better image quality than other available algorithms while requiring less dose and potentially less scanning time.

  14. SU-F-J-198: A Cross-Platform Adaptation of An a Priori Scatter Correction Algorithm for Cone-Beam Projections to Enable Image- and Dose-Guided Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersen, A; Casares-Magaz, O; Elstroem, U

    Purpose: Cone-beam CT (CBCT) imaging may enable image- and dose-guided proton therapy but is challenged by image artefacts. The aim of this study was to demonstrate the general applicability of a previously developed a priori scatter correction algorithm to allow CBCT-based proton dose calculations. Methods: The a priori scatter correction algorithm used a plan CT (pCT) and raw cone-beam projections acquired with the Varian On-Board Imager. The projections were initially corrected for bow-tie filtering and beam hardening and subsequently reconstructed using the Feldkamp-Davis-Kress algorithm (rawCBCT). The rawCBCTs were intensity-normalised before rigid and deformable registrations were applied to align the pCTs to the rawCBCTs. The resulting images were forward-projected onto the same angles as the raw CB projections. The two sets of projections were subtracted from each other, Gaussian and median filtered, then subtracted from the raw projections, and finally reconstructed to the scatter-corrected CBCTs. For evaluation, water equivalent path length (WEPL) maps (from anterior to posterior) were calculated on different reconstructions of three data sets (CB projections and pCT) of three parts of an Alderson phantom. Finally, single-beam spot-scanning proton plans (0-360 deg gantry angle in steps of 5 deg; using PyTRiP) treating a 5 cm central spherical target in the pCT were re-calculated on scatter-corrected CBCTs with identical targets. Results: The scatter-corrected CBCTs resulted in sub-mm mean WEPL differences relative to the rigid registration of the pCT for all three data sets. These differences were considerably smaller than what was achieved with the regular Varian CBCT reconstruction algorithm (1-9 mm mean WEPL differences). Target coverage in the re-calculated plans was generally improved using the scatter-corrected CBCTs compared to the Varian CBCT reconstruction. Conclusion: We have demonstrated the general applicability of a priori CBCT scatter correction, potentially opening for CBCT-based image/dose-guided proton therapy, including adaptive strategies. Research agreement with Varian Medical Systems, not connected to the present project.

  15. Reconstruction of brachytherapy seed positions and orientations from cone-beam CT x-ray projections via a novel iterative forward projection matching method.

    PubMed

    Pokhrel, Damodar; Murphy, Martin J; Todor, Dorin A; Weiss, Elisabeth; Williamson, Jeffrey F

    2011-01-01

    To generalize and experimentally validate a novel algorithm for reconstructing the 3D pose (position and orientation) of implanted brachytherapy seeds from a set of a few measured 2D cone-beam CT (CBCT) x-ray projections. The iterative forward projection matching (IFPM) algorithm was generalized to reconstruct the 3D pose, as well as the centroid, of brachytherapy seeds from three to ten measured 2D projections. The gIFPM algorithm finds the set of seed poses that minimizes the sum-of-squared-difference of the pixel-by-pixel intensities between computed and measured autosegmented radiographic projections of the implant. Numerical simulations of clinically realistic brachytherapy seed configurations were performed to demonstrate the proof of principle. An in-house machined brachytherapy phantom, which supports precise specification of seed position and orientation at known values for simulated implant geometries, was used to experimentally validate this algorithm. The phantom was scanned on an ACUITY CBCT digital simulator, acquiring a full set of 660 sinogram projections. Three to ten x-ray images were selected from the full set of CBCT sinogram projections and postprocessed to create binary seed-only images. In the numerical simulations, seed reconstruction position and orientation errors were approximately 0.6 mm and 5 degrees, respectively. The physical phantom measurements demonstrated an absolute positional accuracy of (0.78 ± 0.57) mm or less. The θ and φ angle errors were found to be (5.7 ± 4.9) degrees and (6.0 ± 4.1) degrees, respectively, or less when using three projections; with six projections, results were slightly better. The mean registration error was better than 1 mm/6 degrees compared to the measured seed projections. Each test trial converged in 10-20 iterations with a computation time of 12-18 min/iteration on a 1 GHz processor. This work describes a novel, accurate, and completely automatic method for reconstructing seed orientations, as well as centroids, from a small number of radiographic projections, in support of intraoperative planning and adaptive replanning. Unlike standard back-projection methods, gIFPM avoids the need to match corresponding seed images on the projections. This algorithm also successfully reconstructs overlapping clustered and highly migrated seeds in the implant. The accuracy of better than 1 mm and 6 degrees demonstrates that gIFPM has the potential to support 2D Task Group 43 calculations in clinical practice.

  16. Reconstruction of brachytherapy seed positions and orientations from cone-beam CT x-ray projections via a novel iterative forward projection matching method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pokhrel, Damodar; Murphy, Martin J.; Todor, Dorin A.

    2011-01-15

    Purpose: To generalize and experimentally validate a novel algorithm for reconstructing the 3D pose (position and orientation) of implanted brachytherapy seeds from a set of a few measured 2D cone-beam CT (CBCT) x-ray projections. Methods: The iterative forward projection matching (IFPM) algorithm was generalized to reconstruct the 3D pose, as well as the centroid, of brachytherapy seeds from three to ten measured 2D projections. The gIFPM algorithm finds the set of seed poses that minimizes the sum-of-squared-difference of the pixel-by-pixel intensities between computed and measured autosegmented radiographic projections of the implant. Numerical simulations of clinically realistic brachytherapy seed configurations were performed to demonstrate the proof of principle. An in-house machined brachytherapy phantom, which supports precise specification of seed position and orientation at known values for simulated implant geometries, was used to experimentally validate this algorithm. The phantom was scanned on an ACUITY CBCT digital simulator over a full 660 sinogram projections. Three to ten x-ray images were selected from the full set of CBCT sinogram projections and postprocessed to create binary seed-only images. Results: In the numerical simulations, seed reconstruction position and orientation errors were approximately 0.6 mm and 5 deg., respectively. The physical phantom measurements demonstrated an absolute positional accuracy of (0.78 ± 0.57) mm or less. The θ and φ angle errors were found to be (5.7 ± 4.9) deg. and (6.0 ± 4.1) deg., respectively, or less when using three projections; with six projections, results were slightly better. The mean registration error was better than 1 mm/6 deg. compared to the measured seed projections. Each test trial converged in 10-20 iterations with computation time of 12-18 min/iteration on a 1 GHz processor. Conclusions: This work describes a novel, accurate, and completely automatic method for reconstructing seed orientations, as well as centroids, from a small number of radiographic projections, in support of intraoperative planning and adaptive replanning. Unlike standard back-projection methods, gIFPM avoids the need to match corresponding seed images on the projections. This algorithm also successfully reconstructs overlapping clustered and highly migrated seeds in the implant. The accuracy of better than 1 mm and 6 deg. demonstrates that gIFPM has the potential to support 2D Task Group 43 calculations in clinical practice.

  17. Beamspace dual signal space projection (bDSSP): a method for selective detection of deep sources in MEG measurements.

    PubMed

    Sekihara, Kensuke; Adachi, Yoshiaki; Kubota, Hiroshi K; Cai, Chang; Nagarajan, Srikantan S

    2018-06-01

    Magnetoencephalography (MEG) has a well-recognized weakness at detecting deeper brain activities. This paper proposes a novel algorithm for selective detection of deep sources by suppressing interference signals from superficial sources in MEG measurements. The proposed algorithm combines the beamspace preprocessing method with the dual signal space projection (DSSP) interference suppression method. A prerequisite of the proposed algorithm is prior knowledge of the location of the deep sources. The proposed algorithm first derives the basis vectors that span a local region just covering the locations of the deep sources. It then estimates the time-domain signal subspace of the superficial sources by using the projector composed of these basis vectors. Signals from the deep sources are extracted by projecting the row space of the data matrix onto the direction orthogonal to the signal subspace of the superficial sources. Compared with the previously proposed beamspace signal space separation (SSS) method, the proposed algorithm is capable of suppressing much stronger interference from superficial sources. This capability is demonstrated in our computer simulation as well as experiments using phantom data. The proposed bDSSP algorithm can be a powerful tool in studies of physiological functions of midbrain and deep brain structures.
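
    The final projection step common to DSSP-style methods is simple linear algebra once the interference subspace is known; in this toy sketch the true simulated temporal subspace of the "superficial" sources replaces the beamspace estimation that constitutes the method's actual contribution:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_channels, n_samples, r = 64, 1000, 5

    S = rng.standard_normal((r, n_samples))        # superficial time courses
    mix = rng.standard_normal((n_channels, r))     # their sensor topographies
    deep = 0.1 * rng.standard_normal((n_channels, n_samples))
    data = mix @ S + deep

    Q, _ = np.linalg.qr(S.T)                       # orthonormal temporal basis
    cleaned = data - data @ Q @ Q.T                # project rows off that subspace

    print(np.linalg.norm(mix @ S - (mix @ S) @ Q @ Q.T))  # ~0: interference gone
    ```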

  18. Beamspace dual signal space projection (bDSSP): a method for selective detection of deep sources in MEG measurements

    NASA Astrophysics Data System (ADS)

    Sekihara, Kensuke; Adachi, Yoshiaki; Kubota, Hiroshi K.; Cai, Chang; Nagarajan, Srikantan S.

    2018-06-01

    Objective. Magnetoencephalography (MEG) has a well-recognized weakness at detecting deeper brain activities. This paper proposes a novel algorithm for selective detection of deep sources by suppressing interference signals from superficial sources in MEG measurements. Approach. The proposed algorithm combines the beamspace preprocessing method with the dual signal space projection (DSSP) interference suppression method. A prerequisite of the proposed algorithm is prior knowledge of the location of the deep sources. The proposed algorithm first derives the basis vectors that span a local region just covering the locations of the deep sources. It then estimates the time-domain signal subspace of the superficial sources by using the projector composed of these basis vectors. Signals from the deep sources are extracted by projecting the row space of the data matrix onto the direction orthogonal to the signal subspace of the superficial sources. Main results. Compared with the previously proposed beamspace signal space separation (SSS) method, the proposed algorithm is capable of suppressing much stronger interference from superficial sources. This capability is demonstrated in our computer simulation as well as experiments using phantom data. Significance. The proposed bDSSP algorithm can be a powerful tool in studies of physiological functions of midbrain and deep brain structures.

  19. Short-Scan Fan-Beam Algorithms for CT

    NASA Astrophysics Data System (ADS)

    Naparstek, Abraham

    1980-06-01

    Several short-scan reconstruction algorithms of the convolution type for fan-beam projections are presented and discussed. Their derivation from new, exact integral representation formulas is outlined, and the performance of some of these algorithms is demonstrated with the aid of simulation results.

  20. Landscape Analysis and Algorithm Development for Plateau Plagued Search Spaces

    DTIC Science & Technology

    2011-02-28

    Final Report for AFOSR #FA9550-08-1-0422, Landscape Analysis and Algorithm Development for Plateau Plagued Search Spaces, August 1, 2008 to November 30 ... focused on developing high-level general-purpose algorithms, such as Tabu Search and Genetic Algorithms. However, understanding of when and why these ... algorithms perform well still lags. Our project extended the theory of certain combinatorial optimization problems to develop analytical

  1. An improved non-uniformity correction algorithm and its hardware implementation on FPGA

    NASA Astrophysics Data System (ADS)

    Rong, Shenghui; Zhou, Huixin; Wen, Zhigang; Qin, Hanlin; Qian, Kun; Cheng, Kuanhong

    2017-09-01

    The non-uniformity of infrared focal plane arrays (IRFPA) severely degrades infrared image quality, so an effective non-uniformity correction (NUC) algorithm is necessary for an IRFPA imaging and application system. However, traditional scene-based NUC algorithms suffer from image blurring and artificial ghosting, and few effective hardware platforms have been proposed to implement them. This paper therefore proposes an improved neural-network-based NUC algorithm built on a guided image filter and a projection-based motion detection algorithm. First, the guided image filter is used to obtain an accurate desired image and thereby reduce artificial ghosting. Then, a projection-based motion detection algorithm determines whether the correction coefficients should be updated, which overcomes the problem of image blurring. Finally, an FPGA-based hardware design is introduced to realize the proposed NUC algorithm. Real and simulated infrared image sequences were used to verify the performance of the proposed algorithm. Experimental results indicate that the proposed NUC algorithm can effectively eliminate fixed-pattern noise with less image blurring and artificial ghosting. The proposed hardware design uses fewer logic elements in the FPGA and fewer clock cycles to process one frame of the image.
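
    A stripped-down sketch of scene-based neural-network NUC with LMS updates of per-pixel gain and offset; a box filter stands in for the paper's guided filter and the motion-detection gate is omitted, so this shows the update rule only:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def nuc_update(frame, gain, offset, lr=0.05):
        """One scene-based NUC step: nudge per-pixel gain/offset toward a
        spatially smoothed 'desired' image (LMS rule; arrays updated in place)."""
        corrected = gain * frame + offset
        desired = uniform_filter(corrected, size=5)   # guided-filter stand-in
        error = desired - corrected
        gain += lr * error * frame
        offset += lr * error
        return corrected

    rng = np.random.default_rng(4)
    gain = np.ones((64, 64)); offset = np.zeros((64, 64))
    fpn_gain = 1 + 0.1 * rng.standard_normal((64, 64))   # simulated fixed-pattern noise
    for _ in range(200):
        scene = uniform_filter(rng.random((64, 64)), size=3)  # changing scene
        out = nuc_update(fpn_gain * scene, gain, offset)      # out: corrected frame
    ```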

  2. Optimal and adaptive methods of processing hydroacoustic signals (review)

    NASA Astrophysics Data System (ADS)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

    Different methods of optimal and adaptive processing of hydroacoustic signals under multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive algorithms (Capon, MUSIC, and Johnson) and of "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented, and a mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed, which estimates the background using median filtering or the method of bilateral spatial contrast.
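
    As a concrete reference point for the first family of methods mentioned, here is a minimal Capon (MVDR) spatial spectrum for a linear equispaced array, with made-up source directions and diagonal loading added for numerical stability:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    m, snapshots, d = 16, 400, 0.5            # sensors, samples, spacing (wavelengths)

    def steering(theta_deg):
        k = 2 * np.pi * d * np.sin(np.radians(theta_deg))
        return np.exp(1j * k * np.arange(m))

    # two plane-wave signals plus complex noise
    X = (steering(-20)[:, None] * rng.standard_normal(snapshots)
         + steering(35)[:, None] * rng.standard_normal(snapshots)
         + 0.1 * (rng.standard_normal((m, snapshots))
                  + 1j * rng.standard_normal((m, snapshots))))

    R = X @ X.conj().T / snapshots
    Rinv = np.linalg.inv(R + 1e-6 * np.eye(m))  # diagonal loading

    angles = np.linspace(-90, 90, 361)
    p_capon = [1 / np.real(steering(t).conj() @ Rinv @ steering(t)) for t in angles]
    # peaks of p_capon appear near -20 and 35 degrees
    ```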

  3. A Unified Satellite-Observation Polar Stratospheric Cloud (PSC) Database for Long-Term Climate-Change Studies

    NASA Technical Reports Server (NTRS)

    Fromm, Michael; Pitts, Michael; Alfred, Jerome

    2000-01-01

    This report summarizes the project team's activity and accomplishments during the period 12 February 1999 - 12 February 2000. The primary objective of this project was to create and test a generic algorithm for detecting polar stratospheric clouds (PSCs), an algorithm that would permit creation of a unified, long-term PSC database from a variety of solar occultation instruments that measure aerosol extinction near 1000 nm. The second objective was to make a database of PSC observations and certain relevant related datasets. In this report we describe the algorithm, the data we are making available, and user access options. The remainder of this document provides the details of the algorithm and the database offering.

  4. ASR-9 processor augmentation card (9-PAC) phase II scan-scan correlator algorithms

    DOT National Transportation Integrated Search

    2001-04-26

    The report documents the scan-scan correlator (tracker) algorithm developed for Phase II of the ASR-9 Processor Augmentation Card (9-PAC) project. The improved correlation and tracking algorithms in 9-PAC Phase II decrease the incidence of false-alar...

  5. Enhanced Automated Guidance System for Horizontal Auger Boring Based on Image Processing

    PubMed Central

    Wu, Lingling; Wen, Guojun; Wang, Yudan; Huang, Lei; Zhou, Jiang

    2018-01-01

    Horizontal auger boring (HAB) is a widely used trenchless technology for the high-accuracy installation of gravity or pressure pipelines on line and grade. Differing from other pipeline installations, HAB requires a more precise and automated guidance system for use in a practical project. This paper proposes an economical and enhanced automated optical guidance system based on optimization research of a light-emitting diode (LED) light target and five automated image-processing bore-path deviation algorithms. The LED target was optimized for many qualities, including light color, filter plate color, luminous intensity, and LED layout. The image preprocessing algorithm, feature extraction algorithm, angle measurement algorithm, deflection detection algorithm, and auto-focus algorithm, compiled in MATLAB, are used to automate image processing for deflection computing and judging. After multiple indoor experiments, this guidance system was applied in a hot-water pipeline installation project, with accuracy controlled within 2 mm over a 48-m distance, providing accurate line and grade controls and verifying the feasibility and reliability of the guidance system. PMID:29462855

  6. A New Pivoting and Iterative Text Detection Algorithm for Biomedical Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Songhua; Krauthammer, Prof. Michael

    2010-01-01

    There is interest in expanding the reach of literature mining to include the analysis of biomedical images, which often contain a paper's key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating the performance on a set of manually labeled random biomedical images, and compare the performance against other state-of-the-art text detection algorithms. We demonstrate that our projection histogram-based text detection approach is well suited for text detection in biomedical images, and that the iterative application of the algorithm boosts performance to an F score of 0.60. We provide a C++ implementation of our algorithm freely available for academic use.
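
    The core of a projection-histogram text detector fits in a few lines; this toy pass locates a single synthetic text line by thresholding the row-ink histogram and then the column histogram inside the detected band (the paper applies such passes iteratively on real figures):

    ```python
    import numpy as np

    img = np.ones((60, 120))                    # white page
    img[10:18, 15:80] = 0.0                     # a dark "text line"
    ink = img < 0.5                             # binary ink mask

    rows = ink.sum(axis=1)                      # horizontal projection histogram
    bands = np.flatnonzero(rows > 0.05 * ink.shape[1])
    top, bottom = bands.min(), bands.max() + 1  # vertical extent of the text band

    cols = ink[top:bottom].sum(axis=0)          # vertical projection inside band
    runs = np.flatnonzero(cols > 0)
    left, right = runs.min(), runs.max() + 1
    print((top, bottom, left, right))           # -> (10, 18, 15, 80)
    ```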

  7. Enhanced Automated Guidance System for Horizontal Auger Boring Based on Image Processing.

    PubMed

    Wu, Lingling; Wen, Guojun; Wang, Yudan; Huang, Lei; Zhou, Jiang

    2018-02-15

    Horizontal auger boring (HAB) is a widely used trenchless technology for the high-accuracy installation of gravity or pressure pipelines on line and grade. Differing from other pipeline installations, HAB requires a more precise and automated guidance system for use in a practical project. This paper proposes an economic and enhanced automated optical guidance system, based on optimization research of light-emitting diode (LED) light target and five automated image processing bore-path deviation algorithms. An LED light target was optimized for many qualities, including light color, filter plate color, luminous intensity, and LED layout. The image preprocessing algorithm, direction location algorithm, angle measurement algorithm, deflection detection algorithm, and auto-focus algorithm, compiled in MATLAB, are used to automate image processing for deflection computing and judging. After multiple indoor experiments, this guidance system is applied in a project of hot water pipeline installation, with accuracy controlled within 2 mm in 48-m distance, providing accurate line and grade controls and verifying the feasibility and reliability of the guidance system.

  8. Study on data compression algorithm and its implementation in portable electronic device for Internet of Things applications

    NASA Astrophysics Data System (ADS)

    Asilah Khairi, Nor; Bahari Jambek, Asral

    2017-11-01

    An Internet of Things (IoT) device is usually powered by a small battery, which does not last long. As a result, saving energy in IoT devices has become an important issue. Since radio communication is the primary source of power consumption, several researchers have proposed compression algorithms with the purpose of reducing the amount of data transmitted. Several data compression algorithms from previous reference papers are discussed in this paper; their descriptions were collected and summarized in table form. From the analysis, the MAS compression algorithm was selected as the project prototype due to its high potential for meeting the project requirements and its better performance in terms of energy saving, memory usage, and data transmission efficiency. This method is also suitable for implementation in wireless sensor networks (WSNs). The MAS compression algorithm will be prototyped and applied in portable electronic devices for Internet of Things applications.

  9. Local ROI Reconstruction via Generalized FBP and BPF Algorithms along More Flexible Curves

    PubMed Central

    Ye, Yangbo; Zhao, Shiying; Wang, Ge

    2006-01-01

    We study the local region-of-interest (ROI) reconstruction problem, also referred to as the local CT problem. Our scheme includes two steps: (a) the local truncated normal-dose projections are extended to a global dataset by combining a few global low-dose projections; (b) the ROI is reconstructed by either the generalized filtered backprojection (FBP) or backprojection-filtration (BPF) algorithm. The simulation results show that both the FBP and BPF algorithms can reconstruct satisfactory results, with image quality in the ROI comparable to that of the corresponding global CT reconstruction. PMID:23165018

  10. Software Management Environment (SME): Components and algorithms

    NASA Technical Reports Server (NTRS)

    Hendrick, Robert; Kistler, David; Valett, Jon

    1994-01-01

    This document presents the components and algorithms of the Software Management Environment (SME), a management tool developed for the Software Engineering Branch (Code 552) of the Flight Dynamics Division (FDD) of the Goddard Space Flight Center (GSFC). The SME provides an integrated set of visually oriented, experience-based tools that can assist software development managers in managing and planning software development projects. This document describes and illustrates the analysis functions that underlie the SME's project monitoring, estimation, and planning tools. 'SME Components and Algorithms' is a companion reference to 'SME Concepts and Architecture' and 'Software Engineering Laboratory (SEL) Relationships, Models, and Management Rules.'

  11. Anytime synthetic projection: Maximizing the probability of goal satisfaction

    NASA Technical Reports Server (NTRS)

    Drummond, Mark; Bresina, John L.

    1990-01-01

    A projection algorithm is presented for incremental control rule synthesis. The algorithm synthesizes an initial set of goal achieving control rules using a combination of situation probability and estimated remaining work as a search heuristic. This set of control rules has a certain probability of satisfying the given goal. The probability is incrementally increased by synthesizing additional control rules to handle 'error' situations the execution system is likely to encounter when following the initial control rules. By using situation probabilities, the algorithm achieves a computationally effective balance between the limited robustness of triangle tables and the absolute robustness of universal plans.

  12. A selective-update affine projection algorithm with selective input vectors

    NASA Astrophysics Data System (ADS)

    Kong, NamWoong; Shin, JaeWook; Park, PooGyeon

    2011-10-01

    This paper proposes an affine projection algorithm (APA) with selective input vectors, based on the concept of selective update, in order to reduce estimation errors and computation. The algorithm consists of two procedures: input-vector selection and state decision. The input-vector-selection procedure determines the number of input vectors by checking, via the mean square error (MSE), whether the input vectors carry enough information for an update. The state-decision procedure determines the current state of the adaptive filter by using a state-decision criterion. While the adaptive filter is in the transient state, the algorithm updates the filter coefficients with the selected input vectors; as soon as the adaptive filter reaches the steady state, the update procedure is not performed. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity, and low update complexity for colored input signals.
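
    For reference, the generic APA update that the proposed selective-update scheme builds on, shown on a toy noiseless FIR identification problem; the projection order K, step size mu, and regularization delta are arbitrary choices here:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    N, K, mu, delta = 8, 4, 0.5, 1e-4             # taps, order, step, regularization
    w_true = rng.standard_normal(N)
    w = np.zeros(N)

    x_buf = np.zeros(N)
    X_hist, d_hist = [], []
    for n in range(2000):
        x_buf = np.roll(x_buf, 1); x_buf[0] = rng.standard_normal()
        X_hist.append(x_buf.copy()); d_hist.append(w_true @ x_buf)
        if len(X_hist) < K:
            continue
        X = np.array(X_hist[-K:]).T               # N x K matrix of recent inputs
        d = np.array(d_hist[-K:])
        e = d - X.T @ w                           # errors on the K latest samples
        # core APA step: project onto the affine set defined by the K constraints
        w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(K), e)

    print(np.linalg.norm(w - w_true))             # small: filter identified
    ```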

  13. Information technologies for taking into account risks in business development programme

    NASA Astrophysics Data System (ADS)

    Kalach, A. V.; Khasianov, R. R.; Rossikhina, L. V.; Zybin, D. G.; Melnik, A. A.

    2018-05-01

    The paper describes information technologies for taking risks into account in a business development programme, relying on an algorithm for assessing programme project risks and an algorithm for forming the programme under constrained financing of high-risk projects. A lower-bound estimation method is suggested for subsets of solutions, and the corresponding theorem and lemma are given with their proofs.

  14. Non-Algorithmic Issues in Automated Computational Mechanics

    DTIC Science & Technology

    1991-04-30

    Tworzydlo, Senior Research Engineer and Manager of Advanced Projects Group ... Professor J. T. Oden, President and Senior Scientist of COMCO, was project ... practical applications of the systems reported so far is due to the extremely arduous and complex development and management of a realistic knowledge base ... software, designed to effectively implement deep, algorithmic knowledge, and "intelligent" software, designed to manage shallow, heuristic

  15. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiu, Dongbin

    2017-03-03

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations at extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately resolve stochastic problems in high-dimensional spaces with limited smoothness, even those containing discontinuities.

  16. Theory and Applications of Computational Time-Reversal Imaging

    DTIC Science & Technology

    2007-05-03

    Experimental data collected by a research team from Carnegie Mellon University illustrate the use of the algorithms developed in the project; early results from the CMU experimental data include basic time-reversal imaging.

  17. Ligated Metal Clusters - Structures, Energy and Reactivity

    DTIC Science & Technology

    2016-04-01

    projection superposition approximation (PSA) algorithm through a more careful consideration of how to calculate cross sections for elongated molecules ... superposition approximation (PSA) is now complete. We have made it available free of charge to the scientific community on a dedicated website at UCSB. We ... by AFOSR. We continued to improve the projection superposition approximation (PSA) algorithm through a more careful consideration of how to calculate

  18. Hybrid Nested Partitions and Math Programming Framework for Large-scale Combinatorial Optimization

    DTIC Science & Technology

    2010-03-31

    optimization problems: 1) exact algorithms and 2) metaheuristic algorithms. This project will integrate concepts from these two technologies to develop ... optimal solutions within an acceptable amount of computation time, and 2) metaheuristic algorithms such as genetic algorithms, tabu search, and the ... integer programming decomposition approaches, such as Dantzig-Wolfe decomposition and Lagrangian relaxation, and metaheuristics such as the Nested

  19. Sliding Window Generalized Kernel Affine Projection Algorithm Using Projection Mappings

    NASA Astrophysics Data System (ADS)

    Slavakis, Konstantinos; Theodoridis, Sergios

    2008-12-01

    Very recently, a solution to the kernel-based online classification problem was given by the adaptive projected subgradient method (APSM). The developed algorithm can be considered a generalization of the kernel affine projection algorithm (APA) and of kernel normalized least mean squares (NLMS). Furthermore, sparsification of the resulting kernel series expansion was achieved by imposing a closed-ball (convex set) constraint on the norm of the classifiers. This paper presents another sparsification method for the APSM approach to the online classification task, based on generating a sequence of linear subspaces in a reproducing kernel Hilbert space (RKHS). To cope with the inherent memory limitations of online systems and to embed tracking capabilities into the design, an upper bound on the dimension of the linear subspaces is imposed. The underlying principle of the design is the notion of projection mappings. Classification is performed by metric projection mappings, sparsification is achieved by orthogonal projections, while the online system's memory requirements and tracking are attained by oblique projections. The resulting sparsification scheme shows strong similarities with classical sliding-window adaptive schemes. The proposed design is validated on the adaptive equalization problem of a nonlinear communication channel and is compared with classical and recent stochastic gradient descent techniques, as well as with the APSM solution in which sparsification is performed by a closed-ball constraint on the norm of the classifiers.
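
    The APSM machinery above is intricate, so the following is only a rough, runnable illustration of two ingredients the abstract highlights: a kernel series expansion and a fixed-budget dictionary maintained as a sliding window. It is a plain sliding-window kernel NLMS learner, not the authors' APSM with metric, orthogonal, and oblique projections; all parameters are arbitrary.

    ```python
    import numpy as np

    def gaussian_kernel(x, y, sigma):
        return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

    class SlidingWindowKernelNLMS:
        """Toy online learner: f(x) = sum_i a_i k(c_i, x) over a sliding window."""
        def __init__(self, window=40, step=0.6, sigma=0.3):
            self.window, self.step, self.sigma = window, step, sigma
            self.centers, self.coeffs = [], []

        def predict(self, x):
            return sum(a * gaussian_kernel(c, x, self.sigma)
                       for a, c in zip(self.coeffs, self.centers))

        def update(self, x, y):
            err = y - self.predict(x)
            # Normalized step; k(x, x) = 1 for a Gaussian kernel.
            self.centers.append(x)
            self.coeffs.append(self.step * err)
            if len(self.centers) > self.window:  # enforce the memory budget:
                self.centers.pop(0)              # drop the oldest atom
                self.coeffs.pop(0)
            return err

    # Toy nonlinear task: learn y = sin(3x) from a noisy stream.
    rng = np.random.default_rng(0)
    model = SlidingWindowKernelNLMS()
    for _ in range(500):
        x = rng.uniform(-1, 1, size=1)
        model.update(x, np.sin(3 * x[0]) + 0.05 * rng.normal())
    print("error at x=0.2:", abs(np.sin(0.6) - model.predict(np.array([0.2]))))
    ```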

  20. Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks

    PubMed Central

    Chen, Jianhui; Liu, Ji; Ye, Jieping

    2013-01-01

    We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose to employ the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is non-differentiable and the feasible domain is non-trivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of the projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and a Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in detail. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multi-task learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms. PMID:24077658

  1. Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks.

    PubMed

    Chen, Jianhui; Liu, Ji; Ye, Jieping

    2012-02-01

    We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose to employ the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is non-differentiable and the feasible domain is non-trivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of the projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and a Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in detail. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multi-task learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms.
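
    For the low-rank part, the Euclidean projection subproblem has a well-known closed form: projecting a matrix onto a trace-norm ball amounts to an SVD followed by a Euclidean projection of the singular values onto an l1 ball. The sketch below shows that standard construction; it is a generic illustration rather than the authors' code, and the radius tau is arbitrary.

    ```python
    import numpy as np

    def project_l1_ball(v, tau):
        """Euclidean projection of a nonnegative vector onto {u >= 0, sum(u) <= tau}."""
        if v.sum() <= tau:
            return v
        u = np.sort(v)[::-1]                      # sort descending
        css = np.cumsum(u)
        rho = np.nonzero(u - (css - tau) / (np.arange(len(u)) + 1) > 0)[0][-1]
        theta = (css[rho] - tau) / (rho + 1.0)    # soft-threshold level
        return np.maximum(v - theta, 0.0)

    def project_trace_norm_ball(W, tau):
        """Project W onto {M : ||M||_* <= tau} by shrinking its singular values."""
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        return U @ np.diag(project_l1_ball(s, tau)) @ Vt

    rng = np.random.default_rng(1)
    P = project_trace_norm_ball(rng.normal(size=(6, 4)), tau=2.0)
    print(np.linalg.svd(P, compute_uv=False).sum())  # nuclear norm ~ 2.0
    ```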

  2. Respiratory motion correction in 4D-PET by simultaneous motion estimation and image reconstruction (SMEIR)

    PubMed Central

    Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing

    2016-01-01

    In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: 1) the reconstruction algorithms do not make full use of projection statistics; and 2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10 to 40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading up to 150% tumor size overestimation and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET. PMID:27385378

  3. Respiratory motion correction in 4D-PET by simultaneous motion estimation and image reconstruction (SMEIR)

    NASA Astrophysics Data System (ADS)

    Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing

    2016-08-01

    In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: (1) the reconstruction algorithms do not make full use of projection statistics; and (2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10-40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading up to 150% tumor size overestimation and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET.
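
    As a concrete anchor for the reconstruction step used inside SMEIR, here is a toy ordered-subset EM (OSEM) update with a dense random system matrix. It omits everything specific to SMEIR (the motion model, DVF updates, and the TV term); the sizes and counts are invented so the snippet runs.

    ```python
    import numpy as np

    def osem(A, y, n_subsets=4, n_iters=10):
        """Toy OSEM: A is the (n_rays, n_voxels) system matrix, y the measured counts."""
        n_rays, n_vox = A.shape
        x = np.ones(n_vox)                          # positive initial image
        subsets = np.array_split(np.arange(n_rays), n_subsets)
        for _ in range(n_iters):
            for idx in subsets:                     # one multiplicative update per subset
                As = A[idx]
                ratio = y[idx] / np.maximum(As @ x, 1e-12)
                x *= (As.T @ ratio) / np.maximum(As.T @ np.ones(len(idx)), 1e-12)
        return x

    rng = np.random.default_rng(0)
    A = rng.random((64, 16))
    x_true = rng.random(16)
    y = rng.poisson(A @ x_true * 50) / 50.0         # noisy "projection" data
    print(np.round(osem(A, y), 2))
    ```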

  4. Analytic reconstruction algorithms for triple-source CT with horizontal data truncation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Ming; Yu, Hengyong, E-mail: hengyong-yu@ieee.org

    2015-10-15

    Purpose: This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. Methods: The study is conducted by using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. Results: A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended to horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. Conclusions: The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units.

  5. Analytic reconstruction algorithms for triple-source CT with horizontal data truncation.

    PubMed

    Chen, Ming; Yu, Hengyong

    2015-10-01

    This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. The study is conducted by using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended to horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units.
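
    The shift-invariant filtering mentioned in the conclusion is easiest to see in the simplest relative of these methods, parallel-beam filtered backprojection: every view is filtered with the same ramp kernel and then smeared back across the image. The sketch below is that generic FBP, not the paper's triple-source fan-beam algorithm; the geometry conventions and the discrete ramp filter are deliberately simplified.

    ```python
    import numpy as np
    from scipy.ndimage import rotate

    def fbp_parallel(sinogram, angles_deg):
        """Toy parallel-beam FBP: ramp-filter each view, then backproject.
        sinogram has shape (n_angles, N) for an N x N reconstruction."""
        n_angles, N = sinogram.shape
        # Same (shift-invariant) ramp filter for every view, in Fourier space.
        ramp = np.abs(np.fft.fftfreq(N))
        filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
        xs = np.arange(N) - (N - 1) / 2
        X, Y = np.meshgrid(xs, xs)
        recon = np.zeros((N, N))
        for view, theta in zip(filtered, np.deg2rad(angles_deg)):
            t = X * np.cos(theta) + Y * np.sin(theta) + (N - 1) / 2
            recon += np.interp(t, np.arange(N), view, left=0.0, right=0.0)
        return recon * np.pi / n_angles

    # Smoke test: a centered square phantom, 180 views over 180 degrees.
    N, angles = 64, np.arange(180)
    phantom = np.zeros((N, N)); phantom[24:40, 24:40] = 1.0
    sino = np.array([rotate(phantom, -a, reshape=False, order=1).sum(axis=0)
                     for a in angles])
    print(fbp_parallel(sino, angles).shape)
    ```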

  6. Volumetric display containing multiple two-dimensional color motion pictures

    NASA Astrophysics Data System (ADS)

    Hirayama, R.; Shiraki, A.; Nakayama, H.; Kakue, T.; Shimobaba, T.; Ito, T.

    2014-06-01

    We have developed an algorithm which can record multiple two-dimensional (2-D) gradated projection patterns in a single three-dimensional (3-D) object. Each recorded pattern has its own projection direction and can only be seen from that direction. The proposed algorithm has two important features: the number of recorded patterns is theoretically infinite, and no meaningful pattern can be seen outside of the projection directions. In this paper, we extend the algorithm to record multiple 2-D projection patterns in color. There are two popular ways of color mixing: additive and subtractive. Additive color mixing, used to mix light, is based on RGB colors; subtractive color mixing, used to mix inks, is based on CMY colors. We devised two coloring methods, based on additive and subtractive mixing respectively, performed numerical simulations of both, and confirmed their effectiveness. We also fabricated two types of volumetric display and applied the proposed algorithm to them. One is a cubic display constructed from light-emitting diodes (LEDs) in an 8×8×8 array, whose lighting patterns are controlled by a microcomputer board. The other is made of a 7×7 array of threads, each illuminated by a projector connected to a PC. As a result of the implementation, we succeeded in recording multiple 2-D color motion pictures in the volumetric displays. Our algorithm can be applied to digital signage, media art, and so forth.

  7. CUDA-based high-performance computing of the S-BPF algorithm with no-waiting pipelining

    NASA Astrophysics Data System (ADS)

    Deng, Lin; Yan, Bin; Chang, Qingmei; Han, Yu; Zhang, Xiang; Xi, Xiaoqi; Li, Lei

    2015-10-01

    The backprojection-filtration (BPF) algorithm has become a good solution for local reconstruction in cone-beam computed tomography (CBCT). However, the reconstruction speed of BPF is a severe limitation for clinical applications. The selective-backprojection filtration (S-BPF) algorithm was developed to improve the parallel performance of BPF through selective backprojection. Furthermore, the general-purpose graphics processing unit (GP-GPU) is a popular tool for accelerating reconstruction, and much work has been aimed at optimizing the cone-beam backprojection. As the cone-beam backprojection becomes faster, data transportation takes a much larger share of the reconstruction time than before. This paper focuses on minimizing the total reconstruction time of the S-BPF algorithm by hiding the data transportation among hard disk, CPU and GPU. Based on an analysis of the S-BPF algorithm, several strategies are implemented: (1) asynchronous calls are used to overlap CPU and GPU execution; (2) an innovative strategy is applied to obtain the DBP image while hiding the transport time effectively; (3) two streams for data transportation and calculation are synchronized by cudaEvent in the inverse finite Hilbert transform on the GPU. Our main contribution is a reconstruction scheme for the S-BPF algorithm in which the GPU computes continuously with no data transportation time cost: a 512³ volume is reconstructed in less than 0.7 s on a single Tesla-based K20 GPU from 182 projection views of 512² pixels each, about half the time of the implementation without the overlapping behavior.

  8. Analysis of a new phase and height algorithm in phase measurement profilometry

    NASA Astrophysics Data System (ADS)

    Bian, Xintian; Zuo, Fen; Cheng, Ju

    2018-04-01

    Traditional phase measurement profilometry adopts divergent illumination to obtain the height distribution of a measured object accurately. However, the mapping relation between reference plane coordinates and phase distribution must be calculated before measurement, and the data stored in the computer in the form of a data table for later use. This study improves the distribution of the projected fringes and derives the phase-height mapping algorithm for the case where the two pupils of the projection and imaging systems are at unequal heights and the projection and imaging axes lie on different planes. With this algorithm, calculating the mapping relation between reference plane coordinates and phase distribution prior to measurement is unnecessary. Thus, the measurement process is simplified, and the construction of an experimental system is made easy. Computer simulation and experimental results confirm the effectiveness of the method.

  9. Combinatorial Optimization in Project Selection Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Dewi, Sari; Sawaluddin

    2018-01-01

    This paper discusses the problem of project selection in the presence of two objective functions, maximizing profit and minimizing cost, under constraints of limited resource availability (human resources, machine resources, and raw material resources) and limited time, so that resources must be allocated across projects without exceeding the predetermined budget. The problem can thus be formulated as a multi-objective mathematical program whose constraints must be satisfied. To assist the project selection process, a multi-objective combinatorial optimization approach is used to obtain an optimal solution for selecting the right projects. A multi-objective genetic algorithm is then described as one such approach, simplifying the project selection process for problems of large scope.
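
    As an illustration of the approach described, binary selection chromosomes with a weighted multi-objective fitness and penalties for violated resource limits, here is a compact GA sketch. The project data, objective weights, and penalty constant are invented for the example and are not from the paper.

    ```python
    import numpy as np
    rng = np.random.default_rng(42)

    # Hypothetical per-project profit, cost, and labor-hours usage.
    profit = np.array([12, 9, 15, 7, 11, 6.0])
    cost = np.array([5, 4, 8, 3, 6, 2.0])
    hours = np.array([30, 20, 45, 15, 35, 10.0])
    BUDGET, HOURS = 18.0, 100.0

    def fitness(pop):
        """Weighted two-objective score, penalized when limits are exceeded."""
        p, c, h = pop @ profit, pop @ cost, pop @ hours
        penalty = np.maximum(c - BUDGET, 0) + np.maximum(h - HOURS, 0)
        return p - 0.5 * c - 10.0 * penalty  # maximize profit, minimize cost

    def ga(pop_size=40, n_gen=60, p_mut=0.1):
        pop = rng.integers(0, 2, size=(pop_size, len(profit)))
        for _ in range(n_gen):
            f = fitness(pop)
            i, j = rng.integers(0, pop_size, (2, pop_size))
            parents = pop[np.where(f[i] > f[j], i, j)]      # tournament selection
            cut = rng.integers(1, len(profit), pop_size)    # one-point crossover
            mask = np.arange(len(profit)) < cut[:, None]
            children = np.where(mask, parents, parents[::-1])
            flips = rng.random(children.shape) < p_mut      # bit-flip mutation
            pop = np.where(flips, 1 - children, children)
        best = pop[np.argmax(fitness(pop))]
        return best, float(fitness(best[None])[0])

    print(ga())  # -> (selection vector, fitness score)
    ```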

  10. WE-AB-204-09: Respiratory Motion Correction in 4D-PET by Simultaneous Motion Estimation and Image Reconstruction (SMEIR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalantari, F; Wang, J; Li, T

    2015-06-15

    Purpose: In conventional 4D-PET, images from different frames are reconstructed individually and aligned by registration methods. Two issues with these approaches are: 1) reconstruction algorithms do not make full use of all projection statistics; and 2) image registration between noisy images can result in poor alignment. In this study we investigated the use of the simultaneous motion estimation and image reconstruction (SMEIR) method, originally developed for cone-beam CT, for motion estimation/correction in 4D-PET. Methods: A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) is used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons-derived deformation vector fields (DVFs) as initial motion vectors. A motion model update is performed to obtain an optimal set of DVFs between the pmc-PET and other phases by matching the forward projection of the deformed pmc-PET with the measured projections of the other phases. Using the updated DVFs, OSEM-TV image reconstruction is repeated and new DVFs are estimated based on the updated images. A 4D-XCAT phantom with typical FDG biodistribution and a 10 mm diameter tumor was used to evaluate the performance of the SMEIR algorithm. Results: Image quality of 4D-PET is greatly improved by the SMEIR algorithm. When all projections are used to reconstruct a 3D-PET, motion blurring artifacts are present, leading to a more than 5-fold overestimation of the tumor size and a 54% underestimation of the tumor-to-lung contrast ratio. This error is reduced to 37% and 20% for post-reconstruction registration methods and SMEIR, respectively. Conclusion: The SMEIR method can be used for motion estimation/correction in 4D-PET. The statistics are greatly improved since all projection data are combined to update the image. The performance of the SMEIR algorithm for 4D-PET is sensitive to the smoothness control parameters in the DVF estimation step.

  11. Research On Vehicle-Based Driver Status/Performance Monitoring; Development, Validation, And Refinement Of Algorithms For Detection Of Driver Drowsiness, Final Report

    DOT National Transportation Integrated Search

    1994-12-01

    THIS REPORT SUMMARIZES THE RESULTS OF A 3-YEAR RESEARCH PROJECT TO DEVELOP RELIABLE ALGORITHMS FOR THE DETECTION OF MOTOR VEHICLE DRIVER IMPAIRMENT DUE TO DROWSINESS. THESE ALGORITHMS ARE BASED ON DRIVING PERFORMANCE MEASURES THAT CAN POTENTIALLY BE ...

  12. Tomography by iterative convolution - Empirical study and application to interferometry

    NASA Technical Reports Server (NTRS)

    Vest, C. M.; Prikryl, I.

    1984-01-01

    An algorithm for computer tomography has been developed that is applicable to reconstruction from data having incomplete projections because an opaque object blocks some of the probing radiation as it passes through the object field. The algorithm is based on iteration between the object domain and the projection (Radon transform) domain. Reconstructions are computed during each iteration by the well-known convolution method. Although it is demonstrated that this algorithm does not converge, an empirically justified criterion for terminating the iteration when the most accurate estimate has been computed is presented. The algorithm has been studied by using it to reconstruct several different object fields with several different opaque regions. It also has been used to reconstruct aerodynamic density fields from interferometric data recorded in wind tunnel tests.

  13. Tritium permeation model for plasma facing components

    NASA Astrophysics Data System (ADS)

    Longhurst, G. R.

    1992-12-01

    This report documents the development of a simplified one-dimensional tritium permeation and retention model. The model makes use of the same physical mechanisms as more sophisticated, time-transient codes such as implantation, recombination, diffusion, trapping and thermal gradient effects. It takes advantage of a number of simplifications and approximations to solve the steady-state problem and then provides interpolating functions to make estimates of intermediate states based on the steady-state solution. The model is developed for solution using commercial spread-sheet software such as Lotus 123. Comparison calculations are provided with the verified and validated TMAP4 transient code with good agreement. Results of calculations for the ITER CDA diverter are also included.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naseri, M; Rajabi, H; Wang, J

    Purpose: Respiration causes lesion smearing, image blurring and quality degradation, affecting lesion contrast and the ability to define correct lesion size. The spatial resolution of current multi-pinhole SPECT (MPHS) scanners is sub-millimeter; therefore, the effect of motion is more noticeable than in conventional SPECT scanners. Gated imaging aims to reduce motion artifacts, but a major issue in gating is the lack of statistics, and the individual reconstructed frames are noisy. The increased noise in each frame deteriorates the quantitative accuracy of the MPHS images. The objective of this work is to enhance the image quality in 4D-MPHS imaging by 4D image reconstruction. Methods: The new algorithm requires deformation vector fields (DVFs) that are calculated by non-rigid Demons registration. The algorithm is based on the motion-incorporated version of the ordered subset expectation maximization (OSEM) algorithm. This iterative algorithm can make full use of all projections to reconstruct each individual frame. To evaluate the performance of the proposed algorithm, a simulation study was conducted. A fast ray tracing method was used to generate MPHS projections of a 4D digital mouse phantom with a small tumor in the liver in eight different respiratory phases. To evaluate the potential of the 4D-OSEM algorithm, the tumor-to-liver activity ratio was compared with other image reconstruction methods, including 3D-MPHS and post-reconstruction registration with Demons-derived DVFs. Results: Image quality of 4D-MPHS is greatly improved by the 4D-OSEM algorithm. When all projections are used to reconstruct a 3D-MPHS, motion blurring artifacts are present, leading to overestimation of the tumor size and 24% tumor contrast underestimation. This error is reduced to 16% and 10% for post-reconstruction registration methods and 4D-OSEM, respectively. Conclusion: The 4D-OSEM method can be used for motion correction in 4D-MPHS. The statistics and quantification are improved since all projection data are combined to update the image.

  15. Algorithm for Overcoming the Curse of Dimensionality for Certain Non-convex Hamilton-Jacobi Equations, Projections and Differential Games

    DTIC Science & Technology

    2016-05-01

    Report excerpts: Algorithm for Overcoming the Curse of Dimensionality for Certain Non-convex Hamilton-Jacobi Equations, Projections and Differential Games. Yat Tin ... subproblems. Our approach is expected to have wide applications in continuous dynamic games, control theory problems, and elsewhere. Mathematics ... differential dynamic games, control theory problems, and dynamical systems coming from the physical world, e.g. [11]. An important application is to ...

  16. The diffusive finite state projection algorithm for efficient simulation of the stochastic reaction-diffusion master equation.

    PubMed

    Drawert, Brian; Lawson, Michael J; Petzold, Linda; Khammash, Mustafa

    2010-02-21

    We have developed a computational framework for accurate and efficient simulation of stochastic spatially inhomogeneous biochemical systems. The new computational method employs a fractional step hybrid strategy. A novel formulation of the finite state projection (FSP) method, called the diffusive FSP method, is introduced for the efficient and accurate simulation of diffusive transport. Reactions are handled by the stochastic simulation algorithm.
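
    The reaction half of that fractional step can be illustrated with the textbook stochastic simulation algorithm (Gillespie's direct method). The sketch below runs it on a toy birth-death system; the diffusive FSP half of the method is beyond a short example, and the rate constants are arbitrary.

    ```python
    import numpy as np

    def gillespie_birth_death(k_birth=10.0, k_death=0.5, x0=0, t_end=20.0, seed=0):
        """SSA for 0 -> X at rate k_birth and X -> 0 at rate k_death * x."""
        rng = np.random.default_rng(seed)
        t, x, times, states = 0.0, x0, [0.0], [x0]
        while t < t_end:
            rates = np.array([k_birth, k_death * x])
            total = rates.sum()
            t += rng.exponential(1.0 / total)           # time to the next event
            which = rng.choice(2, p=rates / total)      # which reaction fires
            x += 1 if which == 0 else -1
            times.append(t); states.append(x)
        return np.array(times), np.array(states)

    times, states = gillespie_birth_death()
    # The stationary mean should approach k_birth / k_death = 20.
    print("late-time mean copy number:", states[len(states) // 2:].mean())
    ```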

  17. A New Model for Solving Time-Cost-Quality Trade-Off Problems in Construction

    PubMed Central

    Fu, Fang; Zhang, Tao

    2016-01-01

    Poor quality affects project makespan and total cost negatively, but it can be recovered by repair work during construction. We construct a new non-linear programming model, based on the classic multi-mode resource-constrained project scheduling problem, that takes repair work into account. In order to obtain satisfactory quality without a large increase in project cost, the objective is to minimize the total quality cost, which consists of the prevention cost and the failure cost according to quality-cost analysis. A binary dependent normal distribution function is adopted to describe activity quality; cumulative quality is defined to determine whether to initiate repair work, according to the different relationships among activity qualities, namely the coordinative and precedence relationships. Furthermore, a shuffled frog-leaping algorithm is developed to solve this discrete trade-off problem, based on an adaptive serial schedule generation scheme and an adjusted activity list. In the algorithm, the frog-leaping step combines the crossover operator of a genetic algorithm with a permutation-based local search. Finally, an example of a construction project for a framed railway overpass is provided to examine the algorithm's performance; it assists decision making by searching for the appropriate makespan and quality threshold with minimal cost. PMID:27911939

  18. Marginal semi-supervised sub-manifold projections with informative constraints for dimensionality reduction and recognition.

    PubMed

    Zhang, Zhao; Zhao, Mingbo; Chow, Tommy W S

    2012-12-01

    In this work, the sub-manifold projection based semi-supervised dimensionality reduction (DR) problem, learning from partially constrained data, is discussed. Two semi-supervised DR algorithms, termed Marginal Semi-Supervised Sub-Manifold Projections (MS³MP) and orthogonal MS³MP (OMS³MP), are proposed. MS³MP in the singular case is also discussed, and we present the weighted least squares view of MS³MP. By specifying the types of neighborhoods with pairwise constraints (PC) and the defined manifold scatters, our methods can preserve the local properties of all points and the discriminant structures embedded in the localized PC. The sub-manifolds of different classes can also be separated. In PC-guided methods, exploring and selecting the informative constraints is challenging, and random constraint subsets significantly affect the performance of the algorithms. This paper also introduces an effective technique to select the informative constraints for DR with consistent constraints. The analytic form of the projection axes can be obtained by eigen-decomposition. The connections between this work and other related work are also elaborated. The validity of the proposed constraint selection approach and DR algorithms is evaluated on benchmark problems. Extensive simulations show that our algorithms deliver promising results compared with some widely used state-of-the-art semi-supervised DR techniques. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Measurement technique for in situ characterizing aberrations of projection optics in lithographic tools.

    PubMed

    Wang, Fan; Wang, Xiangzhao; Ma, Mingying

    2006-08-20

    As the feature size decreases, degradation of image quality caused by wavefront aberrations of projection optics in lithographic tools has become a serious problem in the low-k1 process. We propose a novel measurement technique for in situ characterizing aberrations of projection optics in lithographic tools. Considering the impact of the partial coherence illumination, we introduce a novel algorithm that accurately describes the pattern displacement and focus shift induced by aberrations. Employing the algorithm, the measurement condition is extended from three-beam interference to two-, three-, and hybrid-beam interferences. The experiments are performed to measure the aberrations of projection optics in an ArF scanner.

  20. Evaluating ACLS Algorithms for the International Space Station (ISS) - A Paradigm Revisited

    NASA Technical Reports Server (NTRS)

    Alexander, Dave; Brandt, Keith; Locke, James; Hurst, Victor, IV; Mack, Michael D.; Pettys, Marianne; Smart, Kieran

    2007-01-01

    The ISS may have communication gaps of up to 45 minutes during each orbit and therefore it is imperative to have medical protocols, including an effective ACLS algorithm, that can be reliably autonomously executed during flight. The aim of this project was to compare the effectiveness of the current ACLS algorithm with an improved algorithm having a new navigation format.

  1. Cerebral 18F-FDG PET in macrophagic myofasciitis: An individual SVM-based approach.

    PubMed

    Blanc-Durand, Paul; Van Der Gucht, Axel; Guedj, Eric; Abulizi, Mukedaisi; Aoun-Sebaiti, Mehdi; Lerman, Lionel; Verger, Antoine; Authier, François-Jérôme; Itti, Emmanuel

    2017-01-01

    Macrophagic myofasciitis (MMF) is an emerging condition with highly specific myopathological alterations. A peculiar spatial pattern of cerebral glucose hypometabolism involving the occipito-temporal cortex and cerebellum has been reported in patients with MMF; however, the full pattern is not systematically present in routine interpretation of scans, and its severity varies with the cognitive profile of the patient. The aim was to generate and evaluate a support vector machine (SVM) procedure to classify patients between healthy and MMF 18F-FDG brain profiles. 18F-FDG PET brain images of 119 patients with MMF and 64 healthy subjects were retrospectively analyzed. The whole population was divided into two groups: a training set (100 MMF, 44 healthy subjects) and a testing set (19 MMF, 20 healthy subjects). Dimensionality reduction was performed using a t-map from statistical parametric mapping (SPM), and an SVM with a linear kernel was trained on the training set. To evaluate the performance of the SVM classifier, values of sensitivity (Se), specificity (Sp), positive predictive value (PPV), negative predictive value (NPV) and accuracy (Acc) were calculated. The SPM12 analysis on the training set exhibited the previously reported hypometabolism pattern involving the occipito-temporal and fronto-parietal cortices, limbic system and cerebellum. The SVM procedure, based on the t-test mask generated from the training set, correctly classified MMF patients of the testing set with the following Se, Sp, PPV, NPV and Acc: 89%, 85%, 85%, 89%, and 87%, respectively. We developed an original, individual-level approach using an SVM to classify patients between healthy and MMF metabolic brain profiles with 18F-FDG-PET. Machine learning algorithms are promising for computer-aided diagnosis but will need further validation in prospective cohorts.
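
    A minimal sketch of the classification stage in scikit-learn, assuming each subject has already been reduced to a feature vector (the study used voxel values inside the SPM t-map mask; random vectors stand in here so the snippet runs). The group sizes follow the abstract; the feature dimension and class separation are invented.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(0)
    X_train = np.vstack([rng.normal(0.0, 1, (100, 200)),   # stand-in "MMF" profiles
                         rng.normal(0.8, 1, (44, 200))])   # stand-in "healthy" profiles
    y_train = np.array([1] * 100 + [0] * 44)
    X_test = np.vstack([rng.normal(0.0, 1, (19, 200)),
                        rng.normal(0.8, 1, (20, 200))])
    y_test = np.array([1] * 19 + [0] * 20)

    clf = SVC(kernel="linear").fit(X_train, y_train)       # linear kernel, as in the paper
    tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
    se, sp = tp / (tp + fn), tn / (tn + fp)
    ppv, npv = tp / (tp + fp), tn / (tn + fn)
    acc = (tp + tn) / len(y_test)
    print(f"Se={se:.2f} Sp={sp:.2f} PPV={ppv:.2f} NPV={npv:.2f} Acc={acc:.2f}")
    ```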

  2. Monochromatic-beam-based dynamic X-ray microtomography based on OSEM-TV algorithm.

    PubMed

    Xu, Liang; Chen, Rongchang; Yang, Yiming; Deng, Biao; Du, Guohao; Xie, Honglan; Xiao, Tiqiao

    2017-01-01

    Monochromatic-beam-based dynamic X-ray computed microtomography (CT) was developed to observe the evolution of microstructure inside samples. However, the low flux density results in low efficiency of data collection. To increase efficiency, reducing the number of projections is a practical solution, but it degrades image reconstruction quality under the traditional filtered back projection (FBP) algorithm. In this study, an iterative reconstruction method using an ordered subset expectation maximization-total variation (OSEM-TV) algorithm was employed to address this problem. The simulated results demonstrated that the normalized mean square error of the image slices reconstructed by the OSEM-TV algorithm was about 1/4 of that by FBP. Experimental results also demonstrated that the density resolution of OSEM-TV was high enough to resolve different materials with fewer than 100 projections. As a result, with the introduction of OSEM-TV, monochromatic-beam-based dynamic X-ray microtomography becomes practicable for quantitative, non-destructive analysis of the evolution of microstructure, with acceptable data-collection efficiency and reconstructed image quality.

  3. The McGill Interactive Pediatric OncoGenetic Guidelines: An approach to identifying pediatric oncology patients most likely to benefit from a genetic evaluation.

    PubMed

    Goudie, Catherine; Coltin, Hallie; Witkowski, Leora; Mourad, Stephanie; Malkin, David; Foulkes, William D

    2017-08-01

    Identifying cancer predisposition syndromes in children with tumors is crucial, yet few clinical guidelines exist to identify children at high risk of having germline mutations. The McGill Interactive Pediatric OncoGenetic Guidelines project aims to create a validated pediatric guideline in the form of a smartphone/tablet application using algorithms to process clinical data and help determine whether to refer a child for genetic assessment. This paper discusses the initial stages of the project, focusing on its overall structure, the methodology underpinning the algorithms, and the upcoming algorithm validation process. © 2017 Wiley Periodicals, Inc.

  4. Investigation of contrast-enhanced subtracted breast CT images with MAP-EM based on projection-based weighting imaging.

    PubMed

    Zhou, Zhengdong; Guan, Shaolin; Xin, Runchao; Li, Jianbo

    2018-06-01

    Contrast-enhanced subtracted breast computed tomography (CESBCT) images acquired using an energy-resolving photon counting detector can be helpful for enhancing the visibility of breast tumors. In such technology, one challenge is the limited number of photons in each energy bin, possibly leading to high noise in the separate images from each energy bin, in the projection-based weighted image, and in the subtracted image. In conventional low-dose CT imaging, iterative image reconstruction provides a superior signal-to-noise ratio compared with the filtered back projection (FBP) algorithm. In this paper, maximum a posteriori expectation maximization (MAP-EM) based on projection-based weighting imaging is proposed for reconstruction of CESBCT images acquired with an energy-resolving photon counting detector, and its performance was investigated in terms of contrast-to-noise ratio (CNR). The simulation study shows that MAP-EM based on projection-based weighting imaging can improve the CNR in CESBCT images by 117.7%-121.2% compared with FBP based on projection-based weighting imaging. When compared with energy-integrating imaging using the MAP-EM algorithm, projection-based weighting imaging using the MAP-EM algorithm improves the CNR of CESBCT images by 10.5%-13.3%. In conclusion, MAP-EM based on projection-based weighting imaging shows significant improvement in the CNR of CESBCT images compared with FBP based on projection-based weighting imaging, and it outperforms MAP-EM based on energy-integrating imaging for CESBCT imaging.

  5. Integrated Graphics Operations and Analysis Lab Development of Advanced Computer Graphics Algorithms

    NASA Technical Reports Server (NTRS)

    Wheaton, Ira M.

    2011-01-01

    The focus of this project is to aid the IGOAL in researching and implementing algorithms for advanced computer graphics. First, this project focused on porting the current International Space Station (ISS) Xbox experience to the web. Previously, the ISS interior fly-around education and outreach experience ran only on an Xbox 360. One of the desires was to take this experience and make it into something that can be put on NASA's educational site for anyone to access. The current code works in the Unity game engine, which does have cross-platform capability but is not 100% compatible. The tasks for an intern completing this portion consisted of gaining familiarity with Unity and the current ISS Xbox code, porting the Xbox code to the web as is, and modifying the code to work well as a web application. In addition, a procedurally generated cloud algorithm will be developed. Currently, the clouds used in AGEA animations and the Xbox experiences are a texture map. The desire is to create a procedurally generated cloud algorithm to provide dynamically generated clouds for both AGEA animations and the Xbox experiences. This task consists of gaining familiarity with AGEA and the plug-in interface, developing the algorithm, creating an AGEA plug-in to implement the algorithm inside AGEA, and creating a Unity script to implement the algorithm for the Xbox. This portion of the project could not be completed in the time frame of the internship; however, the IGOAL will continue to work on it in the future.
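
    One generic way to generate procedural clouds, shown here only as a sketch and not as the IGOAL algorithm, is spectral synthesis: shape white noise with a 1/f^beta spectral falloff to get smooth fractal structure, then remap it to cloud cover. Python is used for brevity even though the project itself targets Unity and AGEA plug-ins.

    ```python
    import numpy as np

    def cloud_texture(n=256, beta=2.0, seed=0):
        """Fractal cloud layer: white noise shaped by a 1/f^beta amplitude falloff."""
        rng = np.random.default_rng(seed)
        f = np.fft.fftfreq(n)
        radius = np.sqrt(f[:, None] ** 2 + f[None, :] ** 2)
        radius[0, 0] = 1.0                                 # avoid divide-by-zero at DC
        spectrum = np.fft.fft2(rng.normal(size=(n, n))) / radius ** (beta / 2)
        tex = np.real(np.fft.ifft2(spectrum))
        tex = (tex - tex.min()) / (tex.max() - tex.min())  # normalize to [0, 1]
        return np.clip((tex - 0.4) / 0.6, 0.0, 1.0)        # soft threshold to cover

    print(cloud_texture().shape)  # feed the array to a renderer as a cloud mask
    ```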

  6. Hardware architecture for projective model calculation and false match refining using random sample consensus algorithm

    NASA Astrophysics Data System (ADS)

    Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid

    2016-11-01

    The projective model is an important mapping function for the calculation of the global transformation between two images. However, its hardware implementation is challenging because of the large number of coefficients with different required precisions for fixed-point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and for refining false matches using the random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed-point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented in the Verilog hardware description language, and the functionality of the design was validated through several experiments. The proposed architecture was synthesized using an application-specific integrated circuit digital design flow in 180-nm CMOS technology, as well as on a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with a software implementation.
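
    For reference, the two operations the architecture implements, fitting a projective model from point correspondences and rejecting false matches with RANSAC, look like the following in software form. This is a standard DLT-plus-RANSAC sketch rather than the paper's fixed-point hardware design; thresholds and iteration counts are arbitrary.

    ```python
    import numpy as np

    def homography_dlt(src, dst):
        """Estimate a 3x3 projective model from >= 4 point pairs via the DLT."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, Vt = np.linalg.svd(np.asarray(rows, float))
        H = Vt[-1].reshape(3, 3)              # null-space solution
        return H / H[2, 2]

    def ransac_homography(src, dst, n_iter=500, thresh=2.0, seed=0):
        rng = np.random.default_rng(seed)
        src_h = np.hstack([src, np.ones((len(src), 1))])
        best_H, best_inliers = None, 0
        for _ in range(n_iter):
            idx = rng.choice(len(src), 4, replace=False)   # minimal sample
            H = homography_dlt(src[idx], dst[idx])
            proj = src_h @ H.T
            err = np.linalg.norm(proj[:, :2] / proj[:, 2:3] - dst, axis=1)
            inliers = int((err < thresh).sum())            # consensus score
            if inliers > best_inliers:
                best_H, best_inliers = H, inliers
        return best_H, best_inliers

    rng = np.random.default_rng(1)
    src = rng.uniform(0, 100, (60, 2))
    H_true = np.array([[1.0, 0.1, 5.0], [-0.05, 0.9, 3.0], [1e-3, -5e-4, 1.0]])
    p = np.hstack([src, np.ones((60, 1))]) @ H_true.T
    dst = p[:, :2] / p[:, 2:3]
    dst[:10] += rng.uniform(-30, 30, (10, 2))              # inject gross outliers
    H, n_in = ransac_homography(src, dst)
    print(n_in, "inliers")                                  # ~50 of 60 expected
    ```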

  7. Direct Retrieval of Exterior Orientation Parameters Using A 2-D Projective Transformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seedahmed, Gamal H.

    2006-09-01

    Direct solutions are very attractive because they obviate the need for the initial approximations associated with non-linear solutions. The Direct Linear Transformation (DLT) has established itself as a method of choice for direct solutions in photogrammetry and other fields. The use of the DLT with coplanar object space points leads to a rank-deficient model. This rank-deficient model leaves the DLT defined only up to a 2-D projective transformation, which makes the direct retrieval of the exterior orientation parameters (EOPs) a non-trivial task. This paper presents a novel direct algorithm to retrieve the EOPs from the 2-D projective transformation. It is based on a direct relationship between the 2-D projective transformation and the collinearity model using a homogeneous coordinate representation. This representation offers a direct matrix correspondence between the 2-D projective transformation parameters and the collinearity model parameters, which lends itself to a direct matrix factorization to retrieve the EOPs. An important step in the proposed algorithm is a normalization process that provides the actual link between the 2-D projective transformation and the collinearity model. This paper explains the theoretical basis of the proposed algorithm as well as the necessary steps for its practical implementation. In addition, numerical examples are provided to demonstrate its validity.

  8. Computational nuclear quantum many-body problem: The UNEDF project

    NASA Astrophysics Data System (ADS)

    Bogner, S.; Bulgac, A.; Carlson, J.; Engel, J.; Fann, G.; Furnstahl, R. J.; Gandolfi, S.; Hagen, G.; Horoi, M.; Johnson, C.; Kortelainen, M.; Lusk, E.; Maris, P.; Nam, H.; Navratil, P.; Nazarewicz, W.; Ng, E.; Nobre, G. P. A.; Ormand, E.; Papenbrock, T.; Pei, J.; Pieper, S. C.; Quaglioni, S.; Roche, K. J.; Sarich, J.; Schunck, N.; Sosonkina, M.; Terasaki, J.; Thompson, I.; Vary, J. P.; Wild, S. M.

    2013-10-01

    The UNEDF project was a large-scale collaborative effort that applied high-performance computing to the nuclear quantum many-body problem. The primary focus of the project was on constructing, validating, and applying an optimized nuclear energy density functional, which entailed a wide range of pioneering developments in microscopic nuclear structure and reactions, algorithms, high-performance computing, and uncertainty quantification. UNEDF demonstrated that close associations among nuclear physicists, mathematicians, and computer scientists can lead to novel physics outcomes built on algorithmic innovations and computational developments. This review showcases a wide range of UNEDF science results to illustrate this interplay.

  9. Performance and policy dimensions in internet routing

    NASA Technical Reports Server (NTRS)

    Mills, David L.; Boncelet, Charles G.; Elias, John G.; Schragger, Paul A.; Jackson, Alden W.; Thyagarajan, Ajit

    1995-01-01

    The Internet Routing Project, referred to in this report as the 'Highball Project', has been investigating architectures suitable for networks spanning large geographic areas and capable of very high data rates. The Highball network architecture is based on a high-speed crossbar switch and an adaptive, distributed, TDMA scheduling algorithm. The scheduling algorithm controls the instantaneous configuration and dwell time of the switches, one of which is attached to each node. In order to send a single burst or a multi-burst packet, a reservation request is sent to all nodes. The scheduling algorithm then configures the switches immediately prior to the arrival of each burst, so it can be relayed immediately without requiring local storage. Reservations and housekeeping information are sent using a special broadcast-spanning-tree schedule. Progress to date in the Highball Project includes the design and testing of a suite of scheduling algorithms, construction of software reservation/scheduling simulators, and construction of a strawman hardware and software implementation. A prototype switch controller and timestamp generator have been completed and are in test. Detailed documentation on the algorithms, protocols and experiments conducted is given in the various reports and papers published. Abstracts of this literature are included in the bibliography at the end of this report, which serves as an extended executive summary.

  10. Real Time Intelligent Target Detection and Analysis with Machine Vision

    NASA Technical Reports Server (NTRS)

    Howard, Ayanna; Padgett, Curtis; Brown, Kenneth

    2000-01-01

    We present an algorithm for detecting a specified set of targets for an Automatic Target Recognition (ATR) application. ATR involves processing images for detecting, classifying, and tracking targets embedded in a background scene. We address the problem of discriminating between targets and nontarget objects in a scene by evaluating 40x40 image blocks belonging to an image. Each image block is first projected onto a set of templates specifically designed to separate images of targets embedded in a typical background scene from those background images without targets. These filters are found using directed principal component analysis which maximally separates the two groups. The projected images are then clustered into one of n classes based on a minimum distance to a set of n cluster prototypes. These cluster prototypes have previously been identified using a modified clustering algorithm based on prior sensed data. Each projected image pattern is then fed into the associated cluster's trained neural network for classification. A detailed description of our algorithm will be given in this paper. We outline our methodology for designing the templates, describe our modified clustering algorithm, and provide details on the neural network classifiers. Evaluation of the overall algorithm demonstrates that our detection rates approach 96% with a false positive rate of less than 0.03%.

  11. Inverse determination of the penalty parameter in penalized weighted least-squares algorithm for noise reduction of low-dose CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jing; Guan, Huaiqun; Solberg, Timothy

    2011-07-15

    Purpose: A statistical projection restoration algorithm based on the penalized weighted least-squares (PWLS) criterion can substantially improve the image quality of low-dose CBCT images. The performance of PWLS is largely dependent on the choice of the penalty parameter. Previously, the penalty parameter was chosen empirically by trial and error. In this work, the authors developed an inverse technique to calculate the penalty parameter in PWLS for noise suppression of low-dose CBCT in image-guided radiotherapy (IGRT). Methods: In IGRT, a daily CBCT is acquired for the same patient during a treatment course. In this work, the authors acquired the CBCT with a high-mAs protocol for the first session and a lower-mAs protocol for the subsequent sessions. The high-mAs projections served as the goal (ideal) toward which the low-mAs projections were to be smoothed by minimizing the PWLS objective function. The penalty parameter was determined through an inverse calculation of the derivative of the objective function incorporating both the high- and low-mAs projections. The parameter obtained can then be used in PWLS to smooth the noise in low-dose projections. CBCT projections for a CatPhan 600 and an anthropomorphic head phantom, as well as for a brain patient, were used to evaluate the performance of the proposed technique. Results: The penalty parameter in PWLS was obtained for each CBCT projection using the proposed strategy. The noise in the low-dose CBCT images reconstructed from the smoothed projections was greatly suppressed. Image quality in PWLS-processed low-dose CBCT was comparable to that of the corresponding high-dose CBCT. Conclusions: A technique was proposed to estimate the penalty parameter for the PWLS algorithm. It provides an objective and efficient way to obtain the penalty parameter for image restoration algorithms that require predefined smoothing parameters.
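
    The PWLS objective itself is easy to state on a toy 1-D signal: a weighted data-fidelity term plus beta times a roughness penalty, where beta plays the role of the penalty parameter the paper estimates. The sketch below solves this quadratic problem in closed form purely to show how beta trades noise against smoothing; it is not the authors' projection-domain implementation or their inverse estimation technique.

    ```python
    import numpy as np

    def pwls_denoise(y, weights, beta):
        """Minimize (x - y)' W (x - y) + beta * ||D x||^2 with D = first differences."""
        n = len(y)
        W = np.diag(weights)                 # weights ~ inverse noise variances
        D = np.diff(np.eye(n), axis=0)       # (n-1, n) finite-difference operator
        return np.linalg.solve(W + beta * D.T @ D, W @ y)

    rng = np.random.default_rng(0)
    clean = np.sin(np.linspace(0, 3 * np.pi, 200))
    noisy = clean + 0.3 * rng.normal(size=200)
    for beta in (0.1, 1.0, 10.0):            # larger beta -> smoother estimate
        x = pwls_denoise(noisy, np.ones(200), beta)
        print(beta, float(np.mean((x - clean) ** 2)))
    ```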

  12. Improved adaptive genetic algorithm with sparsity constraint applied to thermal neutron CT reconstruction of two-phase flow

    NASA Astrophysics Data System (ADS)

    Yan, Mingfei; Hu, Huasi; Otake, Yoshie; Taketani, Atsushi; Wakabayashi, Yasuo; Yanagimachi, Shinzo; Wang, Sheng; Pan, Ziheng; Hu, Guang

    2018-05-01

    Thermal neutron computed tomography (CT) is a useful tool for visualizing two-phase flow due to its high imaging contrast and the strong penetrability of neutrons through tube walls constructed of metallic material. A novel approach for two-phase flow CT reconstruction based on an improved adaptive genetic algorithm with sparsity constraint (IAGA-SC) is proposed in this paper. In the algorithm, a neighborhood mutation operator is used to ensure the continuity of the reconstructed object, and the adaptive crossover probability Pc and mutation probability Pm are improved to help the adaptive genetic algorithm (AGA) reach the global optimum. The reconstructed results for projection data obtained from Monte Carlo simulation indicate that the comprehensive performance of the IAGA-SC algorithm exceeds that of the adaptive steepest descent-projection onto convex sets (ASD-POCS) algorithm in restoring typical and complex flow regimes. It shows particular advantages in restoring simply connected flow regimes and the shape of objects. In addition, a CT experiment on two-phase flow phantoms was conducted on an accelerator-driven neutron source to verify the performance of the developed IAGA-SC algorithm.

  13. Testing Algorithmic Skills in Traditional and Non-Traditional Programming Environments

    ERIC Educational Resources Information Center

    Csernoch, Mária; Biró, Piroska; Máth, János; Abari, Kálmán

    2015-01-01

    The Testing Algorithmic and Application Skills (TAaAS) project was launched in the 2011/2012 academic year to test first year students of Informatics, focusing on their algorithmic skills in traditional and non-traditional programming environments, and on the transference of their knowledge of Informatics from secondary to tertiary education. The…

  14. The ESA Cloud CCI project: Generation of Multi Sensor consistent Cloud Properties with an Optimal Estimation Based Retrieval Algorithm

    NASA Astrophysics Data System (ADS)

    Jerg, M.; Stengel, M.; Hollmann, R.; Poulsen, C.

    2012-04-01

    The ultimate objective of the ESA Climate Change Initiative (CCI) Cloud project is to provide long-term coherent cloud property data sets exploiting and improving on the synergetic capabilities of past, existing, and upcoming European and American satellite missions. The synergetic approach allows not only for improved accuracy and extended temporal and spatial sampling of retrieved cloud properties beyond what single instruments provide, but potentially also for improved (inter-)calibration and enhanced homogeneity and stability of the derived time series. Such advances are required by the scientific community to facilitate further progress in satellite-based climate monitoring, which leads to a better understanding of climate. The primary objectives of ESA Cloud CCI include (1) the development of inter-calibrated radiance data sets, so-called Fundamental Climate Data Records, for ESA and non-ESA instruments through an international collaboration; (2) the development of an optimal estimation based retrieval framework for cloud-related essential climate variables such as cloud cover, cloud top height and temperature, and liquid and ice water path; and (3) the development of two multi-annual global data sets for the mentioned cloud properties, including uncertainty estimates. These two data sets are characterized by different combinations of satellite systems: the AVHRR heritage product comprising (A)ATSR, AVHRR and MODIS, and the novel (A)ATSR-MERIS product, which is based on a synergetic retrieval using both instruments. Both data sets cover the years 2007-2009 in the first project phase. ESA Cloud CCI will also carry out a comprehensive validation of the cloud property products and provide a common database, as in the framework of the Global Energy and Water Cycle Experiment (GEWEX). The presentation will give an overview of the ESA Cloud CCI project, its goals and approaches, and then continue with results from the Round Robin algorithm comparison exercise carried out at the beginning of the project, which included three algorithms. The purpose of the exercise was to assess and compare existing cloud retrieval algorithms in order to choose one of them as the backbone of the retrieval system, and also to identify areas of potential improvement and general strengths and weaknesses of the algorithms. Furthermore, the presentation will elaborate on the optimal estimation algorithm subsequently chosen, which is presently being developed further and will be employed for the AVHRR heritage product. The algorithm's capability to coherently and simultaneously process all radiative input and yield retrieval parameters together with associated uncertainty estimates will be presented, together with first results for the heritage product. In the course of the project, the algorithm is being developed into a freely and publicly available community retrieval system for interested scientists.

  15. A new pivoting and iterative text detection algorithm for biomedical images.

    PubMed

    Xu, Songhua; Krauthammer, Michael

    2010-12-01

    There is interest in expanding the reach of literature mining to include the analysis of biomedical images, which often contain a paper's key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating its performance on a set of manually labeled random biomedical images, and we compare the performance against other state-of-the-art text detection algorithms. We demonstrate that our projection histogram-based text detection approach is well suited for text detection in biomedical images, and that iterative application of the algorithm boosts performance to an F score of 0.60. We provide a C++ implementation of our algorithm, freely available for academic use. Copyright © 2010 Elsevier Inc. All rights reserved.
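
    The core of a projection-histogram text detector is splitting a binary image into bands wherever the per-row (or per-column) ink count drops to zero. The toy sketch below performs one such split; the paper's method applies this idea iteratively with pivoting and further heuristics, which are omitted here, and the thresholds are arbitrary.

    ```python
    import numpy as np

    def split_by_projection(binary, axis=0, min_gap=2, min_run=3):
        """Return (start, end) bands of ink along `axis` of a 0/1 image,
        found from its projection histogram (foreground count per line)."""
        hist = binary.sum(axis=1 - axis)
        runs, start = [], None
        for i, v in enumerate(hist):
            if v > 0 and start is None:
                start = i                      # a band of ink begins
            elif v == 0 and start is not None:
                if i - start >= min_run:
                    runs.append((start, i))    # a band of ink ends
                start = None
        if start is not None:
            runs.append((start, len(hist)))
        merged = []                            # merge bands split by tiny gaps
        for s, e in runs:
            if merged and s - merged[-1][1] < min_gap:
                merged[-1] = (merged[-1][0], e)
            else:
                merged.append((s, e))
        return merged

    # Toy "image" with two horizontal text lines of random ink.
    rng = np.random.default_rng(0)
    img = np.zeros((40, 100), dtype=int)
    img[5:12, 10:90] = rng.integers(0, 2, (7, 80))
    img[20:28, 5:95] = rng.integers(0, 2, (8, 90))
    print(split_by_projection(img))            # -> roughly [(5, 12), (20, 28)]
    ```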

  16. Metal-induced streak artifact reduction using iterative reconstruction algorithms in x-ray computed tomography image of the dentoalveolar region.

    PubMed

    Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia

    2013-02-01

    The objective of this study was to reduce metal-induced streak artifacts in oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data as an artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization (OS-EM) algorithm was examined. Small region of interest (ROI) settings and reverse processing were also applied to improve performance. Both algorithms reduced artifacts while slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriment. Sequential and reverse processing did not show apparent effects. The two alternative iterative reconstruction methods were effective for artifact reduction, and the OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.

  17. On-Line Point Positioning with Single Frame Camera Data

    DTIC Science & Technology

    1992-03-15

    ...tion algorithms and methods will be found in robotics and industrial quality control. 1. Project data: The project has been defined as "On-line point... development and use of the OLT algorithms and methods for applications in robotics, industrial quality control and autonomous vehicle navigation... Of particular interest in robotics and autonomous vehicle navigation is, for example, the task of determining the position and orientation of a mobile...

  18. Fast projection/backprojection and incremental methods applied to synchrotron light tomographic reconstruction.

    PubMed

    de Lima, Camila; Salomão Helou, Elias

    2018-01-01

    Iterative methods for tomographic image reconstruction have the computational cost of each iteration dominated by the computation of the (back)projection operator, which takes roughly O(N³) floating point operations (flops) for N × N pixel images. Furthermore, classical iterative algorithms may take too many iterations to achieve acceptable images, thereby making these techniques impractical for high-resolution images. Techniques have been developed in the literature to reduce the computational cost of the (back)projection operator to O(N² log N) flops. Also, incremental algorithms have been devised that reduce by an order of magnitude the number of iterations required to achieve acceptable images. The present paper introduces an incremental algorithm with a cost of O(N² log N) flops per iteration and applies it to the reconstruction of very large tomographic images obtained from synchrotron light illuminated data.

  19. TVR-DART: A More Robust Algorithm for Discrete Tomography From Limited Projection Data With Automated Gray Value Estimation.

    PubMed

    Xiaodong Zhuge; Palenstijn, Willem Jan; Batenburg, Kees Joost

    2016-01-01

    In this paper, we present a novel iterative reconstruction algorithm for discrete tomography (DT) named total variation regularized discrete algebraic reconstruction technique (TVR-DART) with automated gray value estimation. This algorithm is more robust and automated than the original DART algorithm, and is aimed at imaging of objects consisting of only a few different material compositions, each corresponding to a different gray value in the reconstruction. By exploiting two types of prior knowledge of the scanned object simultaneously, TVR-DART solves the discrete reconstruction problem within an optimization framework inspired by compressive sensing to steer the current reconstruction toward a solution with the specified number of discrete gray values. The gray values and the thresholds are estimated as the reconstruction improves through iterations. Extensive experiments on simulated data, experimental μCT, and electron tomography data sets show that TVR-DART is capable of providing more accurate reconstruction than existing algorithms under noisy conditions from a small number of projection images and/or from a small angular range. Furthermore, the new algorithm requires less effort in parameter tuning compared with the original DART algorithm. With TVR-DART, we aim to provide the tomography community with an easy-to-use and robust algorithm for DT.

  20. MutAid: Sanger and NGS Based Integrated Pipeline for Mutation Identification, Validation and Annotation in Human Molecular Genetics.

    PubMed

    Pandey, Ram Vinay; Pabinger, Stephan; Kriegner, Albert; Weinhäusel, Andreas

    2016-01-01

    Both traditional Sanger sequencing and Next-Generation Sequencing have been used for the identification of disease-causing mutations in human molecular research. The majority of currently available tools are developed for research and explorative purposes and often do not provide a complete, efficient, one-stop solution. As the focus of currently developed tools is mainly on NGS data analysis, no integrative solution for the analysis of Sanger data is provided, and consequently a one-stop solution to analyze reads from both sequencing platforms is not available. We have therefore developed a new pipeline called MutAid to analyze and interpret raw sequencing data produced by Sanger or several NGS sequencing platforms. It performs format conversion, base calling, quality trimming, filtering, read mapping, variant calling, variant annotation and analysis of Sanger and NGS data under a single platform. It is capable of analyzing reads from multiple patients in a single run to create a list of potential disease-causing base substitutions as well as insertions and deletions. MutAid has been developed for expert and non-expert users and supports four sequencing platforms including Sanger, Illumina, 454 and Ion Torrent. Furthermore, for NGS data analysis, five read mappers including BWA, TMAP, Bowtie, Bowtie2 and GSNAP and four variant callers including GATK-HaplotypeCaller, SAMTOOLS, Freebayes and VarScan2 are supported. MutAid is freely available at https://sourceforge.net/projects/mutaid.

  1. MutAid: Sanger and NGS Based Integrated Pipeline for Mutation Identification, Validation and Annotation in Human Molecular Genetics

    PubMed Central

    Pandey, Ram Vinay; Pabinger, Stephan; Kriegner, Albert; Weinhäusel, Andreas

    2016-01-01

    Both traditional Sanger sequencing and Next-Generation Sequencing have been used for the identification of disease-causing mutations in human molecular research. The majority of currently available tools are developed for research and explorative purposes and often do not provide a complete, efficient, one-stop solution. As the focus of currently developed tools is mainly on NGS data analysis, no integrative solution for the analysis of Sanger data is provided, and consequently a one-stop solution to analyze reads from both sequencing platforms is not available. We have therefore developed a new pipeline called MutAid to analyze and interpret raw sequencing data produced by Sanger or several NGS sequencing platforms. It performs format conversion, base calling, quality trimming, filtering, read mapping, variant calling, variant annotation and analysis of Sanger and NGS data under a single platform. It is capable of analyzing reads from multiple patients in a single run to create a list of potential disease-causing base substitutions as well as insertions and deletions. MutAid has been developed for expert and non-expert users and supports four sequencing platforms including Sanger, Illumina, 454 and Ion Torrent. Furthermore, for NGS data analysis, five read mappers including BWA, TMAP, Bowtie, Bowtie2 and GSNAP and four variant callers including GATK-HaplotypeCaller, SAMTOOLS, Freebayes and VarScan2 are supported. MutAid is freely available at https://sourceforge.net/projects/mutaid. PMID:26840129

  2. Data-Parallel Algorithm for Contour Tree Construction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sewell, Christopher Meyer; Ahrens, James Paul; Carr, Hamish

    2017-01-19

    The goal of this project is to develop algorithms for additional visualization and analysis filters in order to expand the functionality of the VTK-m toolkit to support less critical but commonly used operators.

  3. Multi-label classification of chronically ill patients with bag of words and supervised dimensionality reduction algorithms.

    PubMed

    Bromuri, Stefano; Zufferey, Damien; Hennebert, Jean; Schumacher, Michael

    2014-10-01

    This research is motivated by the issue of classifying illnesses of chronically ill patients for decision support in clinical settings. Our main objective is to propose multi-label classification of multivariate time series contained in medical records of chronically ill patients, by means of quantization methods, such as bag of words (BoW), and multi-label classification algorithms. Our second objective is to compare supervised dimensionality reduction techniques to state-of-the-art multi-label classification algorithms. The hypothesis is that kernel methods and locality preserving projections make such algorithms good candidates to study multi-label medical time series. We combine BoW and supervised dimensionality reduction algorithms to perform multi-label classification on health records of chronically ill patients. The considered algorithms are compared with state-of-the-art multi-label classifiers on two real-world datasets. The Portavita dataset contains 525 diabetes type 2 (DT2) patients, with co-morbidities of DT2 such as hypertension, dyslipidemia, and microvascular or macrovascular issues. The MIMIC II dataset contains 2635 patients affected by thyroid disease, diabetes mellitus, lipoid metabolism disease, fluid electrolyte disease, hypertensive disease, thrombosis, hypotension, chronic obstructive pulmonary disease (COPD), liver disease and kidney disease. The algorithms are evaluated using multi-label evaluation metrics such as Hamming loss, one error, coverage, ranking loss, and average precision. Non-linear dimensionality reduction approaches behave well on medical time series quantized using the BoW algorithm, with results comparable to state-of-the-art multi-label classification algorithms. Chaining the projected features has a positive impact on the performance of the algorithm with respect to pure binary relevance approaches. The evaluation highlights the feasibility of representing medical health records using the BoW for multi-label classification tasks. The study also highlights that dimensionality reduction algorithms based on kernel methods, locality preserving projections or both are good candidates to deal with multi-label classification tasks in medical time series with many missing values and high label density. Copyright © 2014 Elsevier Inc. All rights reserved.
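
    As a minimal sketch of the BoW quantization step for one multivariate time series (window length, stride, and codebook size are illustrative assumptions, not the paper's settings), sliding windows are clustered into a codebook and the record is represented by its normalized histogram of codeword assignments:

        # Illustrative bag-of-words quantization of a (T, d) time series.
        import numpy as np
        from sklearn.cluster import KMeans

        def bow_features(series, win=16, step=8, n_words=32, seed=0):
            """Return a normalized histogram of `n_words` codeword
            assignments over sliding windows ("words") of the series."""
            windows = np.array([series[i:i + win].ravel()
                                for i in range(0, len(series) - win + 1, step)])
            codebook = KMeans(n_clusters=n_words, random_state=seed, n_init=10)
            words = codebook.fit_predict(windows)
            hist = np.bincount(words, minlength=n_words).astype(float)
            return hist / hist.sum()

    In practice the codebook would be fitted on training windows only and then applied unchanged to held-out records before the multi-label classifier is trained.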

  4. Distance majorization and its applications.

    PubMed

    Chi, Eric C; Zhou, Hua; Lange, Kenneth

    2014-08-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton's method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications.
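
    A minimal numerical sketch of the idea, assuming a quadratic objective and an illustrative penalty schedule: the squared distance to each constraint set is majorized at the current iterate by the squared distance to that iterate's projection, so each iteration reduces to a cheap gradient step.

        # Distance-majorization sketch: minimize a smooth convex f over an
        # intersection of sets with easy projections (penalty growth rate
        # and iteration count are illustrative assumptions).
        import numpy as np

        def distance_majorize(grad_f, projections, x0, rho=1.0, lip_f=1.0,
                              n_iter=500):
            x = x0.copy()
            m = len(projections)
            for _ in range(n_iter):
                anchors = [P(x) for P in projections]  # majorize dist^2(x, C_i)
                g = grad_f(x) + rho * sum(x - a for a in anchors)
                x = x - g / (lip_f + rho * m)  # 1/Lipschitz step for majorizer
                rho *= 1.01                    # slowly tighten the penalty
            return x

        # Example: minimize ||x - z||^2 / 2 over the intersection of the unit
        # ball and the nonnegative orthant; iterates approach (1, 0).
        z = np.array([2.0, -1.0])
        proj_ball = lambda x: x / max(1.0, np.linalg.norm(x))
        proj_orthant = lambda x: np.maximum(x, 0.0)
        x_star = distance_majorize(lambda x: x - z, [proj_ball, proj_orthant],
                                   np.zeros(2))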

  5. Low dose reconstruction algorithm for differential phase contrast imaging.

    PubMed

    Wang, Zhentian; Huang, Zhifeng; Zhang, Li; Chen, Zhiqiang; Kang, Kejun; Yin, Hongxia; Wang, Zhenchang; Stampanoni, Marco

    2011-01-01

    Differential phase contrast imaging computed tomography (DPCI-CT) is a novel x-ray inspection method to reconstruct the distribution of the refraction index rather than the attenuation coefficient in weakly absorbing samples. In this paper, we propose an iterative reconstruction algorithm for DPCI-CT which benefits from recent compressed sensing theory. We first realize a differential algebraic reconstruction technique (DART) by discretizing the projection process of differential phase contrast imaging into a linear partial derivative matrix. In this way the compressed sensing reconstruction problem of DPCI can be transformed into a solved problem in transmission CT imaging. Our algorithm has the potential to reconstruct the refraction index distribution of the sample from highly undersampled projection data, and thus can significantly reduce the dose and inspection time. The proposed algorithm has been validated by numerical simulations and actual experiments.

  6. ODTbrain: a Python library for full-view, dense diffraction tomography.

    PubMed

    Müller, Paul; Schürmann, Mirjam; Guck, Jochen

    2015-11-04

    Analyzing the three-dimensional (3D) refractive index distribution of a single cell makes it possible to describe and characterize its inner structure in a marker-free manner. A dense, full-view tomographic data set is a set of images of a cell acquired for multiple rotational positions, densely distributed from 0 to 360 degrees. The reconstruction is commonly realized by projection tomography, which is based on the inversion of the Radon transform. The reconstruction quality of projection tomography is greatly improved when first order scattering, which becomes relevant when the imaging wavelength is comparable to the characteristic object size, is taken into account. This advanced reconstruction technique is called diffraction tomography. While many implementations of projection tomography are available today, there is so far no publicly available implementation of diffraction tomography. We present a Python library that implements the backpropagation algorithm for diffraction tomography in 3D. By establishing benchmarks based on finite-difference time-domain (FDTD) simulations, we showcase the superiority of the backpropagation algorithm over the backprojection algorithm. Furthermore, we discuss how measurement parameters influence the reconstructed refractive index distribution, and we also give insights into the applicability of diffraction tomography to biological cells. The present software library contains a robust implementation of the backpropagation algorithm. The algorithm is ideally suited for application to biological cells. Furthermore, the implementation is a drop-in replacement for the classical backprojection algorithm and is made available to the large user community of the Python programming language.

  7. Modeling and new equipment definition for the vibration isolation box equipment system

    NASA Technical Reports Server (NTRS)

    Sani, Robert L.

    1993-01-01

    Our MSAD-funded research project is to provide numerical modeling support for VIBES (Vibration Isolation Box Experiment System), an IML2 flight experiment being built by the Japanese research team of Dr. H. Azuma of the Japanese National Aerospace Laboratory. During this reporting period, the following have been accomplished: A semi-consistent mass finite element projection algorithm for 2D and 3D Boussinesq flows has been implemented on Sun, HP, and Cray platforms. The algorithm has better phase speed accuracy than similar finite difference or lumped mass finite element algorithms, an attribute which is essential for addressing realistic g-jitter effects as well as convectively dominated transient systems. The projection algorithm has been benchmarked against solutions generated via the commercial code FIDAP, and it appears to be accurate as well as computationally efficient. Optimization and potential parallelization studies are underway; our implementation to date has focused on execution of the basic algorithm with at most a concern for vectorization. The initial time-varying gravity Boussinesq flow simulation is being set up: the mesh is being designed and the input file is being generated. Some preliminary 'small mesh' cases will be attempted on our HP9000/735 while our request to MSAD for supercomputing resources is being addressed. The Japanese research team for VIBES was visited, the current setup and status of the physical experiment were reviewed, and an ongoing e-mail communication link was established.

  8. Mean-variance analysis of block-iterative reconstruction algorithms modeling 3D detector response in SPECT

    NASA Astrophysics Data System (ADS)

    Lalush, D. S.; Tsui, B. M. W.

    1998-06-01

    We study the statistical convergence properties of two fast iterative reconstruction algorithms, the rescaled block-iterative (RBI) and ordered subset (OS) EM algorithms, in the context of cardiac SPECT with 3D detector response modeling. The Monte Carlo method was used to generate nearly noise-free projection data modeling the effects of attenuation, detector response, and scatter from the MCAT phantom. One thousand noise realizations were generated with an average count level approximating a typical Tl-201 cardiac study. Each noise realization was reconstructed using the RBI and OS algorithms for cases with and without detector response modeling. For each iteration up to twenty, we generated mean and variance images, as well as covariance images for six specific locations. Both OS and RBI converged in the mean to results that were close to the noise-free ML-EM result using the same projection model. When detector response was not modeled in the reconstruction, RBI exhibited considerably lower noise variance than OS for the same resolution. When 3D detector response was modeled, RBI-EM provided a small improvement in the tradeoff between noise level and resolution recovery, primarily in the axial direction, while OS required about half the number of iterations of RBI to reach the same resolution. We conclude that OS is faster than RBI, but may be sensitive to errors in the projection model. Both OS-EM and RBI-EM are effective alternatives to the ML-EM algorithm, but noise level and speed of convergence depend on the projection model used.

  9. An Efficient Distributed Compressed Sensing Algorithm for Decentralized Sensor Network.

    PubMed

    Liu, Jing; Huang, Kaiyu; Zhang, Guoxian

    2017-04-20

    We consider the joint sparsity Model 1 (JSM-1) in a decentralized scenario, where a number of sensors are connected through a network and there is no fusion center. A novel algorithm, named distributed compact sensing matrix pursuit (DCSMP), is proposed to exploit the computational and communication capabilities of the sensor nodes. In contrast to conventional distributed compressed sensing algorithms adopting a random sensing matrix, the proposed algorithm focuses on deterministic sensing matrices built directly on real acquisition systems. The proposed DCSMP algorithm can be divided into two independent parts, the common and innovation support set estimation processes. The goal of the common support set estimation process is to obtain an estimated common support set by fusing the candidate support set information from an individual node and its neighboring nodes. In the following innovation support set estimation process, the measurement vector is projected into a subspace that is perpendicular to the subspace spanned by the columns indexed by the estimated common support set, to remove the impact of the estimated common support set. We can then search the innovation support set using an orthogonal matching pursuit (OMP) algorithm based on the projected measurement vector and projected sensing matrix. In the proposed DCSMP algorithm, the process of estimating the common component/support set is decoupled from that of estimating the innovation component/support set. Thus, an inaccurately estimated common support set has no impact on estimating the innovation support set. It is proven that, provided the estimated common support set contains the true common support set, the proposed algorithm finds the true innovation set correctly. Moreover, since the innovation support set estimation process is independent of the common support set estimation process, there is no requirement on the cardinality of either set; thus, the proposed DCSMP algorithm is capable of tackling the unknown sparsity problem successfully.
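
    The two-stage structure can be sketched as follows (a generic orthogonal-complement projection followed by plain OMP; shapes and tolerances are illustrative, and the distributed fusion of candidate supports is omitted):

        # Project the common-support contribution out, then run OMP.
        import numpy as np

        def project_out(A, y, common_support):
            """Project y and the columns of A onto the orthogonal complement
            of the span of the columns indexed by common_support."""
            B = A[:, common_support]
            P = np.eye(A.shape[0]) - B @ np.linalg.pinv(B)  # I - B B^+
            return P @ y, P @ A

        def omp(A, y, k, tol=1e-8):
            """Plain orthogonal matching pursuit: greedily pick k columns."""
            residual, support = y.copy(), []
            for _ in range(k):
                scores = np.abs(A.T @ residual)
                scores[support] = 0.0          # never re-pick a column
                support.append(int(np.argmax(scores)))
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef
                if np.linalg.norm(residual) < tol:
                    break
            return sorted(support)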

  10. Methodological aspects of crossover and maximum fat-oxidation rate point determination.

    PubMed

    Michallet, A-S; Tonini, J; Regnier, J; Guinot, M; Favre-Juvin, A; Bricout, V; Halimi, S; Wuyam, B; Flore, P

    2008-11-01

    Indirect calorimetry during exercise provides two metabolic indices of substrate oxidation balance: the crossover point (COP) and maximum fat oxidation rate (LIPOXmax). We aimed to study the effects of the analytical device, protocol type and ventilatory response on variability of these indices, and the relationship with lactate and ventilation thresholds. After maximum exercise testing, 14 relatively fit subjects (aged 32+/-10 years; nine men, five women) performed three submaximum graded tests: one was based on a theoretical maximum power (tMAP) reference; and two were based on the true maximum aerobic power (MAP). Gas exchange was measured concomitantly using a Douglas bag (D) and an ergospirometer (E). All metabolic indices were interpretable only when obtained by the D reference method and MAP protocol. Bland and Altman analysis showed overestimation of both indices with E versus D. Despite no mean differences between COP and LIPOXmax whether tMAP or MAP was used, the individual data clearly showed disagreement between the two protocols. Ventilation explained 10-16% of the metabolic index variations. COP was correlated with ventilation (r=0.96, P<0.01) and the rate of increase in blood lactate (r=0.79, P<0.01), and LIPOXmax correlated with the ventilation threshold (r=0.95, P<0.01). This study shows that, in fit healthy subjects, the analytical device, reference used to build the protocol and ventilation responses affect metabolic indices. In this population, and particularly to obtain interpretable metabolic indices, we recommend a protocol based on the true MAP or one adapted to include the transition from fat to carbohydrate. The correlation between metabolic indices and lactate/ventilation thresholds suggests that shorter, classical maximum progressive exercise testing may be an alternative means of estimating these indices in relatively fit subjects. However, this needs to be confirmed in patients who have metabolic defects.

  11. The effect of polymer size and charge of molecules on permeation through synovial membrane and accumulation in hyaline articular cartilage.

    PubMed

    Sterner, B; Harms, M; Wöll, S; Weigandt, M; Windbergs, M; Lehr, C M

    2016-04-01

    The treatment of joint related diseases often involves direct intra-articular injections. For rational development of novel delivery systems with extended residence time in the joint, detailed understanding of transport and retention phenomena within the joint is mandatory. This work presents a systematic study on the in vitro permeation, penetration and accumulation of model polymers with differing charges and molecular weights in bovine joint tissue. Permeation experiments with bovine synovial membrane were performed with PEG polymers (6-200 kDa) and methylene blue in customized diffusion chambers. For polyethylene glycol, 2-fold (PEG 6 kDa), 3-fold (PEG 10 kDa) and 13-fold (PEG 35 kDa) retention by the synovial membrane in reference to the small molecule methylene blue was demonstrated. No PEG 200 kDa was found in the acceptor in detectable amounts after 48 h. This showed the potential for a distinct extension of joint residence times by increasing molecular weights. In addition, experiments with bovine cartilage tissue were conducted. The ability for positively charged, high molecular weight chitosans and HEMA-Co-TMAP (HCT) polymers (up to 233 kDa) to distribute throughout the entire cartilage matrix was demonstrated. In contrast, a distribution into cartilage was not observed for neutral PEG polymers (6-200 kDa). Furthermore, the positive charge density of different compounds (chitosan, HEMA-Co-TMAP, methylene blue, MSC C1 (neutral NCE) and MSC D1 (positively charged NCE)) was found to correlate with their accumulation in bovine cartilage tissue. In summary, the results offer pre-clinical in vitro data, indicating that the modification of molecular size and charge of a substance has the potential to decelerate its clearance through the synovial membrane and to promote accumulation inside the cartilage matrix. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Modelling deuterium release during thermal desorption of D⁺-irradiated tungsten

    NASA Astrophysics Data System (ADS)

    Poon, M.; Haasz, A. A.; Davis, J. W.

    2008-03-01

    Thermal desorption profiles were modelled based on SIMS measurements of implantation profiles and using the multi-trap diffusion code TMAP7 [G.R. Longhurst, TMAP7: Tritium Migration Analysis Program, User Manual, Idaho National Laboratory, INEEL/EXT-04-02352 (2004)]. The thermal desorption profiles were the result of 500 eV/D⁺ irradiations on single crystal tungsten at 300 and 500 K to fluences of 10²²-10²⁴ D⁺/m². SIMS depth profiling was performed after irradiation to obtain the distribution of trapped D within the top 60 nm of the surface. Thermal desorption spectroscopy (TDS) was performed subsequently to obtain desorption profiles and to extract the total trapped D inventory. The SIMS profiles were calibrated to give D concentrations. To account for the total trapped D inventory measured by TDS, SIMS depth distributions were used in the near-surface region (surface to 30 nm), NRA measurements [V.Kh. Alimov, J. Roth, M. Mayer, J. Nucl. Mater. 337-339 (2005) 619] were used in the range 1-7 μm, and a linear drop in the D distribution was assumed in the intermediate sub-surface region (~30 nm to 1 μm). Traps were assumed to be saturated so that the D distribution also represented the trap distribution. Three trap energies, 1.07 ± 0.03, 1.34 ± 0.03 and 2.1 ± 0.05 eV, were required to model the 520, 640 and 900 K desorption peaks, respectively. The 1.34 and 1.07 eV traps correspond to trapping of a first and second D atom at a vacancy, respectively, while the 2.1 eV trap corresponds to atomic D trapping at a void. A fourth trap energy of 0.65 eV was used to fit the 400 K desorption peak observed by Quastel et al. [A.D. Quastel, J.W. Davis, A.A. Haasz, R.G. Macaulay-Newcombe, J. Nucl. Mater. 359 (2006) 8].

  13. Model reference adaptive control of robots

    NASA Technical Reports Server (NTRS)

    Steinvorth, Rodrigo

    1991-01-01

    This project presents the results of controlling two types of robots using new Command Generator Tracker (CGT) based Direct Model Reference Adaptive Control (MRAC) algorithms. Two mathematical models were used to represent a single-link, flexible joint arm and a Unimation PUMA 560 arm; and these were then controlled in simulation using different MRAC algorithms. Special attention was given to the performance of the algorithms in the presence of sudden changes in the robot load. Previously used CGT based MRAC algorithms had several problems. The original algorithm that was developed guaranteed asymptotic stability only for almost strictly positive real (ASPR) plants. This condition is very restrictive, since most systems do not satisfy this assumption. Further developments to the algorithm led to an expansion of the number of plants that could be controlled, however, a steady state error was introduced in the response. These problems led to the introduction of some modifications to the algorithms so that they would be able to control a wider class of plants and at the same time would asymptotically track the reference model. This project presents the development of two algorithms that achieve the desired results and simulates the control of the two robots mentioned before. The results of the simulations are satisfactory and show that the problems stated above have been corrected in the new algorithms. In addition, the responses obtained show that the adaptively controlled processes are resistant to sudden changes in the load.

  14. A Node Linkage Approach for Sequential Pattern Mining

    PubMed Central

    Navarro, Osvaldo; Cumplido, René; Villaseñor-Pineda, Luis; Feregrino-Uribe, Claudia; Carrasco-Ochoa, Jesús Ariel

    2014-01-01

    Sequential Pattern Mining is a widely addressed problem in data mining, with applications such as analyzing Web usage, examining purchase behavior, and text mining, among others. Nevertheless, with the dramatic increase in data volume, the current approaches prove inefficient when dealing with large input datasets, a large number of different symbols and low minimum supports. In this paper, we propose a new sequential pattern mining algorithm, which follows a pattern-growth scheme to discover sequential patterns. Unlike most pattern growth algorithms, our approach does not build a data structure to represent the input dataset, but instead accesses the required sequences through pseudo-projection databases, achieving better runtime and reducing memory requirements. Our algorithm traverses the search space in a depth-first fashion and only preserves in memory a pattern node linkage and the pseudo-projections required for the branch being explored at the time. Experimental results show that our new approach, the Node Linkage Depth-First Traversal algorithm (NLDFT), has better performance and scalability in comparison with state of the art algorithms. PMID:24933123
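
    A compact sketch of the pseudo-projection idea in a PrefixSpan-style miner follows (the NLDFT node-linkage bookkeeping itself is not reproduced here): a projected database is merely a list of (sequence, offset) pointers, so no suffix copies are materialized.

        # PrefixSpan-style mining with pseudo-projections (illustrative).
        def prefixspan(db, min_support):
            results = []

            def mine(prefix, pointers):
                # Count, per symbol, how many pointed-to suffixes contain it.
                counts = {}
                for sid, off in pointers:
                    for sym in set(db[sid][off:]):
                        counts[sym] = counts.get(sym, 0) + 1
                for sym, cnt in counts.items():
                    if cnt < min_support:
                        continue
                    results.append((prefix + [sym], cnt))
                    # Pseudo-project: advance each pointer past the match.
                    nxt = []
                    for sid, off in pointers:
                        try:
                            nxt.append((sid, db[sid].index(sym, off) + 1))
                        except ValueError:
                            pass
                    mine(prefix + [sym], nxt)

            mine([], [(i, 0) for i in range(len(db))])
            return results

        # e.g. prefixspan([['a','b','c'], ['a','c'], ['b','c']], 2) yields
        # patterns such as (['a'], 2), (['a','c'], 2), (['b','c'], 2), (['c'], 3)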

  15. Detection of Nitrogen Content in Rubber Leaves Using Near-Infrared (NIR) Spectroscopy with Correlation-Based Successive Projections Algorithm (SPA).

    PubMed

    Tang, Rongnian; Chen, Xupeng; Li, Chuang

    2018-05-01

    Near-infrared spectroscopy is an efficient, low-cost technology that has potential as an accurate method for detecting the nitrogen content of natural rubber leaves. The successive projections algorithm (SPA) is a widely used variable selection method for multivariate calibration, which uses projection operations to select a variable subset with minimum multi-collinearity. However, due to the fluctuation of correlation between variables, high collinearity may still exist among non-adjacent variables of the subset obtained by basic SPA. Based on an analysis of the correlation matrix of the spectral data, this paper proposes a correlation-based SPA (CB-SPA) that applies the successive projections algorithm within regions of consistent correlation. The results show that CB-SPA can select variable subsets with more valuable variables and less multi-collinearity. Meanwhile, models established with the CB-SPA subset outperform basic SPA subsets in predicting nitrogen content in terms of both cross-validation and external prediction. Moreover, CB-SPA is more efficient, for the time cost of its selection procedure is one-twelfth that of the basic SPA.
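
    The projection loop of basic SPA can be sketched as follows (starting wavelength and subset size are illustrative; CB-SPA as described above would additionally restrict the candidate columns to regions of consistent correlation):

        # Core projection loop of the (basic) successive projections algorithm.
        import numpy as np

        def spa(X, k, start=0):
            """X: (n_samples, n_wavelengths) spectra; returns indices of k
            columns selected to have minimal mutual collinearity."""
            Xp = X.astype(float).copy()
            selected = [start]
            for _ in range(k - 1):
                v = Xp[:, selected[-1]]
                # Project every column onto the orthogonal complement of v.
                Xp = Xp - np.outer(v, v @ Xp) / (v @ v)
                norms = np.linalg.norm(Xp, axis=0)
                norms[selected] = -1.0    # never re-pick a chosen column
                selected.append(int(np.argmax(norms)))
            return selected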

  16. Beam hardening correction in CT myocardial perfusion measurement

    NASA Astrophysics Data System (ADS)

    So, Aaron; Hsieh, Jiang; Li, Jian-Ying; Lee, Ting-Yim

    2009-05-01

    This paper presents a method for correcting beam hardening (BH) in cardiac CT perfusion imaging. The proposed algorithm works with reconstructed images instead of projection data. It applies thresholds to separate low (soft tissue) and high (bone and contrast) attenuating material in a CT image. The BH error in each projection is estimated by a polynomial function of the forward projection of the segmented image. The error image is reconstructed by back-projection of the estimated errors. A BH-corrected image is then obtained by subtracting a scaled error image from the original image. Phantoms were designed to simulate the BH artifacts encountered in cardiac CT perfusion studies of humans and animals that are most commonly used in cardiac research. These phantoms were used to investigate whether BH artifacts can be reduced with our approach and to determine the optimal settings, which depend upon the anatomy of the scanned subject, of the correction algorithm for patient and animal studies. The correction algorithm was also applied to correct BH in a clinical study to further demonstrate the effectiveness of our technique.
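
    A minimal image-domain sketch of the described pipeline, assuming a square CT image and illustrative threshold, polynomial, and scale values (the actual algorithm tunes these to the scanned anatomy):

        # Image-domain beam-hardening correction sketch.
        import numpy as np
        from skimage.transform import radon, iradon

        def bh_correct(image, theta, threshold=300.0, c2=1e-4, scale=1.0):
            """Segment high-attenuation material, forward-project it, model
            the BH error per ray as a quadratic in that projection,
            reconstruct the error image, and subtract it."""
            dense = np.where(image > threshold, image, 0.0)  # bone/contrast
            sino = radon(dense, theta=theta)                 # forward project
            err_sino = c2 * sino ** 2                        # error model
            err_img = iradon(err_sino, theta=theta)          # error image
            return image - scale * err_img

        # theta = np.linspace(0.0, 180.0, 180, endpoint=False), for example.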

  17. Online Sequential Projection Vector Machine with Adaptive Data Mean Update

    PubMed Central

    Chen, Lin; Jia, Ji-Ting; Zhang, Qiong; Deng, Wan-Yu; Wei, Wei

    2016-01-01

    We propose a simple online learning algorithm especially suited for high-dimensional data. The algorithm is referred to as online sequential projection vector machine (OSPVM), which derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, which makes it easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including the budgeted stochastic gradient descent (BSGD) approach, adaptive multihyperplane machine (AMM), primal estimated subgradient solver (Pegasos), online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrate the superior generalization performance and efficiency of OSPVM. PMID:27143958

  18. Online Sequential Projection Vector Machine with Adaptive Data Mean Update.

    PubMed

    Chen, Lin; Jia, Ji-Ting; Zhang, Qiong; Deng, Wan-Yu; Wei, Wei

    2016-01-01

    We propose a simple online learning algorithm especially suited for high-dimensional data. The algorithm is referred to as online sequential projection vector machine (OSPVM), which derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, which makes it easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including the budgeted stochastic gradient descent (BSGD) approach, adaptive multihyperplane machine (AMM), primal estimated subgradient solver (Pegasos), online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrate the superior generalization performance and efficiency of OSPVM.

  19. MPL-Net data products available at co-located AERONET sites and field experiment locations

    NASA Astrophysics Data System (ADS)

    Welton, E. J.; Campbell, J. R.; Berkoff, T. A.

    2002-05-01

    Micro-pulse lidar (MPL) systems are small, eye-safe lidars capable of profiling the vertical distribution of aerosol and cloud layers. There are now over 20 MPL systems around the world, and they have been used in numerous field experiments. A new project, MPL-Net, was started at NASA Goddard Space Flight Center in 2000: a coordinated network of long-term MPL sites. The network also supports a limited number of field experiments each year. Most MPL-Net sites and field locations are co-located with AERONET sunphotometers. At these locations, the AERONET and MPL-Net data are combined to provide both column and vertically resolved aerosol and cloud measurements. The MPL-Net project coordinates the maintenance and repair of all instruments in the network. In addition, data are archived and processed by the project using common, standardized algorithms that have been developed and utilized over the past 10 years. These procedures ensure that stable, calibrated MPL systems are operating at the sites and that the data quality remains high. Rigorous uncertainty calculations are performed on all MPL-Net data products. Automated, real-time level 1.0 data processing algorithms have been developed and are operational. Level 1.0 algorithms are used to process the raw MPL data into the form of range-corrected, uncalibrated lidar signals. Automated, real-time level 1.5 algorithms have also been developed and are now operational. Level 1.5 algorithms are used to calibrate the MPL systems, determine cloud and aerosol layer heights, and calculate the optical depth and extinction profile of the aerosol boundary layer. The co-located AERONET sunphotometer provides the aerosol optical depth, which is used as a constraint to solve for the extinction-to-backscatter ratio and the aerosol extinction profile. Browse images and data files are available on the MPL-Net web site. An overview of the processing algorithms and initial results from selected sites and field experiments will be presented. The capability of the MPL-Net project to produce automated real-time (next day) profiles of aerosol extinction will be shown. Finally, early results from Level 2.0 and Level 3.0 algorithms currently under development will be presented. The level 3.0 data provide continuous (day/night) retrievals of multiple aerosol and cloud heights, and optical properties of each layer detected.

  20. ProperCAD: A portable object-oriented parallel environment for VLSI CAD

    NASA Technical Reports Server (NTRS)

    Ramkumar, Balkrishna; Banerjee, Prithviraj

    1993-01-01

    Most parallel algorithms for VLSI CAD proposed to date have one important drawback: they work efficiently only on the machines they were designed for. As a result, algorithms designed to date are dependent on the architecture for which they were developed and do not port easily to other parallel architectures. A new project under way to address this problem is described: a portable object-oriented parallel environment for CAD algorithms (ProperCAD). The objectives of this research are (1) to develop new parallel algorithms that run in a portable object-oriented environment (CAD algorithms are being developed on a general-purpose platform for portable parallel programming called CARM, along with a C++ environment that is truly object-oriented and specialized for CAD applications); and (2) to design the parallel algorithms around a good sequential algorithm with a well-defined parallel-sequential interface (permitting the parallel algorithm to benefit from future developments in sequential algorithms). One CAD application that has been implemented as part of the ProperCAD project, flat VLSI circuit extraction, is described. The algorithm, its implementation, and its performance on a range of parallel machines are discussed in detail. It currently runs on an Encore Multimax, a Sequent Symmetry, Intel iPSC/2 and i860 hypercubes, an NCUBE 2 hypercube, and a network of Sun Sparc workstations. Performance data for other applications that were developed are provided, namely test pattern generation for sequential circuits, parallel logic synthesis, and standard cell placement.

  1. DNA algorithms of implementing biomolecular databases on a biological computer.

    PubMed

    Chang, Weng-Long; Vasilakos, Athanasios V

    2015-01-01

    In this paper, DNA algorithms are proposed to perform eight operations of relational algebra (calculus), which include Cartesian product, union, set difference, selection, projection, intersection, join, and division, on biomolecular relational databases.

  2. Advanced Oil Spill Detection Algorithms For Satellite Based Maritime Environment Monitoring

    NASA Astrophysics Data System (ADS)

    Radius, Andrea; Azevedo, Rui; Sapage, Tania; Carmo, Paulo

    2013-12-01

    During recent years, the increasing occurrence of pollution and the alarming deterioration of the environmental health of the sea have led to the need for global monitoring capabilities, namely for marine environment management in terms of oil spill detection and indication of the suspected polluter. The sensitivity of Synthetic Aperture Radar (SAR) to different phenomena on the sea, especially oil spills and vessels, makes it a key instrument for global pollution monitoring. SAR performance in maritime pollution monitoring is being operationally explored by a set of service providers on behalf of the European Maritime Safety Agency (EMSA), which launched in 2007 the CleanSeaNet (CSN) project - a pan-European satellite-based oil monitoring service. EDISOFT, which has been a service provider for CSN from the beginning, is continuously investing in R&D activities that will ultimately lead to better algorithms and better performance in oil spill detection from SAR imagery. This strategy is being pursued through EDISOFT's participation in the FP7 EC Sea-U project and in the Automatic Oil Spill Detection (AOSD) ESA project. The Sea-U project aims to improve the current state of oil spill detection algorithms through the maximization of informative content obtained with data fusion, the exploitation of different types of data/sensors, and the development of advanced image processing, segmentation and classification techniques. The AOSD project is closely related to the operational segment, because it is focused on the automation of the oil spill detection processing chain, integrating auxiliary data, like wind information, together with image and geometry analysis techniques. The synergy between these different objectives (R&D versus operational) has allowed EDISOFT to develop oil spill detection software that combines the operational, automatic aspect, obtained through dedicated integration of the processing chain in the existing open-source NEST software, with new detection, filtering and classification algorithms. In particular, dedicated wavelet-based filtering algorithms were developed to improve oil spill detection and classification. In this work we present the functionalities of the developed software and the main results in support of the developed algorithm's validity.

  3. Project risk management in the construction of high-rise buildings

    NASA Astrophysics Data System (ADS)

    Titarenko, Boris; Hasnaoui, Amir; Titarenko, Roman; Buzuk, Liliya

    2018-03-01

    This paper presents project risk management methods that allow risks in the construction of high-rise buildings to be better identified and managed throughout the life cycle of the project. One of the project risk management processes is the quantitative analysis of risks, which usually includes assessing the potential impact of project risks and their probabilities. This paper reviews the most popular methods of risk probability assessment and seeks to show the advantages of the robust approach over traditional methods. Within the framework of the project risk management model, P. Huber's robust approach is applied and extended to the regression analysis of project data. The suggested parameter-estimation algorithms for statistical models yield reliable estimates. The theoretical problems of developing robust models built on the methodology of minimax estimates are reviewed, and an algorithm for the situation of asymmetric "contamination" is developed.
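
    For illustration, Huber's M-estimator for a linear regression model can be computed by iteratively reweighted least squares (IRLS); the threshold delta = 1.345 below is the conventional tuning constant for roughly 95% efficiency under Gaussian errors, not a value from the paper:

        # Huber M-estimation via IRLS (illustrative sketch).
        import numpy as np

        def huber_regression(X, y, delta=1.345, n_iter=50, tol=1e-8):
            beta = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary LS start
            for _ in range(n_iter):
                r = y - X @ beta
                # Robust scale estimate from the median absolute deviation.
                s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12
                # Huber weights: 1 inside the threshold, decaying outside.
                w = np.minimum(1.0, delta * s / (np.abs(r) + 1e-12))
                Xw = X * w[:, None]
                beta_new = np.linalg.solve(Xw.T @ X, Xw.T @ y)
                if np.linalg.norm(beta_new - beta) < tol:
                    return beta_new
                beta = beta_new
            return beta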

  4. Improving Cancer Detection and Dose Efficiency in Dedicated Breast Cancer CT

    DTIC Science & Technology

    2010-02-01

    ...source trajectory and data truncation, which can however be solved with the back-projection filtration (BPF) algorithm [6,7]. I have used the BPF... high to low radiation dose levels. I have investigated noise properties in images reconstructed by use of FDK and BPF algorithms at different noise... analytic algorithms such as the FDK and BPF algorithms are applied to sparse-view data, the reconstruction images will contain artifacts such as streak...

  5. Algorithm of OMA for large-scale orthology inference

    PubMed Central

    Roth, Alexander CJ; Gonnet, Gaston H; Dessimoz, Christophe

    2008-01-01

    Background: OMA is a project that aims to identify orthologs within publicly available, complete genomes. With 657 genomes analyzed to date, OMA is one of the largest projects of its kind. Results: The algorithm of OMA improves upon the standard bidirectional best-hit approach in several respects: it uses evolutionary distances instead of scores, considers distance inference uncertainty, includes many-to-many orthologous relations, and accounts for differential gene losses. Herein, we describe in detail the algorithm for inference of orthology and provide the rationale for parameter selection through multiple tests. Conclusion: OMA contains several novel improvement ideas for orthology inference and provides a unique dataset of large-scale orthology assignments. PMID:19055798

  6. Lightning Jump Algorithm and Relation to Thunderstorm Cell Tracking, GLM Proxy and other Meteorological Measurements

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Carey, Larry; Cecil, Dan; Bateman, Monte; Stano, Geoffrey; Goodman, Steve

    2012-01-01

    The objective of the project is to refine, adapt, and demonstrate the Lightning Jump Algorithm (LJA) for transition to GOES-R GLM (Geostationary Lightning Mapper) readiness and to establish a path to operations. Ongoing work includes reducing risk in the GLM lightning proxy, cell tracking, LJA algorithm automation, and data fusion (e.g., radar + lightning).

  7. Searching Information Sources in Networks

    DTIC Science & Technology

    2017-06-14

    During the course of this project, we made significant progress in multiple directions of information source detection... result on information source detection on non-tree networks; (2) the development of information source localization algorithms to detect multiple... information sources. The algorithms have provable performance guarantees and outperform existing algorithms in...

  8. An algorithm for the split-feasibility problems with application to the split-equality problem.

    PubMed

    Chuang, Chih-Sheng; Chen, Chi-Ming

    2017-01-01

    In this paper, we study the split-feasibility problem in Hilbert spaces by using the projected reflected gradient algorithm. As applications, we study the convex linear inverse problem and the split-equality problem in Hilbert spaces, and we give new algorithms for these problems. Finally, numerical results are given for our main results.
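
    A minimal sketch of the projected reflected gradient iteration applied to the split-feasibility problem "find x in C with Ax in Q" (the gradient mapping and step-size bound below are standard textbook choices, not necessarily the authors' exact scheme):

        # Projected reflected gradient for the split-feasibility problem,
        # using F(x) = A^T (Ax - P_Q(Ax)); the method requires a step size
        # below (sqrt(2)-1)/L with L = ||A||_2^2 the Lipschitz constant of F.
        import numpy as np

        def split_feasibility_prg(A, proj_C, proj_Q, x0, n_iter=1000):
            lam = 0.4 / np.linalg.norm(A, 2) ** 2
            F = lambda x: A.T @ (A @ x - proj_Q(A @ x))
            x_prev = x0.copy()
            x = proj_C(x0 - lam * F(x0))       # plain projected step to start
            for _ in range(n_iter):
                reflected = 2.0 * x - x_prev   # the "reflected" point
                x_prev, x = x, proj_C(x - lam * F(reflected))
            return x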

  9. Teaching Computation in Primary School without Traditional Written Algorithms

    ERIC Educational Resources Information Center

    Hartnett, Judy

    2015-01-01

    Concerns regarding the dominance of the traditional written algorithms in schools have been raised by many mathematics educators, yet the teaching of these procedures remains a dominant focus in primary schools. This paper reports on a project in one school where the staff agreed to put the teaching of the traditional written algorithm aside,…

  10. An opposite view data replacement approach for reducing artifacts due to metallic dental objects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yazdi, Mehran; Lari, Meghdad Asadi; Bernier, Gaston

    Purpose: To present a conceptually new method for metal artifact reduction (MAR) that can be used on patients with multiple objects within the scan plane that are also of small size along the longitudinal (scanning) direction, such as dental fillings. Methods: The proposed algorithm, named opposite view replacement, achieves MAR by first detecting the projection data affected by metal objects and then replacing the affected projections by the corresponding opposite view projections, which are not affected by metal objects. The authors also applied a fading process to avoid producing any discontinuities at the boundary of the affected projection areas in the sinogram. A skull phantom with and without a variety of dental metal inserts was made to extract the performance metric of the algorithm. A head and neck case, typical of IMRT planning, was also tested. Results: The reconstructed CT images based on this new replacement scheme show a significant improvement in image quality for patients with metallic dental objects compared to MAR algorithms based on the interpolation scheme. For the phantom, the authors showed that the artifact reduction algorithm can efficiently recover the CT numbers in the area next to the metallic objects. Conclusions: The authors presented a new and efficient method for artifact reduction due to multiple small metallic objects. The results obtained from phantoms and clinical cases fully validate the proposed approach.

  11. Topometry of technical and biological objects by fringe projection

    NASA Astrophysics Data System (ADS)

    Windecker, R.; Tiziani, H. J.

    1995-07-01

    Fringe projection is a fast and accurate technique for obtaining the topometry of a wide range of surfaces. Here some features of the principle are described, together with the possibilities of adapting this technique for the measurement of vaulted surfaces. We discuss various methods of phase evaluation and compare them with simulated computer data to obtain the resolution limits. Under certain restrictions a semispatial algorithm, called the modified Fourier analysis algorithm, gives the best results. One special subject of interest is the application of fringe projection for the measurement of the three-dimensional surface of the cornea. First results of in vivo measurements are presented.

  12. ProjectQ Software Framework

    NASA Astrophysics Data System (ADS)

    Steiger, Damian S.; Haener, Thomas; Troyer, Matthias

    Quantum computers promise to transform our notions of computation by offering a completely new paradigm. A high level quantum programming language and optimizing compilers are essential components to achieve scalable quantum computation. In order to address this, we introduce the ProjectQ software framework - an open source effort to support both theorists and experimentalists by providing intuitive tools to implement and run quantum algorithms. Here, we present our ProjectQ quantum compiler, which compiles a quantum algorithm from our high-level Python-embedded language down to low-level quantum gates available on the target system. We demonstrate how this compiler can be used to control actual hardware and to run high-performance simulations.
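
    For flavor, the canonical "hello quantum world" from the ProjectQ documentation, run through the default compiler and simulator backend (reproduced as published by the project; minor API differences may exist between versions):

        # Allocate a qubit, apply a Hadamard, and measure it.
        from projectq import MainEngine
        from projectq.ops import H, Measure

        eng = MainEngine()              # default compiler with simulator backend
        qubit = eng.allocate_qubit()    # allocate one logical qubit
        H | qubit                       # put it into superposition
        Measure | qubit                 # measure in the computational basis
        eng.flush()                     # run the (compiled) circuit
        print("Measured:", int(qubit))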

  13. ProjectQ: Compiling quantum programs for various backends

    NASA Astrophysics Data System (ADS)

    Haener, Thomas; Steiger, Damian S.; Troyer, Matthias

    In order to control quantum computers beyond the current generation, a high level quantum programming language and optimizing compilers will be essential. Therefore, we have developed ProjectQ - an open source software framework to facilitate implementing and running quantum algorithms both in software and on actual quantum hardware. Here, we introduce the backends available in ProjectQ. This includes a high-performance simulator and emulator to test and debug quantum algorithms, tools for resource estimation, and interfaces to several small-scale quantum devices. We demonstrate the workings of the framework and show how easily it can be further extended to control upcoming quantum hardware.

  14. Simulated annealing algorithm for solving chambering student-case assignment problem

    NASA Astrophysics Data System (ADS)

    Ghazali, Saadiah; Abdul-Rahman, Syariza

    2015-12-01

    The project assignment problem is a popular practical problem that arises nowadays. The challenge of solving it grows with the complexity of preferences, the existence of real-world constraints, and the problem size. This study focuses on solving a chambering student-case assignment problem, which is classified under the project assignment problem, by using a simulated annealing algorithm. The project assignment problem is considered a hard combinatorial optimization problem, and solving it using a metaheuristic approach is advantageous because a good solution can be returned in reasonable time. The problem of assigning chambering students to cases has never been addressed in the literature before. In the setting considered, law graduates must complete chambering before they are qualified to become legal counsel. Thus, assigning chambering students to cases is critically needed, especially when many preferences are involved. Hence, this study presents a preliminary study of the proposed project assignment problem. The objective of the study is to minimize the total completion time for all students in solving the given cases. This study employed a minimum-cost greedy heuristic to construct a feasible initial solution. The search then proceeds with a simulated annealing algorithm for further improvement of solution quality. The analysis of the obtained results shows that the proposed simulated annealing algorithm greatly improves the solution constructed by the minimum-cost greedy heuristic. Hence, this research demonstrates the advantages of solving the project assignment problem using metaheuristic techniques.
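
    A generic simulated annealing skeleton for an assignment problem of this kind is sketched below; the cooling schedule, move operator, and acceptance rule are standard textbook choices, not the paper's exact settings:

        # Generic simulated annealing skeleton (illustrative parameters).
        import math
        import random

        def simulated_annealing(cost, initial, neighbor,
                                t0=100.0, alpha=0.95, moves_per_temp=50,
                                t_min=1e-3):
            current, cur_cost = initial, cost(initial)
            best, best_cost = current, cur_cost
            t = t0
            while t > t_min:
                for _ in range(moves_per_temp):
                    cand = neighbor(current)
                    delta = cost(cand) - cur_cost
                    # Accept improvements always; accept worse moves with
                    # probability exp(-delta/t), which shrinks as t cools.
                    if delta <= 0 or random.random() < math.exp(-delta / t):
                        current, cur_cost = cand, cur_cost + delta
                        if cur_cost < best_cost:
                            best, best_cost = current, cur_cost
                t *= alpha
            return best

        # For student-case assignment, `initial` could map each case to a
        # student, `neighbor` reassign one random case, and `cost` the
        # total completion time.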

  15. Advanced CHP Control Algorithms: Scope Specification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katipamula, Srinivas; Brambley, Michael R.

    2006-04-28

    The primary objective of this multiyear project is to develop algorithms for combined heat and power systems to ensure optimal performance, increase reliability, and lead to the goal of clean, efficient, reliable and affordable next generation energy systems.

  16. Testing algorithms for a passenger train braking performance model.

    DOT National Transportation Integrated Search

    2011-09-01

    "The Federal Railroad Administrations Office of Research and Development funded a project to establish performance model to develop, analyze, and test positive train control (PTC) braking algorithms for passenger train operations. With a good brak...

  17. Optimizing construction quality management of pavements using mechanistic performance analysis.

    DOT National Transportation Integrated Search

    2004-08-01

    This report presents a statistically based algorithm that was developed to reconcile the results from several pavement performance models used in the state of practice with systematic process control techniques. These algorithms identify project-specif...

  18. A homotopy algorithm for digital optimal projection control GASD-HADOC

    NASA Technical Reports Server (NTRS)

    Collins, Emmanuel G., Jr.; Richter, Stephen; Davis, Lawrence D.

    1993-01-01

    The linear-quadratic-gaussian (LQG) compensator was developed to facilitate the design of control laws for multi-input, multi-output (MIMO) systems. The compensator is computed by solving two algebraic equations for which standard closed-form solutions exist. Unfortunately, the minimal dimension of an LQG compensator is almost always equal to the dimension of the plant and can thus often violate practical implementation constraints on controller order. This deficiency is especially highlighted when considering control design for high-order systems such as flexible space structures, and it motivated the development of techniques that enable the design of optimal controllers whose dimension is less than that of the design plant. One such technique is a homotopy approach based on the optimal projection equations that characterize the necessary conditions for optimal reduced-order control. Homotopy algorithms have global convergence properties and hence do not require that the initializing reduced-order controller be close to the optimal reduced-order controller to guarantee convergence. However, the homotopy algorithm previously developed for solving the optimal projection equations has sublinear convergence properties; the convergence slows at higher authority levels and may fail. A new homotopy algorithm for synthesizing optimal reduced-order controllers for discrete-time systems is described. Unlike the previous homotopy approach, the new algorithm is a gradient-based, parameter optimization formulation and was implemented in MATLAB. The results reported may offer the foundation for a reliable approach to optimal, reduced-order controller design.

  19. Distance majorization and its applications

    PubMed Central

    Chi, Eric C.; Zhou, Hua; Lange, Kenneth

    2014-01-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton’s method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications. PMID:25392563
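
    A minimal sketch of the majorization-minimization idea described above, specialized to finding a point in an intersection of convex sets (f = 0): the squared distance to each set is majorized by the squared distance to the current projection, and the surrogate is minimized by averaging the projections. The set definitions here are illustrative assumptions.

    ```python
    import numpy as np

    def project_ball(x, radius=1.0):
        # Euclidean projection onto the ball ||x|| <= radius
        n = np.linalg.norm(x)
        return x if n <= radius else x * (radius / n)

    def project_halfspace(x, a, b):
        # Euclidean projection onto the halfspace {x : a.x <= b}
        viol = np.dot(a, x) - b
        return x if viol <= 0 else x - viol * a / np.dot(a, a)

    def mm_intersection(x0, projections, iters=200):
        # MM update for f == 0: the surrogate sum_i ||x - P_i(x_k)||^2 is
        # minimized by averaging the individual projections
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            x = np.mean([P(x) for P in projections], axis=0)
        return x

    x = mm_intersection([3.0, 3.0],
                        [project_ball,
                         lambda v: project_halfspace(v, np.array([1.0, 0.0]), 0.2)])
    print(x)  # approximately in the unit ball with x[0] <= 0.2
    ```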

  20. Importing statistical measures into Artemis enhances gene identification in the Leishmania genome project.

    PubMed

    Aggarwal, Gautam; Worthey, E A; McDonagh, Paul D; Myler, Peter J

    2003-06-07

    Seattle Biomedical Research Institute (SBRI), as part of the Leishmania Genome Network (LGN), is sequencing chromosomes of the trypanosomatid protozoan species Leishmania major. At SBRI, chromosomal sequence is annotated using a combination of trained and untrained non-consensus gene-prediction algorithms with ARTEMIS, an annotation platform with rich and user-friendly interfaces. Here we describe a methodology used to import results from three different protein-coding gene-prediction algorithms (GLIMMER, TESTCODE and GENESCAN) into the ARTEMIS sequence viewer and annotation tool. Comparison of these methods, along with the CODONUSAGE algorithm built into ARTEMIS, shows the importance of combining methods to more accurately annotate the L. major genomic sequence. An improved and powerful tool for gene prediction has been developed by importing data from widely used algorithms into an existing annotation platform. This approach is especially fruitful in the Leishmania genome project, where there is a large proportion of novel genes requiring manual annotation.

  1. A Variable Step-Size Proportionate Affine Projection Algorithm for Identification of Sparse Impulse Response

    NASA Astrophysics Data System (ADS)

    Liu, Ligang; Fukumoto, Masahiro; Saiki, Sachio; Zhang, Shiyong

    2009-12-01

    Proportionate adaptive algorithms have been proposed recently to accelerate convergence when identifying sparse impulse responses. When the excitation signal is colored, speech in particular, proportionate NLMS algorithms converge slowly. The proportionate affine projection algorithm (PAPA) is expected to solve this problem by using more information from the input signals. However, its steady-state performance is limited by its constant step-size parameter. In this article we propose a variable step-size PAPA based on canceling the a posteriori estimation error. This yields fast convergence through a large step size when the identification error is large, and then considerably decreases the steady-state misalignment through a small step size after the adaptive filter has converged. Simulation results show that the proposed approach greatly improves the steady-state misalignment without sacrificing the fast convergence of PAPA.
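
    A rough sketch of the proportionate-update idea (PNLMS is shown for brevity rather than the full affine projection variant, and the variable step-size rule below is a simple illustrative heuristic, not the authors' a-posteriori-error cancellation rule):

    ```python
    import numpy as np

    def pnlms(x, d, L=64, mu=0.5, delta=1e-2, rho=0.01):
        """Proportionate NLMS: taps with larger magnitude receive larger
        individual step sizes, accelerating convergence on sparse responses.
        x: input signal, d: desired signal, L: filter length."""
        w = np.zeros(L)
        e_hist = np.zeros(len(x))
        for n in range(L - 1, len(x)):
            u = x[n - L + 1:n + 1][::-1]          # regressor, newest sample first
            e = d[n] - w @ u
            g = np.abs(w)
            g = np.maximum(g, rho * max(g.max(), 1e-4))  # floor keeps idle taps alive
            g /= g.sum()
            step = mu * min(1.0, abs(e))          # crude variable step size (assumption)
            w += step * g * u * e / (u @ (g * u) + delta)
            e_hist[n] = e
        return w, e_hist
    ```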

  2. Sun-Relative Pointing for Dual-Axis Solar Trackers Employing Azimuth and Elevation Rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riley, Daniel; Hansen, Clifford W.

    Dual axis trackers employing azimuth and elevation rotations are common in the field of photovoltaic (PV) energy generation. Accurate sun-tracking algorithms are widely available. However, a steering algorithm has not been available to accurately point the tracker away from the sun such that a vector projection of the sun beam onto the tracker face falls along a desired path relative to the tracker face. We have developed an algorithm which produces the appropriate azimuth and elevation angles for a dual axis tracker when given the sun position, desired angle of incidence, and the desired projection of the sun beam onto the tracker face. Development of this algorithm was inspired by the need to accurately steer a tracker to desired sun-relative positions in order to better characterize the electro-optical properties of PV and CPV modules.
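
    One way to realize such off-sun pointing (a geometric sketch under assumed conventions, not the published algorithm: azimuth is measured clockwise from north, axes are x east, y north, z up, and a "clock" angle selects where the beam projection falls on the tracker face):

    ```python
    import numpy as np

    def azel_to_vec(az_deg, el_deg):
        # unit vector; azimuth clockwise from north, x = east, y = north, z = up
        az, el = np.radians(az_deg), np.radians(el_deg)
        return np.array([np.sin(az) * np.cos(el), np.cos(az) * np.cos(el), np.sin(el)])

    def rotate(v, axis, angle):
        # Rodrigues rotation of v about a unit axis by angle (radians)
        axis = axis / np.linalg.norm(axis)
        return (v * np.cos(angle) + np.cross(axis, v) * np.sin(angle)
                + axis * np.dot(axis, v) * (1.0 - np.cos(angle)))

    def tracker_azel(sun_az, sun_el, incidence_deg, clock_deg):
        s = azel_to_vec(sun_az, sun_el)                 # unit vector toward the sun
        east = np.cross(np.array([0.0, 0.0, 1.0]), s)   # undefined if sun at zenith
        axis = rotate(east, s, np.radians(clock_deg))   # selects the beam direction
        n = rotate(s, axis, np.radians(incidence_deg))  # tracker normal: angle(n, s) = incidence
        el = np.degrees(np.arcsin(n[2]))
        az = np.degrees(np.arctan2(n[0], n[1])) % 360.0
        return az, el

    print(tracker_azel(sun_az=180.0, sun_el=45.0, incidence_deg=30.0, clock_deg=0.0))
    ```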

  3. Intelligent Medical Systems for Aerospace Emergency Medical Services

    NASA Technical Reports Server (NTRS)

    Epler, John; Zimmer, Gary

    2004-01-01

    The purpose of this project is to develop a portable, hands-free device for emergency medical decision support to be used in remote or confined settings by non-physician providers. Phase I of the project will entail the development of a voice-activated device that will utilize an intelligent algorithm to provide guidance in establishing an airway in an emergency situation. The interactive, hands-free software will process requests for assistance based on verbal prompts and algorithmic decision-making. The device will allow the CMO to attend to the patient while receiving verbal instruction. The software will also feature graphic representations where these are felt to be helpful in aiding procedures. We will also develop a training program to orient users to the algorithmic approach, the use of the hardware, and specific procedural considerations. We will validate the efficacy of this mode of technology application by testing in the Johns Hopkins Department of Emergency Medicine. Phase I of the project will focus on the validation of the proposed algorithm, testing and validation of the decision-making tool, and modifications of medical equipment. In Phase II, we will produce the first-generation software for hands-free, interactive medical decision making for use in acute care environments.

  4. Reconstruction of sparse-view X-ray computed tomography using adaptive iterative algorithms.

    PubMed

    Liu, Li; Lin, Weikai; Jin, Mingwu

    2015-01-01

    In this paper, we propose two reconstruction algorithms for sparse-view X-ray computed tomography (CT). Treating the reconstruction problems as data-fidelity-constrained total variation (TV) minimization, both algorithms adopt an alternating two-stage strategy: projection onto convex sets (POCS) for the data fidelity and non-negativity constraints, and steepest descent for TV minimization. The novelty of this work is to determine the iterative parameters automatically from the data, thus avoiding tedious manual parameter tuning. In TV minimization, the step sizes of steepest descent are adaptively adjusted according to the difference from the POCS update in either the projection domain or the image domain, while the step size of the algebraic reconstruction technique (ART) in POCS is determined based on the data noise level. In addition, projection errors are compared with the error bound to decide whether to perform ART, so as to reduce computational costs. The performance of the proposed methods is studied and evaluated using both simulated and physical phantom data. Our methods with automatic parameter tuning achieve similar, if not better, reconstruction performance compared to a representative two-stage algorithm.
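
    A toy sketch of the two-stage pattern described above (an ART/POCS pass for data fidelity followed by a few steepest-descent steps on TV); the dense system matrix, relaxation factor, and step-size rule are placeholders, not the paper's automatic parameter selection:

    ```python
    import numpy as np

    def tv_grad(img, eps=1e-8):
        # gradient of smoothed isotropic TV (periodic boundaries, for brevity)
        dx = np.diff(img, axis=1, append=img[:, :1])
        dy = np.diff(img, axis=0, append=img[:1, :])
        mag = np.sqrt(dx**2 + dy**2 + eps)
        gx, gy = dx / mag, dy / mag
        return -(gx - np.roll(gx, 1, axis=1)) - (gy - np.roll(gy, 1, axis=0))

    def two_stage_recon(A, b, shape, n_iter=50, relax=0.2, tv_steps=5):
        # A: (n_rays, n_pixels) dense system matrix, b: measured projections
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            # stage 1 (POCS): one ART sweep toward data fidelity, then positivity
            for i in range(A.shape[0]):
                a = A[i]
                x += relax * (b[i] - a @ x) / (a @ a + 1e-12) * a
            x = np.maximum(x, 0.0)
            # stage 2: a few steepest-descent steps on total variation
            img = x.reshape(shape)
            for _ in range(tv_steps):
                g = tv_grad(img)
                step = 0.1 * np.linalg.norm(img) / (np.linalg.norm(g) + 1e-12)
                img = img - step * g
            x = img.ravel()
        return x.reshape(shape)
    ```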

  5. The DataBridge: A System For Optimizing The Use Of Dark Data From The Long Tail Of Science

    NASA Astrophysics Data System (ADS)

    Lander, H.; Rajasekar, A.

    2015-12-01

    The DataBridge is a National Science Foundation funded collaborative project (OCI-1247652, OCI-1247602, OCI-1247663) designed to assist in the discovery of dark data sets from the long tail of science. The DataBridge aims to build queryable communities of datasets using sociometric network analysis (SNA). This approach is being tested to evaluate the ability to leverage various forms of metadata to facilitate discovery of new knowledge. Each dataset in the DataBridge has an associated name space used as a first-level partitioning. In addition to testing known algorithms for SNA community building, the DataBridge project has built a message-based platform that allows users to provide their own algorithms for each of the stages in the community-building process. The stages are: Signature Generation (SG): an SG algorithm creates a metadata signature for a dataset; signature algorithms might use text metadata provided by the dataset creator or derive metadata. Relevance Algorithm (RA): an RA compares a pair of datasets and produces a similarity value between 0 and 1 for the two datasets. Sociometric Network Analysis (SNA): the SNA stage operates on the similarity matrix produced by an RA to partition all of the datasets in the name space into a set of clusters; these clusters represent communities of closely related datasets. The DataBridge also includes a web application that produces a visual representation of the clustering. Future work includes a more complete application that will allow different types of searching over the network of datasets. The DataBridge approach is relevant to geoscience research and informatics. In this presentation we will outline the project, illustrate the deployment of the approach, and discuss other potential applications and next steps for the research, such as applying this approach to models. In addition we will explore the relevance of DataBridge to other geoscience projects such as various EarthCube Building Blocks and DIBBS projects.
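
    An illustrative sketch of the SG, RA, SNA pipeline described above (all concrete choices are assumptions: TF-IDF signatures, cosine relevance, and spectral clustering merely stand in for whatever algorithms a user plugs into the platform):

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity
    from sklearn.cluster import SpectralClustering

    def build_communities(dataset_descriptions, n_communities=3):
        # SG: derive a metadata signature for each dataset (here TF-IDF of text)
        signatures = TfidfVectorizer().fit_transform(dataset_descriptions)
        # RA: pairwise similarity in [0, 1] for every pair of datasets
        similarity = cosine_similarity(signatures)
        # SNA: partition the similarity graph into communities of related datasets
        labels = SpectralClustering(n_clusters=n_communities,
                                    affinity="precomputed").fit_predict(similarity)
        return similarity, labels
    ```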

  6. ROBIN: a platform for evaluating automatic target recognition algorithms: I. Overview of the project and presentation of the SAGEM DS competition

    NASA Astrophysics Data System (ADS)

    Duclos, D.; Lonnoy, J.; Guillerm, Q.; Jurie, F.; Herbin, S.; D'Angelo, E.

    2008-04-01

    The last five years have seen a renewal of Automatic Target Recognition applications, mainly because of the latest advances in machine learning techniques. In this context, large collections of image datasets are essential for training algorithms as well as for their evaluation. Indeed, the recent proliferation of recognition algorithms, generally applied to slightly different problems, makes comparing them through clean evaluation campaigns necessary. The ROBIN project tries to fulfil these two needs by putting unclassified datasets, ground truths, competitions and metrics for the evaluation of ATR algorithms at the disposal of the scientific community. The scope of this project includes single- and multi-class generic target detection and generic target recognition, in military and security contexts. To our knowledge, it is the first time that a database of this importance (several hundred thousand visible and infrared hand-annotated images) has been publicly released. Funded by the French Ministry of Defence (DGA) and by the French Ministry of Research, ROBIN is one of the ten Techno-vision projects. Techno-vision is a large and ambitious government initiative for building evaluation means for computer vision technologies, for various application contexts. ROBIN's consortium includes major companies and research centres involved in Computer Vision R&D in the field of defence: Bertin Technologies, CNES, ECA, DGA, EADS, INRIA, ONERA, MBDA, SAGEM, THALES. This paper, which first gives an overview of the whole project, focuses on one of ROBIN's key competitions, the SAGEM Defence Security database. This dataset contains more than eight hundred ground and aerial infrared images of six different vehicles in cluttered scenes including distracters. Two different sets of data are available for each target. The first set includes different views of each vehicle at close range in a "simple" background, and can be used to train algorithms. The second set contains many views of the same vehicle in different contexts and situations simulating operational scenarios.

  7. (F)UV Spectroscopy of K648: Abundance Determination of Trace Elements

    NASA Astrophysics Data System (ADS)

    Mohamad-Yob, S. J.; Ziegler, M.; Rauch, T.; Werner, K.

    2010-11-01

    We present preliminary results of an ongoing spectral analysis of K 648, the central star of the planetary nebula Ps 1, based on high-resolution FUV spectra. K 648, in M 15, is one of only four known PNe in globular clusters. The formation of this post-AGB object in a globular cluster is still unclear. Our aim is to determine Teff, log g, and the abundances of trace elements, in order to improve our understanding of the post-AGB evolution of extremely metal-poor stars, especially PN formation in globular clusters. We analyzed FUSE, HST/STIS, and HST/FOS observations. A grid of stellar model atmospheres was calculated using the Tübingen NLTE Model Atmosphere Package (TMAP).

  8. Genetic mapping in the presence of genotyping errors.

    PubMed

    Cartwright, Dustin A; Troggio, Michela; Velasco, Riccardo; Gutin, Alexander

    2007-08-01

    Genetic maps are built using the genotypes of many related individuals. Genotyping errors in these data sets can distort genetic maps, especially by inflating the distances. We have extended the traditional likelihood model used for genetic mapping to include the possibility of genotyping errors. Each individual marker is assigned an error rate, which is inferred from the data, just as the genetic distances are. We have developed a software package, called TMAP, which uses this model to find maximum-likelihood maps for phase-known pedigrees. We have tested our methods using a data set in Vitis and on simulated data and confirmed that our method dramatically reduces the inflationary effect caused by increasing the number of markers and leads to more accurate orders.
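
    A minimal sketch of how a per-marker error rate can enter such a likelihood (illustrative only; TMAP's actual model and pedigree machinery are not reproduced): the probability of an observed genotype is a mixture of the true genotype and a uniform error.

    ```python
    import numpy as np

    GENOTYPES = ("AA", "AB", "BB")

    def emission_prob(observed, true, error_rate):
        # P(observed | true): with probability error_rate the observation is
        # one of the other genotypes, chosen uniformly
        if observed == true:
            return 1.0 - error_rate
        return error_rate / (len(GENOTYPES) - 1)

    def marker_loglik(observations, true_genotypes, error_rate):
        # log-likelihood of one marker's observations given hypothesized truths;
        # a real mapper sums this over markers and maximizes it jointly with the
        # genetic distances and the per-marker error rates
        return float(sum(np.log(emission_prob(o, t, error_rate))
                         for o, t in zip(observations, true_genotypes)))
    ```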

  10. FBP and BPF reconstruction methods for circular X-ray tomography with off-center detector.

    PubMed

    Schäfer, Dirk; Grass, Michael; van de Haar, Peter

    2011-07-01

    Circular scanning with an off-center planar detector is an acquisition scheme that saves detector area while keeping a large field of view (FOV). Several filtered back-projection (FBP) algorithms have been proposed earlier. The purpose of this work is to present two newly developed back-projection filtration (BPF) variants and to evaluate their image quality compared to the existing state-of-the-art FBP methods. The first new BPF algorithm applies redundancy weighting of overlapping opposite projections before differentiation in a single projection. The second one uses the Katsevich-type differentiation involving two neighboring projections, followed by redundancy weighting and back-projection. An averaging scheme is presented to mitigate streak artifacts, inherent to circular BPF algorithms, along the Hilbert filter lines in the off-center transaxial slices of the reconstructions. The image quality is assessed visually on reconstructed slices of simulated and clinical data. Quantitative evaluation studies are performed with the Forbild head phantom by calculating root-mean-squared deviations (RMSDs) from the voxelized phantom for different detector overlap settings, and by investigating the noise-resolution trade-off with a wire phantom in the full-detector and off-center scenarios. The noise-resolution behavior of all off-center reconstruction methods corresponds to their full-detector performance, with the best resolution for the FDK-based methods in the given imaging geometry. With respect to RMSD and visual inspection, the proposed BPF with Katsevich-type differentiation outperforms all other methods for the smallest chosen detector overlap of about 15 mm. The best FBP method is the algorithm that is also based on the Katsevich-type differentiation and subsequent redundancy weighting. For a wider overlap of about 40-50 mm, these two algorithms produce similar results, outperforming the other three methods. The clinical case with a detector overlap of about 17 mm confirms these results. The BPF-type reconstructions with Katsevich differentiation are largely independent of the size of the detector overlap and give the best results with respect to RMSD and visual inspection for minimal detector overlap. The increased homogeneity will improve correct assessment of lesions in the entire field of view.

  11. Comparison of maximum intensity projection and digitally reconstructed radiographic projection for carotid artery stenosis measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hyde, Derek E.; Habets, Damiaan F.; Fox, Allan J.

    2007-07-15

    Digital subtraction angiography is being supplanted by three-dimensional imaging techniques in many clinical applications, leading to extensive use of maximum intensity projection (MIP) images to depict volumetric vascular data. The MIP algorithm produces intensity profiles that are different than conventional angiograms, and can also increase the vessel-to-tissue contrast-to-noise ratio. We evaluated the effect of the MIP algorithm in a clinical application where quantitative vessel measurement is important: internal carotid artery stenosis grading. Three-dimensional computed rotational angiography (CRA) was performed on 26 consecutive symptomatic patients to verify an internal carotid artery stenosis originally found using duplex ultrasound. These volumes of data were visualized using two different postprocessing projection techniques: MIP and digitally reconstructed radiographic (DRR) projection. A DRR is a radiographic image simulating a conventional digitally subtracted angiogram, but it is derived computationally from the same CRA dataset as the MIP. By visualizing a single volume with two different projection techniques, the postprocessing effect of the MIP algorithm is isolated. Vessel measurements were made, according to the NASCET guidelines, and percentage stenosis grades were calculated. The paired t-test was used to determine if the measurement difference between the two techniques was statistically significant. The CRA technique provided an isotropic voxel spacing of 0.38 mm. The MIPs and DRRs had a mean signal-difference-to-noise-ratio of 30:1 and 26:1, respectively. Vessel measurements from MIPs were, on average, 0.17 mm larger than those from DRRs (P<0.0001). The NASCET-type stenosis grades tended to be underestimated on average by 2.4% with the MIP algorithm, although this was not statistically significant (P=0.09). The mean interobserver variability (standard deviation) of both the MIP and DRR images was 0.35 mm. It was concluded that the MIP algorithm slightly increased the apparent dimensions of the arteries, when applied to these intra-arterial CRA images. This subpixel increase was smaller than both the voxel size and interobserver variability, and was therefore not clinically relevant.

  12. Using Radar, Lidar, and Radiometer measurements to Classify Cloud Type and Study Middle-Level Cloud Properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhien

    2010-06-29

    The project is mainly focused on the characterization of cloud macrophysical and microphysical properties, especially for mixed-phase clouds and middle-level ice clouds, by combining radar, lidar, and radiometer measurements available from the ACRF sites. First, an advanced mixed-phase cloud retrieval algorithm will be developed to cover all mixed-phase clouds observed at the ACRF NSA site. The algorithm will be applied to the ACRF NSA observations to generate a long-term arctic mixed-phase cloud product for model validations and arctic mixed-phase cloud process studies. To improve the representation of arctic mixed-phase clouds in GCMs, an advanced understanding of mixed-phase cloud processes is needed. By combining retrieved mixed-phase cloud microphysical properties with in situ data and large-scale meteorological data, the project aims to better understand the generation of ice crystals in supercooled water clouds, the maintenance mechanisms of arctic mixed-phase clouds, and their connections with large-scale dynamics. The project will also try to develop a new retrieval algorithm to study the more complex mixed-phase clouds observed at the ACRF SGP site. Compared with optically thin ice clouds, optically thick middle-level ice clouds are less studied because of the limited available tools. The project will develop a new two-wavelength radar technique for optically thick ice cloud study at the SGP site by combining the MMCR with W-band radar measurements. With this new algorithm, the SGP site will have a better capability to study all ice clouds. Another area of the proposal is to generate a long-term cloud type classification product for the multiple ACRF sites. The cloud type classification product will not only facilitate the generation of the integrated cloud product by applying different retrieval algorithms to different types of clouds operationally, but will also support other research to better understand cloud properties and to validate model simulations. The ultimate goal is to develop our cloud classification algorithm into a VAP (value-added product).

  13. The Psychopharmacology Algorithm Project at the Harvard South Shore Program: An Algorithm for Generalized Anxiety Disorder.

    PubMed

    Abejuela, Harmony Raylen; Osser, David N

    2016-01-01

    This revision of previous algorithms for the pharmacotherapy of generalized anxiety disorder was developed by the Psychopharmacology Algorithm Project at the Harvard South Shore Program. Algorithms from 1999 and 2010 and associated references were reevaluated. Newer studies and reviews published from 2008-14 were obtained from PubMed and analyzed with a focus on their potential to justify changes in the recommendations. Exceptions to the main algorithm for special patient populations, such as women of childbearing potential, pregnant women, the elderly, and those with common medical and psychiatric comorbidities, were considered. Selective serotonin reuptake inhibitors (SSRIs) are still the basic first-line medication. Early alternatives include duloxetine, buspirone, hydroxyzine, pregabalin, or bupropion, in that order. If response is inadequate, then the second recommendation is to try a different SSRI. Additional alternatives now include benzodiazepines, venlafaxine, kava, and agomelatine. If the response to the second SSRI is unsatisfactory, then the recommendation is to try a serotonin-norepinephrine reuptake inhibitor (SNRI). Other alternatives to SSRIs and SNRIs for treatment-resistant or treatment-intolerant patients include tricyclic antidepressants, second-generation antipsychotics, and valproate. This revision of the GAD algorithm responds to issues raised by new treatments under development (such as pregabalin) and organizes the evidence systematically for practical clinical application.

  14. Coordinated Beamforming for MISO Interference Channel: Complexity Analysis and Efficient Algorithms

    DTIC Science & Technology

    2010-01-01

    Excerpt fragments recovered from the report: the cyclic coordinate descent algorithm is also known as the nonlinear Gauss-Seidel iteration [32]; the BB (Barzilai-Borwein) gradient projection direction can be shown to always be a descent direction, and the BB method has R-linear convergence; convergence of the inexact pricing algorithm for the MISO interference channel to a KKT solution is also established.

  15. Investigation of the application of remote sensing technology to environmental monitoring

    NASA Technical Reports Server (NTRS)

    Rader, M. L. (Principal Investigator)

    1980-01-01

    Activities and results are reported for a project to investigate the application of remote sensing technology developed for the LACIE, AgRISTARS, Forestry, and other NASA remote sensing projects to the environmental monitoring of strip mining, industrial pollution, and acid rain. Following a remote sensing workshop for EPA personnel, the EOD clustering algorithm CLASSY was selected for evaluation by EPA as a possible candidate technology. LANDSAT data acquired for a North Dakota test site were clustered in order to compare CLASSY with other algorithms.

  16. Phase retrieval with Fourier-weighted projections.

    PubMed

    Guizar-Sicairos, Manuel; Fienup, James R

    2008-03-01

    In coherent lensless imaging, the presence of image sidelobes, which arise as a natural consequence of the finite nature of the detector array, was recognized early as a convergence issue for phase retrieval algorithms that rely on an object support constraint. To mitigate the problem of truncated far-field measurement, a controlled analytic continuation by means of an iterative transform algorithm with weighted projections is proposed and tested. This approach avoids the use of sidelobe reduction windows and achieves full-resolution reconstructions.
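
    A compact sketch of an iterative-transform loop with a weighted Fourier-modulus projection (an illustrative stand-in, not the authors' exact weighting): where the measurement is trusted (weight near 1) the measured modulus is enforced; where it is not, the current estimate is kept, effecting a controlled analytic continuation.

    ```python
    import numpy as np

    def weighted_phase_retrieval(measured_mag, weight, support, n_iter=200):
        """Error-reduction iterations with a weighted Fourier-modulus projection.

        measured_mag : measured far-field magnitudes (possibly truncated)
        weight       : array in [0, 1], confidence in each Fourier sample
        support      : boolean object-domain support mask
        """
        rng = np.random.default_rng(0)
        g = rng.random(measured_mag.shape) * support   # random start in the support
        for _ in range(n_iter):
            G = np.fft.fft2(g)
            phase = np.exp(1j * np.angle(G))
            # weighted projection: blend the enforced modulus with the estimate
            G = weight * measured_mag * phase + (1.0 - weight) * G
            g = np.real(np.fft.ifft2(G))
            g = np.where(support & (g > 0), g, 0.0)    # support + positivity
        return g
    ```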

  17. Precise Aperture-Dependent Motion Compensation with Frequency Domain Fast Back-Projection Algorithm.

    PubMed

    Zhang, Man; Wang, Guanyong; Zhang, Lei

    2017-10-26

    Precise azimuth-variant motion compensation (MOCO) is an essential and difficult task for high-resolution synthetic aperture radar (SAR) imagery. In conventional post-filtering approaches, residual azimuth-variant motion errors are generally compensated through a set of spatial post-filters, where the coarse-focused image is segmented into overlapped blocks according to the azimuth-dependent residual errors. However, image-domain post-filtering approaches, such as the precise topography- and aperture-dependent motion compensation algorithm (PTA), suffer declining robustness when strong motion errors are present in the coarse-focused image. In this case, in order to capture the complete motion blurring function within each image block, both the block size and the overlapped part need to be extended, inevitably degrading efficiency and robustness. Herein, a frequency-domain fast back-projection algorithm (FDFBPA) is introduced to deal with strong azimuth-variant motion errors. FDFBPA disposes of the azimuth-variant motion errors based on a precise azimuth spectrum expression in the azimuth wavenumber domain. First, a wavenumber-domain sub-aperture processing strategy is introduced to accelerate computation. After that, the azimuth wavenumber spectrum is partitioned into a set of wavenumber blocks, and each block is formed into a sub-aperture coarse-resolution image via the back-projection integral. Then, the sub-aperture images are fused together in the azimuth wavenumber domain to obtain a full-resolution image. Moreover, the chirp-Z transform (CZT) is introduced to implement the sub-aperture back-projection integral, increasing the efficiency of the algorithm. By discarding the image-domain post-filtering strategy, the robustness of the proposed algorithm is improved. Both simulation and real-measured data experiments demonstrate the effectiveness and superiority of the proposed approach.

  18. Atmospheric River Tracking Method Intercomparison Project (ARTMIP): Science Goals and Preliminary Analysis

    NASA Astrophysics Data System (ADS)

    Shields, C. A.; Rutz, J. J.; Wehner, M. F.; Ralph, F. M.; Leung, L. R.

    2017-12-01

    The Atmospheric River Tracking Method Intercomparison Project (ARTMIP) is a community effort whose purpose is to quantify uncertainties in atmospheric river (AR) research due solely to different identification and tracking techniques. Atmospheric rivers transport significant amounts of moisture in long, narrow filamentary bands, typically travelling from the subtropics to the mid-latitudes. They are an important source of regional precipitation impacting local hydroclimate, and in extreme cases cause severe flooding and infrastructure damage in local communities. Our understanding of ARs, from forecast skill to future climate projections, hinges on how we define ARs. By comparing a diverse set of detection algorithms, the uncertainty in our definition of ARs (including statistics and climatology), and the implications of those uncertainties, can be analyzed and quantified. ARTMIP is divided into two broad phases that aim to answer science questions impacted by the choice of detection algorithm. How robust are AR metrics such as climatology, storm duration, and relationship to extreme precipitation? How are the AR metrics in future climate projections impacted by the choice of algorithm? Some algorithms rely on threshold values for water vapor. In a warmer world, the background state is by definition moister, due to the Clausius-Clapeyron relationship, and could potentially skew results. Can uncertainty bounds be accurately placed on each metric? Tier 1 participants will apply their algorithms to a high-resolution common dataset (MERRA2) and provide the broader group with AR metrics (frequency, location, duration, etc.). Tier 2 research will encompass sensitivity studies regarding resolution, reanalysis choice, and future climate change scenarios. ARTMIP is currently in the Tier 1 phase and will begin Tier 2 in 2018. Preliminary metrics and analysis from Tier 1 will be presented.

  19. i-rDNA: alignment-free algorithm for rapid in silico detection of ribosomal gene fragments from metagenomic sequence data sets.

    PubMed

    Mohammed, Monzoorul Haque; Ghosh, Tarini Shankar; Chadaram, Sudha; Mande, Sharmila S

    2011-11-30

    Obtaining accurate estimates of microbial diversity using rDNA profiling is the first step in most metagenomics projects. Consequently, most metagenomic projects spend considerable amounts of time, money and manpower for experimentally cloning, amplifying and sequencing the rDNA content in a metagenomic sample. In the second step, the entire genomic content of the metagenome is extracted, sequenced and analyzed. Since DNA sequences obtained in this second step also contain rDNA fragments, rapid in silico identification of these rDNA fragments would drastically reduce the cost, time and effort of current metagenomic projects by entirely bypassing the experimental steps of primer based rDNA amplification, cloning and sequencing. In this study, we present an algorithm called i-rDNA that can facilitate the rapid detection of 16S rDNA fragments from amongst millions of sequences in metagenomic data sets with high detection sensitivity. Performance evaluation with data sets/database variants simulating typical metagenomic scenarios indicates the significantly high detection sensitivity of i-rDNA. Moreover, i-rDNA can process a million sequences in less than an hour on a simple desktop with modest hardware specifications. In addition to the speed of execution, high sensitivity and low false positive rate, the utility of the algorithmic approach discussed in this paper is immense given that it would help in bypassing the entire experimental step of primer-based rDNA amplification, cloning and sequencing. Application of this algorithmic approach would thus drastically reduce the cost, time and human efforts invested in all metagenomic projects. A web-server for the i-rDNA algorithm is available at http://metagenomics.atc.tcs.com/i-rDNA/

  20. Final Report on DOE Project entitled Dynamic Optimized Advanced Scheduling of Bandwidth Demands for Large-Scale Science Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramamurthy, Byravamurthy

    2014-05-05

    In this project, we developed scheduling frameworks and algorithms for dynamic bandwidth demands for large-scale science applications. Apart from theoretical approaches such as Integer Linear Programming, Tabu Search, and Genetic Algorithm heuristics, we utilized practical data from the ESnet OSCARS project (from our DOE lab partners) to conduct realistic simulations of our approaches. We disseminated our work through conference paper presentations, journal papers, and a book chapter. In this project we addressed the problem of scheduling lightpaths over optical wavelength division multiplexed (WDM) networks, publishing several conference and journal papers on this topic. We also addressed the problem of joint allocation of computing, storage, and networking resources in Grid/Cloud networks, and proposed energy-efficient mechanisms for operating optical WDM networks.

  1. An improved filtering algorithm for big read datasets and its application to single-cell assembly.

    PubMed

    Wedemeyer, Axel; Kliemann, Lasse; Srivastav, Anand; Schielke, Christian; Reusch, Thorsten B; Rosenstiel, Philip

    2017-07-03

    For single-cell or metagenomic sequencing projects, it is necessary to sequence with a very high mean coverage in order to make sure that all parts of the sample DNA are covered by the reads produced. This leads to huge datasets with a great deal of redundant data, so filtering the data prior to assembly is advisable. Brown et al. (2012) presented the algorithm Diginorm for this purpose, which filters reads based on the abundance of their k-mers. We present Bignorm, a faster and quality-conscious read filtering algorithm. An important new algorithmic feature is the use of phred quality scores together with a detailed analysis of the k-mer counts to decide which reads to keep. We evaluate and recommend parameters for our new read filtering algorithm. Guided by these parameters, we remove a median of 97.15% of the reads while keeping the mean phred score of the filtered dataset high. Using the SPAdes assembler, we produce assemblies of high quality from these filtered datasets in a fraction of the time needed for an assembly from the datasets filtered with Diginorm. We conclude that read filtering is a practical and efficient method for reducing read data and for speeding up the assembly process. This applies not only to single-cell assembly, as shown in this paper, but also to other projects with high mean coverage datasets, such as metagenomic sequencing projects. Our Bignorm algorithm allows assemblies of competitive quality in comparison to Diginorm, while being much faster. Bignorm is available for download at https://git.informatik.uni-kiel.de/axw/Bignorm .
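
    A simplified sketch of abundance-based read filtering in the spirit described above (not the Bignorm implementation; the thresholds and the way quality scores gate the k-mer counts are illustrative assumptions):

    ```python
    from collections import Counter

    K = 21
    ABUNDANCE_CUTOFF = 20   # reads whose k-mers are all well covered are dropped
    MIN_PHRED = 20          # only count k-mers whose bases are all confident

    def filter_reads(reads):
        """reads: iterable of (sequence, phred_scores); yields the reads to keep."""
        counts = Counter()
        for seq, quals in reads:
            solid = [seq[i:i + K] for i in range(len(seq) - K + 1)
                     if min(quals[i:i + K]) >= MIN_PHRED]
            # keep the read only if it still contributes novel (low-count) k-mers
            if solid and min(counts[k] for k in solid) < ABUNDANCE_CUTOFF:
                for k in solid:
                    counts[k] += 1
                yield seq, quals
    ```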

  2. A new linear back projection algorithm to electrical tomography based on measuring data decomposition

    NASA Astrophysics Data System (ADS)

    Sun, Benyuan; Yue, Shihong; Cui, Ziqiang; Wang, Huaxiang

    2015-12-01

    As an advanced measurement technique that is non-radiative, non-intrusive, rapid in response, and low in cost, electrical tomography (ET) has developed rapidly in recent decades. The imaging algorithm plays an important role in the ET imaging process. Linear back projection (LBP) is the most widely used ET algorithm due to its dynamic imaging process, real-time response, and easy realization. However, the LBP algorithm has low spatial resolution due to the inherent 'soft field' effect and the 'ill-posed solution' problem; thus its applicable range is greatly limited. In this paper, an original data decomposition method is proposed, in which every ET measurement is decomposed into two independent new data values based on the positive and negative sensing areas of the measurement. Consequently, the total number of measurements is extended to twice the number of original measurements, effectively reducing the 'ill-posed solution' problem. On the other hand, an index to quantify the 'soft field' effect is proposed. The index shows that the decomposed data can distinguish between the different contributions of the various units (pixels) to any ET measurement, and can efficiently reduce the 'soft field' effect in the ET imaging process. In light of the data decomposition method, a new linear back projection algorithm is proposed to improve the spatial resolution of the ET image. A series of simulations and experiments validate the proposed algorithm in terms of real-time performance and improvement in spatial resolution.
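
    For orientation, classical LBP reconstructs a pixel image as a sensitivity-weighted back-projection of normalized measurements; the sketch below shows that baseline only (the paper's decomposition step is not reproduced, and the sensitivity matrix is a placeholder):

    ```python
    import numpy as np

    def lbp_reconstruct(S, meas, meas_ref):
        """Classical linear back projection for electrical tomography.

        S        : (n_measurements, n_pixels) sensitivity matrix
        meas     : measurements for the imaged distribution
        meas_ref : reference (empty-field) measurements used for normalization
        """
        lam = (meas - meas_ref) / meas_ref      # normalized measurement changes
        img = S.T @ lam                         # back-project along sensitivities
        img /= S.T @ np.ones_like(lam)          # standard LBP normalization
        return img
    ```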

  3. Ensembles of satellite aerosol retrievals based on three AATSR algorithms within aerosol_cci

    NASA Astrophysics Data System (ADS)

    Kosmale, Miriam; Popp, Thomas

    2016-04-01

    Ensemble techniques are widely used in the modelling community, combining different modelling results in order to reduce uncertainties. This approach can also be adapted to satellite measurements. Aerosol_cci is an ESA-funded project in which most of the European aerosol retrieval groups work together. The different algorithms are homogenized as far as makes sense, but remain essentially different. Datasets are compared with ground-based measurements and with each other. Within this project, three AATSR algorithms (the Swansea University aerosol retrieval, the ADV aerosol retrieval by FMI, and the Oxford aerosol retrieval ORAC) provide 17-year global aerosol records. Each of these algorithms also provides uncertainty information at the pixel level. In the presented work, an ensemble of the three AATSR algorithms is constructed. The advantage over each single algorithm is the higher spatial coverage due to more measurement pixels per grid box. Validation against ground-based AERONET measurements shows that the ensemble still correlates well, compared to the single algorithms. Annual mean maps show the global aerosol distribution based on a combination of the three aerosol algorithms. In addition, the pixel-level uncertainties of each algorithm are used to weight the contributions, in order to reduce the uncertainty of the ensemble. Results for different versions of the aerosol optical depth ensembles will be presented and discussed, validated against ground-based AERONET measurements. Higher spatial coverage on a daily basis yields better annual mean maps. The benefit of using pixel-level uncertainties is analysed.
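
    Uncertainty weighting of this kind is commonly an inverse-variance combination; a minimal sketch follows (the specific weighting used in aerosol_cci is not stated here, so treat this as an assumption):

    ```python
    import numpy as np

    def ensemble_aod(aod_stack, sigma_stack):
        """Inverse-variance combination of per-algorithm retrievals.

        aod_stack, sigma_stack : (n_algorithms, ny, nx); NaN (in both) where an
        algorithm has no retrieval for that pixel.
        """
        valid = ~np.isnan(aod_stack)
        w = np.where(valid, 1.0 / np.square(sigma_stack), 0.0)
        a = np.where(valid, aod_stack, 0.0)
        wsum = w.sum(axis=0)
        mean = np.where(wsum > 0, (w * a).sum(axis=0) / np.maximum(wsum, 1e-30), np.nan)
        sigma = np.where(wsum > 0, np.sqrt(1.0 / np.maximum(wsum, 1e-30)), np.nan)
        return mean, sigma   # ensemble value and its reduced uncertainty
    ```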

  4. A New Pivoting and Iterative Text Detection Algorithm for Biomedical Images

    PubMed Central

    Xu, Songhua; Krauthammer, Michael

    2010-01-01

    There is interest in expanding the reach of literature mining to include the analysis of biomedical images, which often contain a paper's key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating its performance on a set of manually labeled random biomedical images, and compare the performance against other state-of-the-art text detection algorithms. We demonstrate that a projection histogram-based text detection approach is well suited for text detection in biomedical images, with an F score of 0.60, and that the approach performs better than comparable approaches for text detection. Further, we show that iterative application of the algorithm boosts overall detection performance. A C++ implementation of our algorithm is freely available through email request for academic use. PMID:20887803
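
    A bare-bones sketch of projection-histogram text detection (illustrative; the thresholds and recursion depth are assumptions, not the paper's parameters): row and column ink histograms split a binarized image into candidate text regions, applied recursively in the iterative spirit described above.

    ```python
    import numpy as np

    def split_regions(binary, axis, min_run=3, min_ink=1):
        # project ink counts onto rows (axis=0) or columns (axis=1) and keep
        # runs of filled lines that are at least min_run long
        hist = binary.sum(axis=1 - axis)
        filled = hist >= min_ink
        regions, start = [], None
        for i, f in enumerate(filled):
            if f and start is None:
                start = i
            elif not f and start is not None:
                if i - start >= min_run:
                    regions.append((start, i))
                start = None
        if start is not None and len(filled) - start >= min_run:
            regions.append((start, len(filled)))
        return regions

    def detect_text_boxes(binary, depth=2):
        # recursively alternate horizontal and vertical projection splits
        boxes = [(0, binary.shape[0], 0, binary.shape[1])]
        for d in range(depth):
            axis, new_boxes = d % 2, []
            for (r0, r1, c0, c1) in boxes:
                for (a, b) in split_regions(binary[r0:r1, c0:c1], axis):
                    new_boxes.append((r0 + a, r0 + b, c0, c1) if axis == 0
                                     else (r0, r1, c0 + a, c0 + b))
            boxes = new_boxes
        return boxes   # (row0, row1, col0, col1) candidate text regions
    ```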

  5. Projections for fast protein structure retrieval

    PubMed Central

    Bhattacharya, Sourangshu; Bhattacharyya, Chiranjib; Chandra, Nagasuma R

    2006-01-01

    Background In recent times, there has been an exponential rise in the number of protein structures in databases e.g. PDB. So, design of fast algorithms capable of querying such databases is becoming an increasingly important research issue. This paper reports an algorithm, motivated from spectral graph matching techniques, for retrieving protein structures similar to a query structure from a large protein structure database. Each protein structure is specified by the 3D coordinates of residues of the protein. The algorithm is based on a novel characterization of the residues, called projections, leading to a similarity measure between the residues of the two proteins. This measure is exploited to efficiently compute the optimal equivalences. Results Experimental results show that, the current algorithm outperforms the state of the art on benchmark datasets in terms of speed without losing accuracy. Search results on SCOP 95% nonredundant database, for fold similarity with 5 proteins from different SCOP classes show that the current method performs competitively with the standard algorithm CE. The algorithm is also capable of detecting non-topological similarities between two proteins which is not possible with most of the state of the art tools like Dali. PMID:17254310

  6. Evaluation of the influence of dominance rules for the assembly line design problem under consideration of product design alternatives

    NASA Astrophysics Data System (ADS)

    Oesterle, Jonathan; Lionel, Amodeo

    2018-06-01

    The current competitive situation increases the importance of realistically estimating product costs during the early phases of product and assembly line planning projects. In this article, several multi-objective algorithms using different dominance rules are proposed to solve the problem of selecting the most effective combination of products and assembly lines. The list of developed algorithms includes variants of ant colony algorithms, evolutionary algorithms, and imperialist competitive algorithms. The performance of each algorithm and dominance rule is analysed using five multi-objective quality indicators and fifty problem instances. The algorithms and dominance rules are ranked using a non-parametric statistical test.

  7. Materials Discovery | Materials Science | NREL

    Science.gov Websites

    Website excerpt: NREL materials discovery research combining measurement methods and specialized analysis algorithms, with basic research projects and applications using high-throughput combinatorial research methods. Contact: John Perkins, 303-384-6467.

  8. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system

    NASA Astrophysics Data System (ADS)

    Cheng, Sheng-Yi; Liu, Wen-Jin; Chen, Shan-Qiu; Dong, Li-Zhi; Yang, Ping; Xu, Bing

    2015-08-01

    Among the many kinds of wavefront control algorithms in adaptive optics (AO) systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from the wavefront slopes through a pre-measured relational matrix between the deformable mirror actuators and the Hartmann wavefront sensor, with excellent real-time performance and stability. However, as the number of wavefront sensor sub-apertures and deformable mirror actuators in adaptive optics systems increases, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control effect of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltages of each actuator are obtained through iteration, which yields great advantages in computation and storage. For an AO system with thousands of actuators, the computational complexity is about O(n^2)-O(n^3) for the direct gradient wavefront control algorithm, while it is about O(n)-O(n^(3/2)) for the iterative wavefront control algorithm, where n is the number of actuators of the AO system. The larger the numbers of sub-apertures and deformable mirror actuators, the more significant the advantage the iterative wavefront control algorithm exhibits. Project supported by the National Key Scientific and Research Equipment Development Project of China (Grant No. ZDYZ2013-2), the National Natural Science Foundation of China (Grant No. 11173008), and the Sichuan Provincial Outstanding Youth Academic Technology Leaders Program, China (Grant No. 2012JQ0012).
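
    To make the complexity argument concrete: the direct method multiplies a pre-measured control matrix by the slope vector each frame, while an iterative scheme solves the (typically sparse) system with a few relaxation sweeps. A generic Gauss-Seidel sketch follows (illustrative only; the actual AO interaction matrices and iteration counts are not from the paper):

    ```python
    import numpy as np

    def direct_control(R, slopes):
        # R: pre-measured (n_actuators x n_slopes) reconstructor; O(n^2) per frame
        return R @ slopes

    def gauss_seidel_control(A, b, v0, sweeps=10):
        # solve A v = b iteratively; A couples the actuators, b is the projected
        # slope vector; for a sparse, diagonally dominant A each sweep is cheap
        v = v0.copy()
        for _ in range(sweeps):
            for i in range(len(b)):
                sigma = A[i] @ v - A[i, i] * v[i]
                v[i] = (b[i] - sigma) / A[i, i]
        return v
    ```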

  9. Problem solving with genetic algorithms and Splicer

    NASA Technical Reports Server (NTRS)

    Bayer, Steven E.; Wang, Lui

    1991-01-01

    Genetic algorithms are highly parallel, adaptive search procedures (i.e., problem-solving methods) loosely based on the processes of population genetics and Darwinian survival of the fittest. Genetic algorithms have proven useful in domains where other optimization techniques perform poorly. The main purpose of the paper is to discuss a NASA-sponsored software development project to develop a general-purpose tool for using genetic algorithms. The tool, called Splicer, can be used to solve a wide variety of optimization problems and is currently available from NASA and COSMIC. This discussion is preceded by an introduction to basic genetic algorithm concepts and a discussion of genetic algorithm applications.

  10. Applications and development of communication models for the touchstone GAMMA and DELTA prototypes

    NASA Technical Reports Server (NTRS)

    Seidel, Steven R.

    1993-01-01

    The goal of this project was to develop models of the interconnection networks of the Intel iPSC/860 and DELTA multicomputers to guide the design of efficient algorithms for interprocessor communication in problems that commonly occur in CFD codes and other applications. Interprocessor communication costs of codes for message-passing architectures such as the iPSC/860 and DELTA significantly affect the level of performance that can be obtained from those machines. This project addressed several specific problems in the achievement of efficient communication on the Intel iPSC/860 hypercube and DELTA mesh. In particular, an efficient global processor synchronization algorithm was developed for the iPSC/860 and numerous broadcast algorithms were designed for the DELTA.

  11. Advanced digital SAR processing study

    NASA Technical Reports Server (NTRS)

    Martinson, L. W.; Gaffney, B. P.; Liu, B.; Perry, R. P.; Ruvin, A.

    1982-01-01

    A highly programmable, land-based, real-time synthetic aperture radar (SAR) processor requiring a processed pixel rate of 2.75 MHz or more in a four-look system was designed. Variations in range and azimuth compression, number of looks, range swath, range migration, and SAR mode were specified. Alternative range and azimuth processing algorithms were examined in conjunction with projected integrated circuit, digital architecture, and software technologies. The advanced digital SAR processor (ADSP) employs an FFT convolver algorithm for both range and azimuth processing in a parallel architecture configuration. Algorithm performance comparisons, system design, implementation tradeoffs, and the results of a supporting survey of integrated circuit and digital architecture technologies are reported. Cost tradeoffs and projections with alternate implementation plans are presented.

  12. Feasibility study of low-dose intra-operative cone-beam CT for image-guided surgery

    NASA Astrophysics Data System (ADS)

    Han, Xiao; Shi, Shuanghe; Bian, Junguo; Helm, Patrick; Sidky, Emil Y.; Pan, Xiaochuan

    2011-03-01

    Cone-beam computed tomography (CBCT) has been increasingly used during surgical procedures for providing accurate three-dimensional anatomical information for intra-operative navigation and verification. High-quality CBCT images are in general obtained through reconstruction from projection data acquired at hundreds of view angles, which is associated with a non-negligible amount of radiation exposure to the patient. In this work, we have applied a novel image-reconstruction algorithm, the adaptive-steepest-descent-POCS (ASD-POCS) algorithm, to reconstruct CBCT images from projection data at a significantly reduced number of view angles. Preliminary results from experimental studies involving both simulated data and real data show that images of comparable quality to those presently available in clinical image-guidance systems can be obtained by use of the ASD-POCS algorithm from a fraction of the projection data that are currently used. The result implies potential value of the proposed reconstruction technique for low-dose intra-operative CBCT imaging applications.

  13. Discriminating Projections for Estimating Face Age in Wild Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tokola, Ryan A; Bolme, David S; Ricanek, Karl

    2014-01-01

    We introduce a novel approach to estimating the age of a human from a single uncontrolled image. Current face age estimation algorithms work well in highly controlled images, and some are robust to changes in illumination, but it is usually assumed that images are close to frontal. This bias is clearly seen in the datasets that are commonly used to evaluate age estimation, which either entirely or mostly consist of frontal images. Using pose-specific projections, our algorithm maps image features into a pose-insensitive latent space that is discriminative with respect to age. Age estimation is then performed using a multi-class SVM. We show that our approach outperforms other published results on the Images of Groups dataset, which is the only age-related dataset with a non-trivial number of off-axis face images, and that we are competitive with recent age estimation algorithms on the mostly-frontal FG-NET dataset. We also experimentally demonstrate that our feature projections introduce insensitivity to pose.

  14. A real-time photogrammetric algorithm for sensor and synthetic image fusion with application to aviation combined vision

    NASA Astrophysics Data System (ADS)

    Lebedev, M. A.; Stepaniants, D. G.; Komarov, D. V.; Vygolov, O. V.; Vizilter, Yu. V.; Zheltov, S. Yu.

    2014-08-01

    The paper addresses a promising visualization concept based on the combination of sensor and synthetic images in order to enhance the situation awareness of a pilot during aircraft landing. A real-time algorithm is proposed for fusing a sensor image, acquired by an onboard camera, with a synthetic 3D image of the external view generated in an onboard computer. The pixel correspondence between the sensor and synthetic images is obtained by an exterior orientation of a "virtual" camera using runway points as a geospatial reference. The runway points are detected by the Projective Hough Transform, whose idea is to project the edge map onto a horizontal plane in the object space (the runway plane) and then to calculate intensity projections of edge pixels along different directions of the intensity gradient. Experiments on simulated images show that, on the glide path, the algorithm provides image fusion with pixel accuracy, even in the case of significant navigation errors.

  15. Aerodynamic Optimization of a Supersonic Bending Body Projectile by a Vector-Evaluated Genetic Algorithm

    DTIC Science & Technology

    2016-12-01

    Report front-matter fragments: "Aerodynamic Optimization of a Supersonic Bending Body Projectile by a Vector-Evaluated Genetic Algorithm", prepared by Justin L. Paul, Academy of Applied Science, 24 Warren Street, Concord, NH 03301, under contract W911SR... (contract number listed as W199SR-15-2-001 on the report documentation page).

  16. Jini service to reconstruct tomographic data

    NASA Astrophysics Data System (ADS)

    Knoll, Peter; Mirzaei, S.; Koriska, K.; Koehn, H.

    2002-06-01

    A number of imaging systems rely on the reconstruction of a 3-dimensional model from its projections through the process of computed tomography (CT). In medical imaging, for example, magnetic resonance imaging (MRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT) acquire two-dimensional projections of a three-dimensional object. In order to calculate the 3-dimensional representation of the object, i.e. its voxel distribution, several reconstruction algorithms have been developed. Currently, two reconstruction methods are mainly in use: filtered back projection (FBP) and iterative methods. Although the quality of iteratively reconstructed SPECT slices is better than that of FBP slices, such iterative algorithms are rarely used for clinical routine studies because of their low availability and increased reconstruction time. We used Jini and a self-developed iterative reconstruction algorithm to design and implement a Jini reconstruction service. With this service, the physician selects the patient study from a database and a Jini client automatically discovers the registered Jini reconstruction services in the department's intranet. After downloading the proxy object of this Jini service, the SPECT acquisition data are reconstructed. The resulting transaxial slices are visualized using a Jini slice viewer, which can be used for various imaging modalities.

  17. Camera-pose estimation via projective Newton optimization on the manifold.

    PubMed

    Sarkis, Michel; Diepold, Klaus

    2012-04-01

    Determining the pose of a moving camera is an important task in computer vision. In this paper, we derive a projective Newton algorithm on the manifold to refine the pose estimate of a camera. The main idea is to benefit from the fact that the 3-D rigid motion is described by the special Euclidean group, which is a Riemannian manifold. The latter is equipped with a tangent space defined by the corresponding Lie algebra. This enables us to compute the optimization direction, i.e., the gradient and the Hessian, at each iteration of the projective Newton scheme on the tangent space of the manifold. Then, the motion is updated by projecting back the variables on the manifold itself. We also derive another version of the algorithm that employs homeomorphic parameterization to the special Euclidean group. We test the algorithm on several simulated and real image data sets. Compared with the standard Newton minimization scheme, we are now able to obtain the full numerical formula of the Hessian with a 60% decrease in computational complexity. Compared with Levenberg-Marquardt, the results obtained are more accurate while having a rather similar complexity.
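
    A small sketch of the manifold-update pattern the abstract describes (generic Lie-group optimization, not the authors' exact projective Newton derivation): compute a step xi in the tangent space, the Lie algebra se(3), then map it back onto the group with the exponential map.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def hat(xi):
        # se(3) hat operator: 6-vector (omega, v) -> 4x4 twist matrix
        wx, wy, wz, vx, vy, vz = xi
        return np.array([[0.0, -wz,  wy, vx],
                         [ wz, 0.0, -wx, vy],
                         [-wy,  wx, 0.0, vz],
                         [0.0, 0.0, 0.0, 0.0]])

    def manifold_newton_step(T, grad, hess):
        # solve for the step in the tangent space (Lie algebra), then map it
        # back onto SE(3) with the exponential map (retraction)
        xi = np.linalg.solve(hess, -grad)
        return expm(hat(xi)) @ T
    ```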

  18. Measurements of axisymmetric temperature and H2O concentration distributions on a circular flat flame burner based on tunable diode laser absorption tomography

    NASA Astrophysics Data System (ADS)

    Xia, Huihui; Kan, Ruifeng; Xu, Zhenyu; Liu, Jianguo; He, Yabai; Yang, Chenguang; Chen, Bing; Wei, Min; Yao, Lu; Zhang, Guangle

    2016-10-01

    In this paper, the reconstruction of axisymmetric temperature and H2O concentration distributions in a flat flame burner is realized by tunable diode laser absorption spectroscopy (TDLAS) and a filtered back-projection (FBP) algorithm. Two H2O absorption transitions (7154.354/7154.353 cm-1 and 7467.769 cm-1) are selected as the line pair for temperature measurement, and time-division multiplexing is adopted to scan these two H2O absorption transitions simultaneously at a 1 kHz repetition rate. In the experiment, the FBP algorithm can reconstruct axisymmetric distributions of flow field parameters from only single-view parallel-beam TDLAS measurements, with the same data set from the given parallel beam reused for virtual projection angles and beams distributed between 0° and 180°. Real-time online measurement of the projection data, i.e. the integrated absorbance of the pre-selected transitions on a CH4/air flat flame burner, is realized by online Voigt fitting, with fitting residuals below 0.2%. By processing the projection data from the different views with the FBP algorithm, the radial distributions of temperature and concentration can be obtained immediately. The results demonstrate that the system and the proposed FBP algorithm are capable of accurately reconstructing axisymmetric temperature and H2O concentration distributions in combustion systems and facilities.

  19. A hyperspectral imagery anomaly detection algorithm based on local three-dimensional orthogonal subspace projection

    NASA Astrophysics Data System (ADS)

    Zhang, Xing; Wen, Gongjian

    2015-10-01

Anomaly detection (AD) is becoming increasingly important in hyperspectral imagery analysis, with many practical applications. The local orthogonal subspace projection (LOSP) detector is a popular anomaly detector which exploits local endmembers/eigenvectors around the pixel under test (PUT) to construct a background subspace. However, this subspace only takes advantage of the spectral information, while the spatial correlation of the background clutter is neglected, which makes the anomaly detection result sensitive to the accuracy of the estimated subspace. In this paper, a local three-dimensional orthogonal subspace projection (3D-LOSP) algorithm is proposed. Firstly, through the joint use of both spectral and spatial information, three directional background subspaces are created along the image height direction, the image width direction, and the spectral direction, respectively. Then, the three corresponding orthogonal subspaces are calculated. After that, each vector of the local cube along the three directions is projected onto the corresponding orthogonal subspace. Finally, a composite score is given by combining the three directional operators. In 3D-LOSP, anomalies are redefined as targets that are not only spectrally different from the background but also spatially distinct. Thanks to the addition of spatial information, the robustness of the anomaly detection result is greatly improved by the proposed 3D-LOSP algorithm. It is noteworthy that the proposed algorithm is an extension of LOSP, and this idea can inspire many other spectral-based anomaly detection methods. Experiments with real hyperspectral images have demonstrated the stability of the detection results.
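    As a minimal numerical illustration of the orthogonal subspace projection idea that 3D-LOSP builds on (not the 3D algorithm itself), this sketch projects a pixel spectrum onto the complement of a background subspace B and scores the residual energy; dimensions and data are synthetic.

    ```python
    import numpy as np

    def osp_score(x, B):
        """Anomaly score: energy of x after projecting out the background
        subspace spanned by the columns of B, using P = I - B B^+."""
        P = np.eye(B.shape[0]) - B @ np.linalg.pinv(B)
        r = P @ x
        return float(r @ r)

    rng = np.random.default_rng(0)
    B = rng.normal(size=(100, 5))                        # 5 background vectors, 100 bands
    background_pix = B @ rng.normal(size=5)              # lies in the subspace
    anomaly_pix = background_pix + rng.normal(size=100)  # has an off-subspace part

    print(osp_score(background_pix, B))                  # ~0
    print(osp_score(anomaly_pix, B))                     # clearly larger
    ```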

  20. ASC-ATDM Performance Portability Requirements for 2015-2019

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, Harold C.; Trott, Christian Robert

This report outlines the research, development, and support requirements for the Advanced Simulation and Computing (ASC) Advanced Technology, Development, and Mitigation (ATDM) Performance Portability (a.k.a. Kokkos) project for 2015-2019. The research and development (R&D) goal for Kokkos (v2) has been to create and demonstrate a thread-parallel programming model and standard C++ library-based implementation that enables performance portability across diverse manycore architectures such as multicore CPU, Intel Xeon Phi, and NVIDIA Kepler GPU. This R&D goal has been achieved for algorithms that use data parallel patterns including parallel-for, parallel-reduce, and parallel-scan. Current R&D is focusing on hierarchical parallel patterns such as a directed acyclic graph (DAG) of asynchronous tasks where each task contains nested data parallel algorithms. This five-year plan includes the R&D required to fully and performance-portably exploit thread parallelism across current and anticipated next generation platforms (NGP). The Kokkos library is being evaluated by many projects exploring algorithms and code design for NGP. Some production libraries and applications such as Trilinos and LAMMPS have already committed to Kokkos as their foundation for manycore parallelism and performance portability. These five-year requirements include the support required for current and anticipated ASC projects to be effective and productive in their use of Kokkos on NGP. The greatest risk to the success of Kokkos and the ASC projects relying upon it is a lack of staffing resources to support Kokkos to the degree needed by these projects. This support includes up-to-date tutorials, documentation, multi-platform (hardware and software stack) testing, minor feature enhancements, thread-scalable algorithm consulting, and managing collaborative R&D.

  1. VHBuild.com: A Web-Based System for Managing Knowledge in Projects.

    ERIC Educational Resources Information Center

    Li, Heng; Tang, Sandy; Man, K. F.; Love, Peter E. D.

    2002-01-01

    Describes an intelligent Web-based construction project management system called VHBuild.com which integrates project management, knowledge management, and artificial intelligence technologies. Highlights include an information flow model; time-cost optimization based on genetic algorithms; rule-based drawing interpretation; and a case-based…

  2. Fast Algorithms for Estimating Mixture Parameters

    DTIC Science & Technology

    1989-08-30

The investigation is a two-year project, with the first year sponsored by the Army Research Office and the second year by the National Science Foundation. Numerical testing of the accelerated fixed-point method was completed; the work on relaxation methods will be done under the sponsorship of the National Science Foundation during the coming year. Keywords: fast algorithms; mixture distributions; random variables.
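    The report's accelerated fixed-point scheme is not spelled out in this excerpt. For orientation only, the classical EM fixed-point update for a two-component Gaussian mixture, which such accelerations build on, looks like this (a sketch, not the report's method):

    ```python
    import numpy as np

    def em_gauss2(x, n_iter=200):
        """Plain EM fixed-point iteration for a 2-component 1-D Gaussian mixture."""
        w = np.array([0.5, 0.5])
        mu = np.array([x.min(), x.max()])          # well-separated starting means
        sig = np.array([1.0, 1.0])
        for _ in range(n_iter):
            # E-step: responsibilities of each component for each point
            pdf = np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
            r = w * pdf
            r /= r.sum(axis=1, keepdims=True)
            # M-step: fixed-point parameter update
            nk = r.sum(axis=0)
            w = nk / len(x)
            mu = (r * x[:, None]).sum(axis=0) / nk
            sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        return w, mu, sig

    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 0.5, 500)])
    print(em_gauss2(x))   # recovers weights ~0.5/0.5, means ~-2/3, sigmas ~1/0.5
    ```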

  3. Heterogeneous Vision Data Fusion for Independently Moving Cameras

    DTIC Science & Technology

    2010-03-01

The project addresses moving target detection, tracking, and identification over a large terrain. The goal of the project is to investigate and evaluate existing image fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms to moving target detection, tracking, and classification. Subject terms: image fusion, target detection, moving cameras, IR camera, EO camera.

  4. Reduce beam hardening artifacts of polychromatic X-ray computed tomography by an iterative approximation approach.

    PubMed

    Shi, Hongli; Yang, Zhi; Luo, Shuqian

    2017-01-01

The beam hardening artifact is one of the most important types of metal artifact in polychromatic X-ray computed tomography (CT) and can seriously impair image quality. An iterative approach is proposed to reduce the beam hardening artifact caused by metallic components in polychromatic X-ray CT. According to the Lambert-Beer law, the (detected) projections can be expressed as monotonic nonlinear functions of the element geometry projections, which are the theoretical projections produced only by the pixel intensities (image grayscale) of a certain element (component). With the help of prior knowledge of the spectral distribution of the X-ray beam source and the energy-dependent attenuation coefficients, the functions have explicit expressions. The Newton-Raphson algorithm is employed to solve the functions. The solutions are named the synthetical geometry projections, which are nearly linear weighted sums of the element geometry projections with respect to the mean of each attenuation coefficient. In this process, the attenuation coefficients are modified to make the Newton-Raphson iterative functions satisfy the convergence conditions of fixed-point iteration (FPI), so that the solutions approach the true synthetical geometry projections stably. The underlying images are obtained from the projections by general reconstruction algorithms such as filtered back projection (FBP). The image gray values are adjusted according to the attenuation coefficient means to obtain proper CT numbers. Several examples demonstrate that the proposed approach is efficient in reducing beam hardening artifacts and has satisfactory performance in terms of several general criteria. In a simulation example, the normalized root mean square difference (NRMSD) is reduced by 17.52% compared to a recent algorithm. Since the element geometry projections are free from the effect of beam hardening, their nearly linear weighted sums, the synthetical geometry projections, are almost free from the effect of beam hardening as well. By working out the synthetical geometry projections, the proposed approach becomes quite efficient in reducing beam hardening artifacts.
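    The full polychromatic model requires the measured source spectrum. As a toy stand-in, the snippet below inverts a monotonic Lambert-Beer-style measurement p = -ln(sum_i s_i exp(-mu_i t)) for the path length t by Newton-Raphson, the same per-ray root-finding step the approach relies on; the spectrum weights and coefficients are made up.

    ```python
    import numpy as np

    # hypothetical 3-bin spectrum and energy-dependent attenuation coefficients
    s = np.array([0.3, 0.5, 0.2])        # normalized source weights
    mu = np.array([0.9, 0.5, 0.3])       # attenuation per unit length in each bin

    def forward(t):
        """Polychromatic projection value for path length t (Lambert-Beer)."""
        return -np.log(np.sum(s * np.exp(-mu * t)))

    def solve_t(p, t0=0.0, tol=1e-10):
        """Newton-Raphson inversion of the monotonic function forward(t) = p."""
        t = t0
        for _ in range(100):
            I = s * np.exp(-mu * t)
            f = -np.log(I.sum()) - p
            df = (mu * I).sum() / I.sum()      # derivative of forward(t)
            t_new = t - f / df
            if abs(t_new - t) < tol:
                return t_new
            t = t_new
        return t

    p_meas = forward(2.5)
    print(solve_t(p_meas))                     # recovers ~2.5
    ```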

  5. Accelerated Optical Projection Tomography Applied to In Vivo Imaging of Zebrafish

    PubMed Central

    Correia, Teresa; Yin, Jun; Ramel, Marie-Christine; Andrews, Natalie; Katan, Matilda; Bugeon, Laurence; Dallman, Margaret J.; McGinty, James; Frankel, Paul; French, Paul M. W.; Arridge, Simon

    2015-01-01

    Optical projection tomography (OPT) provides a non-invasive 3-D imaging modality that can be applied to longitudinal studies of live disease models, including in zebrafish. Current limitations include the requirement of a minimum number of angular projections for reconstruction of reasonable OPT images using filtered back projection (FBP), which is typically several hundred, leading to acquisition times of several minutes. It is highly desirable to decrease the number of required angular projections to decrease both the total acquisition time and the light dose to the sample. This is particularly important to enable longitudinal studies, which involve measurements of the same fish at different time points. In this work, we demonstrate that the use of an iterative algorithm to reconstruct sparsely sampled OPT data sets can provide useful 3-D images with 50 or fewer projections, thereby significantly decreasing the minimum acquisition time and light dose while maintaining image quality. A transgenic zebrafish embryo with fluorescent labelling of the vasculature was imaged to acquire densely sampled (800 projections) and under-sampled data sets of transmitted and fluorescence projection images. The under-sampled OPT data sets were reconstructed using an iterative total variation-based image reconstruction algorithm and compared against FBP reconstructions of the densely sampled data sets. To illustrate the potential for quantitative analysis following rapid OPT data acquisition, a Hessian-based method was applied to automatically segment the reconstructed images to select the vasculature network. Results showed that 3-D images of the zebrafish embryo and its vasculature of sufficient visual quality for quantitative analysis can be reconstructed using the iterative algorithm from only 32 projections—achieving up to 28 times improvement in imaging speed and leading to total acquisition times of a few seconds. PMID:26308086
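    The study's algorithm and data are not included in this record. A generic sparse-view loop in the same spirit alternates a SIRT-style data-consistency step with total-variation denoising; the sketch below uses scikit-image utilities on a Shepp-Logan phantom, and every parameter (32 views, 20 iterations, TV weight) is an illustrative assumption.

    ```python
    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.restoration import denoise_tv_chambolle
    from skimage.transform import iradon, radon, resize

    # mask for the reconstruction circle (radon/iradon operate inside it)
    yy, xx = np.mgrid[:128, :128]
    disc = (((xx - 64) ** 2 + (yy - 64) ** 2) <= 63 ** 2).astype(float)

    img = resize(shepp_logan_phantom(), (128, 128)) * disc
    theta = np.linspace(0.0, 180.0, 32, endpoint=False)       # sparse: 32 views only
    sino = radon(img, theta=theta)

    bp = lambda s: iradon(s, theta=theta, filter_name=None)   # unfiltered back projection
    row = np.maximum(radon(disc, theta=theta), 1e-6)          # ray lengths (row sums)
    col = np.maximum(bp(np.ones_like(sino)), 1e-6)            # back projected ones (col sums)

    x = np.zeros_like(img)
    for _ in range(20):
        x = x + bp((sino - radon(x, theta=theta)) / row) / col       # SIRT-style step
        x = denoise_tv_chambolle(np.clip(x, 0.0, None), weight=0.02) * disc  # TV prior
    print(np.abs(x - img).mean())                             # residual reconstruction error
    ```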

  6. Image recombination transform algorithm for superresolution structured illumination microscopy

    PubMed Central

    Zhou, Xing; Lei, Ming; Dan, Dan; Yao, Baoli; Yang, Yanlong; Qian, Jia; Chen, Guangde; Bianco, Piero R.

    2016-01-01

Structured illumination microscopy (SIM) is an attractive choice for fast superresolution imaging. Generating structured illumination patterns by interference of laser beams is broadly employed to obtain a high modulation depth, but the polarizations of the laser beams must be elaborately controlled to guarantee high-contrast interference intensity, which requires a more complex polarization-control configuration. The emerging pattern projection strategy is much more compact, but the modulation depth of the patterns is deteriorated by the optical transfer function of the optical system, especially at high spatial frequencies near the diffraction limit. Therefore, the traditional superresolution reconstruction algorithm for interference-based SIM will suffer from many artifacts in the case of projection-based SIM, which has a low modulation depth. Here, we propose an alternative reconstruction algorithm based on image recombination transform, which provides a solution to this problem even at weak modulation depth. We demonstrated the effectiveness of this algorithm in multicolor superresolution imaging of bovine pulmonary arterial endothelial cells in our projection-based SIM system, which uses a computer-controlled digital micromirror device for fast fringe generation and multicolor light-emitting diodes for illumination. The system, combined with the proposed algorithm, allows fluorescence imaging at excitation intensities even below 1 W/cm2, which is beneficial for long-term, in vivo superresolved imaging of live cells and tissues. PMID:27653935

  7. The finite state projection algorithm for the solution of the chemical master equation.

    PubMed

    Munsky, Brian; Khammash, Mustafa

    2006-01-28

    This article introduces the finite state projection (FSP) method for use in the stochastic analysis of chemically reacting systems. One can describe the chemical populations of such systems with probability density vectors that evolve according to a set of linear ordinary differential equations known as the chemical master equation (CME). Unlike Monte Carlo methods such as the stochastic simulation algorithm (SSA) or tau leaping, the FSP directly solves or approximates the solution of the CME. If the CME describes a system that has a finite number of distinct population vectors, the FSP method provides an exact analytical solution. When an infinite or extremely large number of population variations is possible, the state space can be truncated, and the FSP method provides a certificate of accuracy for how closely the truncated space approximation matches the true solution. The proposed FSP algorithm systematically increases the projection space in order to meet prespecified tolerance in the total probability density error. For any system in which a sufficiently accurate FSP exists, the FSP algorithm is shown to converge in a finite number of steps. The FSP is utilized to solve two examples taken from the field of systems biology, and comparisons are made between the FSP, the SSA, and tau leaping algorithms. In both examples, the FSP outperforms the SSA in terms of accuracy as well as computational efficiency. Furthermore, due to very small molecular counts in these particular examples, the FSP also performs far more effectively than tau leaping methods.
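    A concrete toy instance of the FSP idea (not one of the paper's examples): truncate the CME of a birth-death process to a finite block of states, exponentiate the truncated generator, and read the probability mass lost to truncation as the accuracy certificate. The rates and the truncation size below are arbitrary.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # birth-death process: production at rate k, degradation at rate g*n
    k, g, N = 10.0, 1.0, 40          # N = truncation size (the FSP projection)
    A = np.zeros((N, N))
    for n in range(N):
        if n + 1 < N:
            A[n + 1, n] += k         # birth n -> n+1 (kept inside the projection)
        A[n, n] -= k                 # outflow, including flow leaving the truncation
        if n > 0:
            A[n - 1, n] += g * n     # death n -> n-1
            A[n, n] -= g * n

    p0 = np.zeros(N)
    p0[0] = 1.0                      # start with zero molecules
    p = expm(A * 2.0) @ p0           # solve the truncated CME up to t = 2

    print("certificate (mass lost to truncation):", 1.0 - p.sum())
    print("mean copy number:", (np.arange(N) * p).sum())   # ~k/g = 10 near steady state
    ```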

  8. Evaluating low pass filters on SPECT reconstructed cardiac orientation estimation

    NASA Astrophysics Data System (ADS)

    Dwivedi, Shekhar

    2009-02-01

Low pass filters can affect the quality of clinical SPECT images by smoothing. Appropriate filter and parameter selection leads to optimum smoothing, which in turn leads to better quantification followed by correct diagnosis and accurate interpretation by the physician. This study aims at evaluating low pass filters for SPECT reconstruction algorithms. The criterion for evaluating the filters is the estimation of the SPECT-reconstructed cardiac azimuth and elevation angles. The low pass filters studied are Butterworth, Gaussian, Hamming, Hanning, and Parzen. Experiments are conducted using three reconstruction algorithms, FBP (filtered back projection), MLEM (maximum likelihood expectation maximization) and OSEM (ordered subsets expectation maximization), on four gated cardiac patient projections (two patients, each with stress and rest projections). Each filter is applied with varying cutoff and order for each reconstruction algorithm (only Butterworth is used for MLEM and OSEM). The azimuth and elevation angles are calculated from the reconstructed volume, and the variation observed in the angles with varying filter parameters is reported. Our results demonstrate that the behavior of the Hamming, Hanning, and Parzen filters (used with FBP) with varying cutoff is similar for all the datasets. The Butterworth filter (cutoff > 0.4) behaves in a similar fashion for all the datasets using all the algorithms, whereas with OSEM, for a cutoff < 0.4, it fails to generate the cardiac orientation due to oversmoothing and gives an unstable response with FBP and MLEM. This study of the effect of low pass filter cutoff and order on cardiac orientation using three different reconstruction algorithms provides an interesting insight into the optimal selection of filter parameters.
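    For reference, the Butterworth window used in such studies is |H(f)| = 1/sqrt(1 + (f/f_c)^(2n)). The sketch below applies it to a single noisy projection row in frequency space; the cutoff (in cycles/pixel, Nyquist = 0.5) and order are illustrative.

    ```python
    import numpy as np

    def butterworth_lowpass(signal, cutoff=0.4, order=5):
        """Apply a Butterworth low-pass window to a 1-D projection row.
        cutoff is in cycles/pixel (Nyquist = 0.5); order controls the roll-off."""
        f = np.fft.rfftfreq(signal.size)                 # 0 .. 0.5 cycles/pixel
        H = 1.0 / np.sqrt(1.0 + (f / cutoff) ** (2 * order))
        return np.fft.irfft(np.fft.rfft(signal) * H, n=signal.size)

    rng = np.random.default_rng(0)
    row = np.sin(np.linspace(0, 4 * np.pi, 256)) + 0.3 * rng.normal(size=256)
    smooth = butterworth_lowpass(row, cutoff=0.2, order=5)
    print(row.std(), smooth.std())                       # noise is suppressed
    ```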

  9. Combining model based and data based techniques in a robust bridge health monitoring algorithm.

    DOT National Transportation Integrated Search

    2014-09-01

    Structural Health Monitoring (SHM) aims to analyze civil, mechanical and aerospace systems in order to assess : incipient damage occurrence. In this project, we are concerned with the development of an algorithm within the : SHM paradigm for applicat...

  10. Greenhouse gas observations from space: The GHG-CCI project of ESA's Climate Change Initiative

    NASA Astrophysics Data System (ADS)

    Buchwitz, Michael; Noël, Stefan; Bergamaschi, Peter; Boesch, Hartmut; Bovensmann, Heinrich; Notholt, Justus; Schneising, Oliver; Hasekamp, Otto; Reuter, Maximilian; Parker, Robert; Dils, Bart; Chevallier, Frederic; Zehner, Claus; Burrows, John

    2012-07-01

The GHG-CCI project (http://www.esa-ghg-cci.org) is one of several projects of ESA's Climate Change Initiative (CCI), which will deliver various Essential Climate Variables (ECVs). The goal of GHG-CCI is to deliver global satellite-derived data sets of the two most important anthropogenic greenhouse gases (GHGs), carbon dioxide (CO2) and methane (CH4), suitable to obtain information on regional CO2 and CH4 surface sources and sinks as needed for better climate prediction. The GHG-CCI core ECV data products are column-averaged mole fractions of CO2 and CH4, XCO2 and XCH4, retrieved from SCIAMACHY on ENVISAT and TANSO on GOSAT. Other satellite instruments, such as IASI, MIPAS, and ACE-FTS, will be used to provide constraints in upper layers. It still needs to be determined which of the advanced algorithms under development will be the best for a given data product. For each of the 4 GHG-CCI core data products - XCO2 and XCH4 from SCIAMACHY and GOSAT - several algorithms are being further developed and the corresponding data products are inter-compared to identify which data product is the most appropriate. This includes comparisons with corresponding data products generated elsewhere, most notably with the operational data products of GOSAT generated at NIES and the NASA/ACOS GOSAT XCO2 product. This activity, the so-called "Round Robin exercise", will be performed in the first two years of this project. At the end of the 2-year Round Robin phase (end of August 2012) a decision will be made as to which of the algorithms performs best. The selected algorithms will be used to generate the first version of the ECV GHG. In the last six months of this 3-year project the resulting data products will be validated and made available to all interested users. In the presentation, an overview of this project will be given, focusing on the latest results.

  11. Greenhouse Gas CCI Project (GHG-CCI): Overview and current status

    NASA Astrophysics Data System (ADS)

    Buchwitz, M.; Burrows, J. P.; Reuter, M.; Schneising, O.; Noel, S.; Bovensmann, H.; Notholt, J.; Boesch, H.; Parker, R.; Hasekamp, O. P.; Guerlet, S.; Aben, I.; Lichtenberg, G.; Crevoisier, C. D.; Chedin, A.; Stiller, G. P.; Laeng, A.; Butz, A.; Blumenstock, T.; Orphal, J.; Sussmann, R.; De Maziere, M. M.; Dils, B.; Brunner, D.; Popp, C. T.; Buchmann, B.; Chevallier, F.; Bergamaschi, P. M.; Frankenberg, C.; Zehner, C.

    2011-12-01

The GHG-CCI project is one of several projects of ESA's Climate Change Initiative (CCI), which will deliver various Essential Climate Variables (ECVs). The goal of GHG-CCI is to deliver global satellite-derived data sets of the two most important anthropogenic greenhouse gases (GHGs), carbon dioxide (CO2) and methane (CH4), suitable to obtain information on regional CO2 and CH4 surface sources and sinks as needed for better climate prediction. The GHG-CCI core ECV data products are column-averaged mole fractions of CO2 and CH4, i.e., XCO2 and XCH4, retrieved from SCIAMACHY on ENVISAT and TANSO on GOSAT. Other satellite instruments, such as IASI, MIPAS, and ACE-FTS, will be used to provide constraints in upper layers. It still needs to be determined which of the advanced algorithms under development will be the best for a given data product. For each of the 4 GHG-CCI core data products - XCO2 and XCH4 from SCIAMACHY and GOSAT - several algorithms will be further developed and the corresponding data products will be inter-compared to identify which data product is the most appropriate. This includes comparisons with corresponding data products generated elsewhere, most notably with the operational data products of GOSAT generated at NIES and the NASA/ACOS GOSAT XCO2 product. This activity, the so-called "Round Robin exercise", will be performed in the first two years of this project. At the end of the 2-year Round Robin phase a decision will be made as to which of the algorithms performs best. The selected algorithms will be used to generate the first version of the ECV GHG. In the last six months of this 3-year project the resulting data products will be validated and made available to all interested users. In the presentation, an overview of this project will be given. Focus will be on a discussion and intercomparison of the various data products, with emphasis on CO2.

  12. The GHG-CCI Project to Deliver the Essential Climate Variable Greenhouse Gases: Current status

    NASA Astrophysics Data System (ADS)

    Buchwitz, M.; Boesch, H.; Reuter, M.

    2012-04-01

The GHG-CCI project (http://www.esa-ghg-cci.org) is one of several projects of ESA's Climate Change Initiative (CCI), which will deliver various Essential Climate Variables (ECVs). The goal of GHG-CCI is to deliver global satellite-derived data sets of the two most important anthropogenic greenhouse gases (GHGs), carbon dioxide (CO2) and methane (CH4), suitable to obtain information on regional CO2 and CH4 surface sources and sinks as needed for better climate prediction. The GHG-CCI core ECV data products are column-averaged mole fractions of CO2 and CH4, XCO2 and XCH4, retrieved from SCIAMACHY on ENVISAT and TANSO on GOSAT. Other satellite instruments, such as IASI, MIPAS, and ACE-FTS, will be used to provide constraints in upper layers. It still needs to be determined which of the advanced algorithms under development will be the best for a given data product. For each of the 4 GHG-CCI core data products - XCO2 and XCH4 from SCIAMACHY and GOSAT - several algorithms are being further developed and the corresponding data products are inter-compared to identify which data product is the most appropriate. This includes comparisons with corresponding data products generated elsewhere, most notably with the operational data products of GOSAT generated at NIES and the NASA/ACOS GOSAT XCO2 product. This activity, the so-called "Round Robin exercise", will be performed in the first two years of this project. At the end of the 2-year Round Robin phase (end of August 2012) a decision will be made as to which of the algorithms performs best. The selected algorithms will be used to generate the first version of the ECV GHG. In the last six months of this 3-year project the resulting data products will be validated and made available to all interested users. In the presentation, an overview of this project will be given, focusing on the latest results.

  13. Exact rebinning methods for three-dimensional PET.

    PubMed

    Liu, X; Defrise, M; Michel, C; Sibomana, M; Comtat, C; Kinahan, P; Townsend, D

    1999-08-01

The high computational cost of data processing in volume PET imaging is still hindering the routine application of this successful technique, especially in the case of dynamic studies. This paper describes two new algorithms based on an exact rebinning equation, which can be applied to accelerate the processing of three-dimensional (3-D) PET data. The first algorithm, FOREPROJ, is a fast-forward projection algorithm that allows calculation of the 3-D attenuation correction factors (ACFs) directly from a two-dimensional (2-D) transmission scan, without first reconstructing the attenuation map and then performing a 3-D forward projection. The use of FOREPROJ speeds up the estimation of the 3-D ACFs by more than a factor of five. The second algorithm, FOREX, is a rebinning algorithm that is also more than five times faster than the standard reprojection algorithm (3DRP) and does not suffer from the image distortions generated by the even faster approximate Fourier rebinning (FORE) method at large axial apertures. However, FOREX is probably not required by most existing scanners, as their axial apertures are not large enough to show improvements over FORE with clinical data. Both algorithms have been implemented and applied to data simulated for a scanner with a large axial aperture (30 degrees), and also to data acquired with the ECAT HR and the ECAT HR+ scanners. Results demonstrate the excellent accuracy achieved by these algorithms and the important speedup when the sinogram sizes are powers of two.

  14. Derived crop management data for the LandCarbon Project

    USGS Publications Warehouse

    Schmidt, Gail; Liu, Shu-Guang; Oeding, Jennifer

    2011-01-01

The LandCarbon project is assessing potential carbon pools and greenhouse gas fluxes under various scenarios and land management regimes to provide information to support the formulation of policies governing climate change mitigation, adaptation and land management strategies. The project is unique in that spatially explicit maps of annual land cover and land-use change are created at the 250-meter pixel resolution. The project uses vast amounts of data as input to the models, including satellite, climate, land cover, soil, and land management data. Management data have been obtained from the U.S. Department of Agriculture (USDA) National Agricultural Statistics Service (NASS) and USDA Economic Research Service (ERS), which provide information regarding crop type, crop harvesting, manure, fertilizer, tillage, and cover crop (U.S. Department of Agriculture, 2011a, b, c). The LandCarbon team queried the USDA databases to pull historic crop-related management data relative to the needs of the project. The data obtained were in table form with the County or State Federal Information Processing Standard (FIPS) and the year as the primary and secondary keys. Future projections were generated for the A1B, A2, B1, and B2 Intergovernmental Panel on Climate Change (IPCC) Special Report on Emissions Scenarios (SRES) scenarios using the historic data values along with coefficients generated by the project. The PBL Netherlands Environmental Assessment Agency (PBL) Integrated Model to Assess the Global Environment (IMAGE) modeling framework (Integrated Model to Assess the Global Environment, 2006) was used to develop coefficients for each IPCC SRES scenario, which were applied to the historic management data to produce future land management practice projections. The LandCarbon project developed algorithms for deriving gridded data, using these tabular management data products as input. The derived gridded crop type, crop harvesting, manure, fertilizer, tillage, and cover crop products are used as input to the LandCarbon models to represent the historic and the future scenario management data. The overall algorithm to generate each of the gridded management products is based on the land cover and the derived crop type. For each year in the land cover dataset, the algorithm loops through each 250-meter pixel in the ecoregion. If the current pixel in the land cover dataset is an agriculture pixel, then the crop type is determined. Once the crop type is derived, then the crop harvest, manure, fertilizer, tillage, and cover crop values are derived independently for that crop type. The following is the overall algorithm used for the set of derived grids. The specific algorithm to generate each management dataset is discussed in the respective section for that dataset, along with special data handling and a description of the output product.

  15. An electron tomography algorithm for reconstructing 3D morphology using surface tangents of projected scattering interfaces

    NASA Astrophysics Data System (ADS)

    Petersen, T. C.; Ringer, S. P.

    2010-03-01

Upon discerning the mere shape of an imaged object, as portrayed by projected perimeters, the full three-dimensional scattering density may not be of particular interest. In this situation considerable simplifications to the reconstruction problem are possible, allowing calculations based upon geometric principles. Here we describe and provide an algorithm which reconstructs the three-dimensional morphology of specimens from tilt series of images for application to electron tomography. Our algorithm uses a differential approach to infer the intersection of projected tangent lines with surfaces which define boundaries between regions of different scattering densities within and around the perimeters of specimens. Details of the algorithm implementation are given and explained using reconstruction calculations from simulations, which are built into the code. An experimental application of the algorithm to a nano-sized Aluminium tip is also presented to demonstrate practical analysis for a real specimen.
    Program summary:
    Program title: STOMO version 1.0
    Catalogue identifier: AEFS_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFS_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 2988
    No. of bytes in distributed program, including test data, etc.: 191 605
    Distribution format: tar.gz
    Programming language: C/C++
    Computer: PC
    Operating system: Windows XP
    RAM: Depends upon the size of experimental data as input, ranging from 200 Mb to 1.5 Gb
    Supplementary material: Sample output files, for the test run provided, are available.
    Classification: 7.4, 14
    External routines: Dev-C++ (http://www.bloodshed.net/devcpp.html)
    Nature of problem: Electron tomography of specimens for which conventional back projection may fail and/or data for which there is a limited angular range. The algorithm does not solve the tomographic back-projection problem but rather reconstructs the local 3D morphology of surfaces defined by varied scattering densities.
    Solution method: Reconstruction using differential geometry applied to image analysis computations.
    Restrictions: The code has only been tested with square images and has been developed for only single-axis tilting.
    Running time: For high quality reconstruction, 5-15 min

  16. Fixed-point image orthorectification algorithms for reduced computational cost

    NASA Astrophysics Data System (ADS)

    French, Joseph Clinton

Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to floating point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. One modification is projection utilizing fixed-point arithmetic. Fixed-point arithmetic removes the floating point operations and reduces the processing time by operating only on integers. The second modification is replacement of the division inherent in projection with a multiplication by the inverse. The inverse must be computed iteratively; therefore, the inverse is replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing, and by over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse function approximation is also developed that replaces the linear approximation with a quadratic. The quadratic approximation produces a more accurate approximation of the inverse, allowing an integer multiplication calculation to be used in place of the traditional floating point division. This method increases the throughput of the orthorectification operation by 38% when compared to floating point processing. Additionally, this method improves the accuracy of the existing integer-based orthorectification algorithms in terms of average pixel distance, increasing the accuracy of the algorithm by more than 5x. The quadratic function reduces the pixel position error to 2% and is still 2.8x faster than the 128-bit floating point algorithm.
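    The dissertation's exact coefficients and word lengths are not given in the abstract. The toy below conveys the gist of replacing a division with a multiplication: fit a quadratic polynomial to 1/z on the expected operand range, then evaluate it with integer multiplies and shifts in Q16 fixed point (the scale factor and fit range are assumptions).

    ```python
    import numpy as np

    SHIFT = 16                      # Q16 fixed-point format (an assumption)
    SCALE = 1 << SHIFT

    # fit 1/z on the expected range of the projection denominator, e.g. z in [1, 2]
    z = np.linspace(1.0, 2.0, 1000)
    c2, c1, c0 = np.polyfit(z, 1.0 / z, 2)          # quadratic approximation of 1/z

    # pre-scaled integer coefficients
    C2, C1, C0 = (int(round(c * SCALE)) for c in (c2, c1, c0))

    def fixed_point_reciprocal(z_fp):
        """Approximate 1/z in Q16 using only integer multiplies and shifts
        (Horner evaluation of c2*z**2 + c1*z + c0); z_fp is z in Q16."""
        acc = (C2 * z_fp) >> SHIFT                  # c2*z
        acc = ((acc + C1) * z_fp) >> SHIFT          # (c2*z + c1)*z
        return acc + C0                             # + c0

    z_val = 1.37
    z_fp = int(round(z_val * SCALE))
    approx = fixed_point_reciprocal(z_fp) / SCALE
    print(approx, 1.0 / z_val)                      # close, with no division in the loop
    ```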

  17. Correcting Satellite Image Derived Surface Model for Atmospheric Effects

    NASA Technical Reports Server (NTRS)

    Emery, William; Baldwin, Daniel

    1998-01-01

This project was a continuation of the project entitled "Resolution Earth Surface Features from Repeat Moderate Resolution Satellite Imagery". In the previous study, a Bayesian Maximum Posterior Estimate (BMPE) algorithm was used to obtain a composite series of repeat imagery from the Advanced Very High Resolution Radiometer (AVHRR). The spatial resolution of the resulting composite was significantly greater than the 1 km resolution of the individual AVHRR images. The BMPE algorithm utilized a simple, no-atmosphere geometrical model for the short-wave radiation budget at the Earth's surface. A necessary assumption of the algorithm is that all non-geometrical parameters remain static over the compositing period. This assumption is of course violated by temporal variations in both the surface albedo and the atmospheric medium. The effect of the albedo variations is expected to be minimal since the variations are on a fairly long time scale compared to the compositing period; however, the atmospheric variability occurs on a relatively short time scale and can be expected to cause significant errors in the surface reconstruction. The current project proposed to incorporate an atmospheric correction into the BMPE algorithm for the purpose of investigating the effects of a variable atmosphere on the surface reconstructions. Once the atmospheric effects were determined, the investigation could be extended to include corrections for various cloud effects, including short-wave radiation through thin cirrus clouds. The original proposal was written for a three year project, funded one year at a time. The first year of the project focused on developing an understanding of atmospheric corrections and choosing an appropriate correction model. Several models were considered and the list was narrowed to the two best suited. These were the 5S and 6S shortwave radiation models developed at NASA/GODDARD and tested extensively with data from the AVHRR instrument. Although the 6S model was a successor to the 5S and slightly more advanced, the 5S was selected because outputs from the individual components comprising the short-wave radiation budget were more easily separated. The separation was necessary since both the 5S and 6S did not include geometrical corrections for terrain, a fundamental constituent of the BMPE algorithm. The 5S correction code was incorporated into the BMPE algorithm and many sensitivity studies were performed.

  18. Evaluation of noise and blur effects with SIRT-FISTA-TV reconstruction algorithm: Application to fast environmental transmission electron tomography.

    PubMed

    Banjak, Hussein; Grenier, Thomas; Epicier, Thierry; Koneti, Siddardha; Roiban, Lucian; Gay, Anne-Sophie; Magnin, Isabelle; Peyrin, Françoise; Maxim, Voichita

    2018-06-01

Fast tomography in Environmental Transmission Electron Microscopy (ETEM) is of great interest for in situ experiments, where it allows observation of the real-time 3D evolution of nanomaterials under operating conditions. In this context, we are working on speeding up the acquisition step to a few seconds, mainly with applications on nanocatalysts. In order to accomplish such rapid acquisitions of the required tilt series of projections, a modern 4K high-speed camera is used, which can capture up to 100 images per second in a 2K binning mode. However, due to the fast rotation of the sample during the tilt procedure, noise and blur effects may occur in many projections, which in turn would lead to poor quality reconstructions. Blurred projections make classical reconstruction algorithms inappropriate and require the use of prior information. In this work, a regularized algebraic reconstruction algorithm named SIRT-FISTA-TV is proposed. The performance of this algorithm using blurred data is studied by means of a numerical blur introduced into simulated image series to mimic possible mechanical instabilities/drifts during fast acquisitions. We also present reconstruction results from noisy data to show the robustness of the algorithm to noise. Finally, we show reconstructions with experimental datasets and demonstrate the value of fast tomography with an ultra-fast acquisition performed under environmental conditions, i.e., gas and temperature, in the ETEM. Compared to the classically used SIRT and SART approaches, our proposed SIRT-FISTA-TV reconstruction algorithm provides higher quality tomograms, allowing easier segmentation of the reconstructed volume for better final processing and analysis.

  19. We get the algorithms of our ground truths: Designing referential databases in digital image processing

    PubMed Central

    Jaton, Florian

    2017-01-01

    This article documents the practical efforts of a group of scientists designing an image-processing algorithm for saliency detection. By following the actors of this computer science project, the article shows that the problems often considered to be the starting points of computational models are in fact provisional results of time-consuming, collective and highly material processes that engage habits, desires, skills and values. In the project being studied, problematization processes lead to the constitution of referential databases called ‘ground truths’ that enable both the effective shaping of algorithms and the evaluation of their performances. Working as important common touchstones for research communities in image processing, the ground truths are inherited from prior problematization processes and may be imparted to subsequent ones. The ethnographic results of this study suggest two complementary analytical perspectives on algorithms: (1) an ‘axiomatic’ perspective that understands algorithms as sets of instructions designed to solve given problems computationally in the best possible way, and (2) a ‘problem-oriented’ perspective that understands algorithms as sets of instructions designed to computationally retrieve outputs designed and designated during specific problematization processes. If the axiomatic perspective on algorithms puts the emphasis on the numerical transformations of inputs into outputs, the problem-oriented perspective puts the emphasis on the definition of both inputs and outputs. PMID:28950802

  20. An improved ASIFT algorithm for indoor panorama image matching

    NASA Astrophysics Data System (ADS)

    Fu, Han; Xie, Donghai; Zhong, Ruofei; Wu, Yu; Wu, Qiong

    2017-07-01

The generation of 3D models for indoor objects and scenes is an attractive tool for digital city, virtual reality and SLAM purposes. Panoramic images are becoming increasingly common in such applications due to their ability to capture the complete environment in one single image with a large field of view. The extraction and matching of image feature points are important and difficult steps in three-dimensional reconstruction, and ASIFT is a state-of-the-art algorithm to implement these functions. Compared with the SIFT algorithm, more feature points can be generated and the matching accuracy of the ASIFT algorithm is higher, even for panoramic images with obvious distortions. However, the algorithm is very time-consuming because of its complex operations, and it does not perform very well for some indoor scenes under poor light or without rich textures. To solve this problem, this paper proposes an improved ASIFT algorithm for indoor panoramic images: firstly, the panoramic images are projected into multiple normal perspective images. Secondly, the original ASIFT algorithm is simplified from the affine transformation of tilt and rotation of the images to the tilt affine transformation only. Finally, the results are re-projected into the panoramic image space. Experiments in different environments show that this method can not only ensure the precision of feature point extraction and matching, but also greatly reduce the computing time.
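    The improved ASIFT pipeline itself is not public in this abstract, but its first step, projecting an equirectangular panorama into normal perspective views, reduces to mapping pinhole rays to panorama angles. A nearest-neighbour version is sketched below; the field of view, output size, and rotation convention are arbitrary choices.

    ```python
    import numpy as np

    def pano_to_perspective(pano, fov_deg=90, yaw_deg=0, pitch_deg=0, out=256):
        """Resample an equirectangular panorama (H x W) into a pinhole view."""
        H, W = pano.shape[:2]
        f = 0.5 * out / np.tan(np.radians(fov_deg) / 2)      # focal length in pixels
        u, v = np.meshgrid(np.arange(out) - out / 2, np.arange(out) - out / 2)
        rays = np.stack([u, v, np.full(u.shape, f)], axis=-1)
        rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

        # rotate rays by yaw (about y) then pitch (about x)
        cy, sy = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
        cp, sp = np.cos(np.radians(pitch_deg)), np.sin(np.radians(pitch_deg))
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
        rays = rays @ (Ry @ Rx).T

        lon = np.arctan2(rays[..., 0], rays[..., 2])         # longitude, -pi..pi
        lat = np.arcsin(np.clip(rays[..., 1], -1, 1))        # latitude, -pi/2..pi/2
        px = ((lon / np.pi + 1) / 2 * (W - 1)).astype(int)
        py = ((lat / (np.pi / 2) + 1) / 2 * (H - 1)).astype(int)
        return pano[py, px]                                  # nearest-neighbour lookup

    pano = np.random.default_rng(0).random((256, 512))       # stand-in panorama
    view = pano_to_perspective(pano, fov_deg=90, yaw_deg=30)
    print(view.shape)
    ```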

  1. An improved three-dimension reconstruction method based on guided filter and Delaunay

    NASA Astrophysics Data System (ADS)

    Liu, Yilin; Su, Xiu; Liang, Haitao; Xu, Huaiyuan; Wang, Yi; Chen, Xiaodong

    2018-01-01

Binocular stereo vision is becoming a research hotspot in the area of image processing. Based on the traditional adaptive-weight stereo matching algorithm, we improve the cost volume by averaging the AD (absolute difference) of the RGB color channels and adding the x-derivative of the grayscale image. Then we use a guided filter in the cost aggregation step and a weighted median filter for post-processing to address the edge problem. In order to get the location in real space, we combine the depth information with the camera calibration to project each pixel in the 2D image into a 3D coordinate matrix. We add the concept of projection to the region-growing algorithm for surface reconstruction; the specific operation is to project all the points onto a 2D plane along the normals of the point cloud and return the results to 3D space according to the connection relationships among the points in the 2D plane. For the triangulation in the 2D plane, we use the Delaunay algorithm because it yields meshes of optimal quality. We configure OpenCV and PCL on Visual Studio for testing, and the experimental results show that the proposed algorithm has higher computational accuracy of disparity and can reproduce the details of the real mesh model.
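    The projection-plus-Delaunay step described above reduces, in the simplest case (a roughly planar patch), to triangulating the points in a 2D projection plane and reusing that connectivity for the 3D mesh. SciPy makes the sketch short; the height-field data are synthetic.

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    # toy point cloud: noisy samples of a height field z = f(x, y)
    rng = np.random.default_rng(0)
    xy = rng.random((200, 2))
    z = np.sin(3 * xy[:, 0]) * np.cos(3 * xy[:, 1])
    points_3d = np.column_stack([xy, z])

    # project onto the dominant plane (here simply the xy plane),
    # triangulate in 2D, then reuse the same connectivity for the 3D mesh
    tri = Delaunay(points_3d[:, :2])
    faces = tri.simplices                 # (n_triangles, 3) vertex indices

    print(len(faces), "triangles")
    print(points_3d[faces[0]])            # one 3D triangle of the reconstructed mesh
    ```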

  2. Genetic algorithm parameters tuning for resource-constrained project scheduling problem

    NASA Astrophysics Data System (ADS)

    Tian, Xingke; Yuan, Shengrui

    2018-04-01

The Resource-Constrained Project Scheduling Problem (RCPSP) is an important class of scheduling problem. To achieve a certain optimal goal, such as the shortest duration, the smallest cost, or resource balance, it is required to arrange the start and finish of all tasks while satisfying the project timing constraints and resource constraints. In theory, the problem is NP-hard, and its model variants are abundant. Many combinatorial optimization problems are special cases of RCPSP, such as job shop scheduling, flow shop scheduling and so on. At present, the genetic algorithm (GA) has been used to deal with the classical RCPSP and has achieved remarkable results. Many scholars have also studied improved genetic algorithms for the RCPSP, which solve the problem more efficiently and accurately. However, these studies offer no optimization for the selection of the main parameters of the genetic algorithm; generally, an empirical method is used, which cannot guarantee optimal parameters. In this paper, we address the blind selection of parameters in the process of solving the RCPSP: we performed a sampling analysis, established a proxy model, and ultimately solved for the optimal parameters.
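    For context, the GA parameters in question (population size, crossover and mutation probabilities, generation count) appear as the knobs of a loop like the one below, here minimizing a stand-in objective rather than an RCPSP makespan; the operators and defaults are illustrative assumptions, not the paper's tuned values.

    ```python
    import numpy as np

    def ga_minimize(f, dim, pop_size=40, p_cross=0.8, p_mut=0.1, gens=200, seed=0):
        """Bare-bones real-coded GA; the keyword arguments are exactly the
        parameters whose selection the paper seeks to optimize."""
        rng = np.random.default_rng(seed)
        pop = rng.uniform(-5.0, 5.0, (pop_size, dim))
        best, best_f = pop[0].copy(), np.inf
        for _ in range(gens):
            fit = np.array([f(ind) for ind in pop])
            if fit.min() < best_f:
                best_f, best = fit.min(), pop[fit.argmin()].copy()
            a, b = rng.integers(0, pop_size, (2, pop_size))
            parents = np.where((fit[a] < fit[b])[:, None], pop[a], pop[b])  # tournament
            mates = parents[rng.permutation(pop_size)]
            do_cross = rng.random((pop_size, 1)) < p_cross
            mask = (rng.random((pop_size, dim)) < 0.5) & do_cross           # uniform crossover
            children = np.where(mask, mates, parents)
            mutate = rng.random((pop_size, dim)) < p_mut                    # Gaussian mutation
            pop = children + mutate * rng.normal(0.0, 0.3, (pop_size, dim))
        return best, best_f

    sphere = lambda x: float((x ** 2).sum())   # stand-in objective, not an RCPSP makespan
    print(ga_minimize(sphere, dim=5))          # best individual moves toward the origin
    ```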

  3. Optimization of image quality and acquisition time for lab-based X-ray microtomography using an iterative reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Lin, Qingyang; Andrew, Matthew; Thompson, William; Blunt, Martin J.; Bijeljic, Branko

    2018-05-01

Non-invasive laboratory-based X-ray microtomography has been widely applied in many industrial and research disciplines. However, the main barrier to the use of laboratory systems compared to a synchrotron beamline is their much longer image acquisition time (hours per scan compared to seconds to minutes at a synchrotron), which results in limited application for dynamic in situ processes. Therefore, the majority of existing laboratory X-ray microtomography is limited to static imaging; relatively fast imaging (tens of minutes per scan) can only be achieved by sacrificing imaging quality, e.g. reducing exposure time or number of projections. To alleviate this barrier, we introduce an optimized implementation of a well-known iterative reconstruction algorithm that allows users to reconstruct tomographic images with reasonable image quality, but requires lower X-ray signal counts and fewer projections than conventional methods. Quantitative analysis and comparison between the iterative and the conventional filtered back-projection reconstruction algorithm was performed using a sandstone rock sample with and without liquid phases in the pore space. Overall, by implementing the iterative reconstruction algorithm, the required image acquisition time for samples such as this, with sparse object structure, can be reduced by a factor of up to 4 without measurable loss of sharpness or signal to noise ratio.

  4. Abstract - Cooperative Research and Development Agreement between Ames National Laboratory and National Energy Technology Laboratory AGMT-0609

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bryden, Mark; Tucker, David A.

    The goal of this project is to develop a merged environment for simulation and analysis (MESA) at the National Energy Technology Laboratory’s (NETL) Hybrid Performance (Hyper) project laboratory. The MESA sensor lab developed as a component of this research will provide a development platform for investigating: 1) advanced control strategies, 2) testing and development of sensor hardware, 3) various modeling in-the-loop algorithms and 4) other advanced computational algorithms for improved plant performance using sensors, real-time models, and complex systems tools.

  5. Analysis of estimation algorithms for CDTI and CAS applications

    NASA Technical Reports Server (NTRS)

    Goka, T.

    1985-01-01

Estimation algorithms for Cockpit Display of Traffic Information (CDTI) and Collision Avoidance System (CAS) applications were analyzed and/or developed. The algorithms are based on actual or projected operational and performance characteristics of an Enhanced TCAS II traffic sensor developed by Bendix and the Federal Aviation Administration. Three algorithm areas are examined and discussed: horizontal (x and y) position, range, and altitude estimation. Raw estimation errors are quantified using Monte Carlo simulations developed for each application; the raw errors are then used to infer impacts on the CDTI and CAS applications. Applications of smoothing algorithms to CDTI problems are also discussed briefly. Technical conclusions are summarized based on the analysis of simulation results.

  6. Limited angle C-arm tomosynthesis reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Malalla, Nuhad A. Y.; Xu, Shiyu; Chen, Ying

    2015-03-01

In this paper, C-arm tomosynthesis with a digital detector was investigated as a novel three-dimensional (3D) imaging technique. Digital tomosynthesis is an imaging technique that provides 3D information of the object by reconstructing slices passing through it, based on a series of angular projection views with respect to the object. C-arm tomosynthesis provides two-dimensional (2D) X-ray projection images with rotation (±20° angular range) of both the X-ray source and the detector. In this paper, four representative reconstruction algorithms, including point-by-point back projection (BP), filtered back projection (FBP), simultaneous algebraic reconstruction technique (SART) and maximum likelihood expectation maximization (MLEM), were investigated. A dataset of 25 projection views of a 3D spherical object located at the center of the C-arm imaging space was simulated from 25 angular locations over a total view angle of 40 degrees. With the reconstructed images, a 3D mesh plot and a 2D line profile of normalized pixel intensities on the focus reconstruction plane crossing the center of the object were studied for each reconstruction algorithm. Results demonstrated the capability to generate 3D information from limited angle C-arm tomosynthesis. Since C-arm tomosynthesis is relatively compact, portable and can avoid moving patients, it has been investigated for different clinical applications ranging from tumor surgery to interventional radiology. It is very important to evaluate C-arm tomosynthesis for valuable applications.

  7. Efficient super-resolution image reconstruction applied to surveillance video captured by small unmanned aircraft systems

    NASA Astrophysics Data System (ADS)

    He, Qiang; Schultz, Richard R.; Chu, Chee-Hung Henry

    2008-04-01

    The concept surrounding super-resolution image reconstruction is to recover a highly-resolved image from a series of low-resolution images via between-frame subpixel image registration. In this paper, we propose a novel and efficient super-resolution algorithm, and then apply it to the reconstruction of real video data captured by a small Unmanned Aircraft System (UAS). Small UAS aircraft generally have a wingspan of less than four meters, so that these vehicles and their payloads can be buffeted by even light winds, resulting in potentially unstable video. This algorithm is based on a coarse-to-fine strategy, in which a coarsely super-resolved image sequence is first built from the original video data by image registration and bi-cubic interpolation between a fixed reference frame and every additional frame. It is well known that the median filter is robust to outliers. If we calculate pixel-wise medians in the coarsely super-resolved image sequence, we can restore a refined super-resolved image. The primary advantage is that this is a noniterative algorithm, unlike traditional approaches based on highly-computational iterative algorithms. Experimental results show that our coarse-to-fine super-resolution algorithm is not only robust, but also very efficient. In comparison with five well-known super-resolution algorithms, namely the robust super-resolution algorithm, bi-cubic interpolation, projection onto convex sets (POCS), the Papoulis-Gerchberg algorithm, and the iterated back projection algorithm, our proposed algorithm gives both strong efficiency and robustness, as well as good visual performance. This is particularly useful for the application of super-resolution to UAS surveillance video, where real-time processing is highly desired.
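    Subpixel registration is the hard part and is omitted here. Given already-registered, upsampled frames, the robust pixel-wise median fusion at the heart of the algorithm is essentially a one-liner; the frames below are simulated with additive noise and salt-type outliers to show the median's robustness.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    truth = rng.random((64, 64))

    # simulate registered low-quality frames: same scene + noise + outliers
    frames = []
    for _ in range(9):
        f = truth + 0.05 * rng.normal(size=truth.shape)
        hot = rng.random(truth.shape) < 0.02          # salt-noise outliers
        f[hot] = 1.0
        frames.append(f)

    fused = np.median(np.stack(frames), axis=0)       # robust pixel-wise median
    print(np.abs(np.stack(frames) - truth).mean(),    # per-frame error
          np.abs(fused - truth).mean())               # clearly smaller after fusion
    ```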

  8. Accessing eSDO Solar Image Processing and Visualization through AstroGrid

    NASA Astrophysics Data System (ADS)

    Auden, E.; Dalla, S.

    2008-08-01

    The eSDO project is funded by the UK's Science and Technology Facilities Council (STFC) to integrate Solar Dynamics Observatory (SDO) data, algorithms, and visualization tools with the UK's Virtual Observatory project, AstroGrid. In preparation for the SDO launch in January 2009, the eSDO team has developed nine algorithms covering coronal behaviour, feature recognition, and global / local helioseismology. Each of these algorithms has been deployed as an AstroGrid Common Execution Architecture (CEA) application so that they can be included in complex VO workflows. In addition, the PLASTIC-enabled eSDO "Streaming Tool" online movie application allows users to search multi-instrument solar archives through AstroGrid web services and visualise the image data through galleries, an interactive movie viewing applet, and QuickTime movies generated on-the-fly.

  9. Biomedical Terminology Mapper for UML projects.

    PubMed

    Thibault, Julien C; Frey, Lewis

    2013-01-01

    As the biomedical community collects and generates more and more data, the need to describe these datasets for exchange and interoperability becomes crucial. This paper presents a mapping algorithm that can help developers expose local implementations described with UML through standard terminologies. The input UML class or attribute name is first normalized and tokenized, then lookups in a UMLS-based dictionary are performed. For the evaluation of the algorithm 142 UML projects were extracted from caGrid and automatically mapped to National Cancer Institute (NCI) terminology concepts. Resulting mappings at the UML class and attribute levels were compared to the manually curated annotations provided in caGrid. Results are promising and show that this type of algorithm could speed-up the tedious process of mapping local implementations to standard biomedical terminologies.

  10. Biomedical Terminology Mapper for UML projects

    PubMed Central

    Thibault, Julien C.; Frey, Lewis

    As the biomedical community collects and generates more and more data, the need to describe these datasets for exchange and interoperability becomes crucial. This paper presents a mapping algorithm that can help developers expose local implementations described with UML through standard terminologies. The input UML class or attribute name is first normalized and tokenized, then lookups in a UMLS-based dictionary are performed. For the evaluation of the algorithm 142 UML projects were extracted from caGrid and automatically mapped to National Cancer Institute (NCI) terminology concepts. Resulting mappings at the UML class and attribute levels were compared to the manually curated annotations provided in caGrid. Results are promising and show that this type of algorithm could speed-up the tedious process of mapping local implementations to standard biomedical terminologies. PMID:24303278
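    The caGrid data and the UMLS-based dictionary are not bundled with this record, but the normalize-tokenize-lookup skeleton the paper describes can be sketched with a toy dictionary standing in for the real one (the concept codes below are illustrative):

    ```python
    import re

    # toy stand-in for the UMLS-based dictionary: token -> concept code
    DICTIONARY = {
        "patient": "C0030705",
        "specimen": "C0370003",
        "identifier": "C0600091",
    }

    def normalize_tokenize(name):
        """Split a UML camelCase/underscore name into lowercase tokens."""
        spaced = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", " ", name).replace("_", " ")
        return spaced.lower().split()

    def map_uml_name(name):
        """Look up each token of a UML class/attribute name in the dictionary."""
        return {tok: DICTIONARY.get(tok) for tok in normalize_tokenize(name)}

    print(map_uml_name("PatientIdentifier"))   # both tokens resolve
    print(map_uml_name("specimen_label"))      # unknown tokens map to None
    ```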

  11. VHDL implementation of feature-extraction algorithm for the PANDA electromagnetic calorimeter

    NASA Astrophysics Data System (ADS)

    Guliyev, E.; Kavatsyuk, M.; Lemmens, P. J. J.; Tambave, G.; Löhner, H.; Panda Collaboration

    2012-02-01

A simple, efficient, and robust feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA spectrometer at FAIR, Darmstadt, is implemented in VHDL for a commercial 16 bit 100 MHz sampling ADC. The source code is available as an open-source project and is adaptable for other projects and sampling ADCs. Best performance with different types of signal sources can be achieved through flexible parameter selection. The on-line data processing in the FPGA makes it possible to construct an almost dead-time-free data acquisition system, which is successfully evaluated as a first step towards building a complete trigger-less readout chain. Prototype setups are studied to determine the dead-time of the implemented algorithm, the rate of false triggering, timing performance, and event correlations.

  12. Modeling the Volcanic Source at Long Valley, CA, Using a Genetic Algorithm Technique

    NASA Technical Reports Server (NTRS)

    Tiampo, Kristy F.

    1999-01-01

    In this project, we attempted to model the deformation pattern due to the magmatic source at Long Valley caldera using a real-value coded genetic algorithm (GA) inversion similar to that found in Michalewicz, 1992. The project has been both successful and rewarding. The genetic algorithm, coded in the C programming language, performs stable inversions over repeated trials, with varying initial and boundary conditions. The original model used a GA in which the geophysical information was coded into the fitness function through the computation of surface displacements for a Mogi point source in an elastic half-space. The program was designed to invert for a spherical magmatic source - its depth, horizontal location and volume - using the known surface deformations. It also included the capability of inverting for multiple sources.
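    Setting the GA wrapper aside, the forward model being inverted here is the classical Mogi point source; under the usual elastic half-space assumptions the surface displacements have the closed form u = (1 - nu) * dV / pi * (dx, dy, d) / R^3 with R^2 = dx^2 + dy^2 + d^2. A GA fitness function would compare this prediction with observed deformation. The parameter values below are illustrative, not Long Valley estimates.

    ```python
    import numpy as np

    def mogi_surface(x, y, x0, y0, depth, dV, nu=0.25):
        """Surface displacements of a Mogi point source in an elastic half-space.
        (x0, y0, depth): source location; dV: volume change; nu: Poisson's ratio."""
        dx, dy = x - x0, y - y0
        R3 = (dx**2 + dy**2 + depth**2) ** 1.5
        c = (1.0 - nu) * dV / np.pi
        return c * dx / R3, c * dy / R3, c * depth / R3   # ux, uy, uz

    # displacements on a small surface grid for an illustrative source
    x, y = np.meshgrid(np.linspace(-10e3, 10e3, 5), np.linspace(-10e3, 10e3, 5))
    ux, uy, uz = mogi_surface(x, y, 0.0, 0.0, 5e3, 1e6)
    print(uz.max())        # peak uplift, directly above the source
    ```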

  13. Vis-NIR spectrometric determination of Brix and sucrose in sugar production samples using kernel partial least squares with interval selection based on the successive projections algorithm.

    PubMed

    de Almeida, Valber Elias; de Araújo Gomes, Adriano; de Sousa Fernandes, David Douglas; Goicoechea, Héctor Casimiro; Galvão, Roberto Kawakami Harrop; Araújo, Mario Cesar Ugulino

    2018-05-01

    This paper proposes a new variable selection method for nonlinear multivariate calibration, combining the Successive Projections Algorithm for interval selection (iSPA) with the Kernel Partial Least Squares (Kernel-PLS) modelling technique. The proposed iSPA-Kernel-PLS algorithm is employed in a case study involving a Vis-NIR spectrometric dataset with complex nonlinear features. The analytical problem consists of determining Brix and sucrose content in samples from a sugar production system, on the basis of transflectance spectra. As compared to full-spectrum Kernel-PLS, the iSPA-Kernel-PLS models involve a smaller number of variables and display statistically significant superiority in terms of accuracy and/or bias in the predictions. Published by Elsevier B.V.
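
    Kernel-PLS is not in scikit-learn, so the sketch below scores contiguous spectral intervals with ordinary PLS and greedy forward selection on synthetic data; it illustrates interval selection in general, not the authors' iSPA-Kernel-PLS.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(80, 200))                  # synthetic "spectra"
      y = X[:, 40:60].sum(axis=1) + 0.1 * rng.normal(size=80)   # one informative band

      intervals = [np.arange(i, i + 20) for i in range(0, 200, 20)]

      def cv_rmse(cols):
          score = cross_val_score(PLSRegression(n_components=3), X[:, cols], y,
                                  cv=5, scoring="neg_mean_squared_error").mean()
          return np.sqrt(-score)

      selected, remaining, best_rmse = [], list(range(len(intervals))), np.inf
      while remaining:
          rmse, k = min((cv_rmse(np.concatenate([intervals[j] for j in selected + [k]])), k)
                        for k in remaining)
          if rmse >= best_rmse:
              break                                    # adding intervals stops helping
          best_rmse = rmse
          selected.append(k)
          remaining.remove(k)

      print("selected intervals:", selected, "CV RMSE: %.3f" % best_rmse)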

  14. DAVIS: A direct algorithm for velocity-map imaging system

    NASA Astrophysics Data System (ADS)

    Harrison, G. R.; Vaughan, J. C.; Hidle, B.; Laurent, G. M.

    2018-05-01

    In this work, we report a direct (non-iterative) algorithm to reconstruct the three-dimensional (3D) momentum-space picture of any charged particles collected with a velocity-map imaging system from the two-dimensional (2D) projected image captured by a position-sensitive detector. The method consists of fitting the measured image with the 2D projection of a model 3D velocity distribution defined by the physics of the light-matter interaction. The meaningful angle-correlated information is first extracted from the raw data by expanding the image with a complete set of Legendre polynomials. Both the particle's angular and energy distributions are then directly retrieved from the expansion coefficients. The algorithm is simple, easy to implement, fast, and explicitly takes into account the pixelization effect in the measurement.
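
    The expansion step lends itself to a short numpy illustration: fit the coefficients of a Legendre series in cos(theta) by linear least squares. The anisotropy values below are invented, and this shows only the angular-expansion idea, not the full DAVIS reconstruction.

      import numpy as np
      from numpy.polynomial import legendre

      theta = np.linspace(0, np.pi, 181)
      x = np.cos(theta)
      # Synthetic angular distribution: 1 + 0.8*P2 - 0.2*P4, plus noise
      true_c = [1.0, 0.0, 0.8, 0.0, -0.2]
      I = legendre.legval(x, true_c) + 0.01 * np.random.default_rng(2).normal(size=x.size)

      V = legendre.legvander(x, 4)            # design matrix with columns P0(x)..P4(x)
      coef, *_ = np.linalg.lstsq(V, I, rcond=None)
      print("fitted Legendre coefficients:", np.round(coef, 3))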

  15. Simultaneous motion estimation and image reconstruction (SMEIR) for 4D cone-beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jing; Gu, Xuejun

    2013-10-15

    Purpose: Image reconstruction and motion model estimation in four-dimensional cone-beam CT (4D-CBCT) are conventionally handled as two sequential steps. Due to the limited number of projections at each phase, the image quality of 4D-CBCT is degraded by view aliasing artifacts, and the accuracy of subsequent motion modeling is decreased by the inferior 4D-CBCT. The objective of this work is to enhance both the image quality of 4D-CBCT and the accuracy of motion model estimation with a novel strategy enabling simultaneous motion estimation and image reconstruction (SMEIR). Methods: The proposed SMEIR algorithm consists of two alternating steps: (1) model-based iterative image reconstruction to obtain a motion-compensated primary CBCT (m-pCBCT) and (2) motion model estimation to obtain an optimal set of deformation vector fields (DVFs) between the m-pCBCT and the other 4D-CBCT phases. The motion-compensated image reconstruction is based on the simultaneous algebraic reconstruction technique (SART) coupled with total variation minimization. During the forward- and backprojection of SART, measured projections from the entire set of 4D-CBCT data are used for reconstruction of the m-pCBCT by utilizing the updated DVF. The DVF is estimated by matching the forward projection of the deformed m-pCBCT and the measured projections of the other phases of 4D-CBCT. The performance of the SMEIR algorithm is quantitatively evaluated on a 4D NCAT phantom. The quality of reconstructed 4D images and the accuracy of the tumor motion trajectory are assessed by comparison with conventional sequential 4D-CBCT reconstructions (FDK and total variation minimization) and motion estimation (demons algorithm). The performance of the SMEIR algorithm is further evaluated by reconstructing a lung cancer patient 4D-CBCT. Results: Image quality of 4D-CBCT is greatly improved by the SMEIR algorithm in both phantom and patient studies. When all projections are used to reconstruct a 3D-CBCT by FDK, motion-blurring artifacts are present, leading to a 24.4% relative reconstruction error in the NCAT phantom. View aliasing artifacts are present in 4D-CBCT reconstructed by FDK from 20 projections, with a relative error of 32.1%. When total variation minimization is used to reconstruct 4D-CBCT, the relative error is 18.9%. Image quality of 4D-CBCT is substantially improved by using the SMEIR algorithm, and the relative error is reduced to 7.6%. The maximum error (MaxE) of tumor motion determined from the DVF obtained by demons registration on an FDK-reconstructed 4D-CBCT is 3.0, 2.3, and 7.1 mm along the left-right (L-R), anterior-posterior (A-P), and superior-inferior (S-I) directions, respectively. From the DVF obtained by demons registration on 4D-CBCT reconstructed by total variation minimization, the MaxE of tumor motion is reduced to 1.5, 0.5, and 5.5 mm along the L-R, A-P, and S-I directions. From the DVF estimated by the SMEIR algorithm, the MaxE of tumor motion is further reduced to 0.8, 0.4, and 1.5 mm along the L-R, A-P, and S-I directions, respectively. Conclusions: The proposed SMEIR algorithm is able to estimate a motion model and reconstruct motion-compensated 4D-CBCT. The SMEIR algorithm improves the image reconstruction accuracy of 4D-CBCT and the accuracy of tumor motion trajectory estimation as compared to conventional sequential 4D-CBCT reconstruction and motion estimation.
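
    The motion-estimation half of SMEIR is beyond a short snippet, but the reconstruction half - algebraic reconstruction alternated with total-variation minimization - can be sketched in 2D with scikit-image; the phantom, 20-view geometry, and iteration counts below are stand-ins, not the paper's 4D setup.

      import numpy as np
      from skimage.data import shepp_logan_phantom
      from skimage.transform import radon, iradon_sart, resize
      from skimage.restoration import denoise_tv_chambolle

      phantom = resize(shepp_logan_phantom(), (128, 128))
      theta = np.linspace(0.0, 180.0, 20, endpoint=False)   # few-view acquisition
      sinogram = radon(phantom, theta=theta)

      recon = None
      for it in range(5):
          recon = iradon_sart(sinogram, theta=theta, image=recon)  # SART update
          recon = denoise_tv_chambolle(recon, weight=0.02)         # TV step

      err = np.linalg.norm(recon - phantom) / np.linalg.norm(phantom)
      print("relative reconstruction error: %.3f" % err)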

  16. Nonlinear optimization with linear constraints using a projection method

    NASA Technical Reports Server (NTRS)

    Fox, T.

    1982-01-01

    Nonlinear optimization problems that are encountered in science and industry are examined. A method of projecting the gradient vector onto a set of linear constraints is developed, and a program that uses this method is presented. The algorithm that generates this projection matrix is based on the Gram-Schmidt method and overcomes some of the objections to the Rosen projection method.
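
    The heart of such a method fits in a few numpy lines: build an orthonormal basis for the constraint normals (QR factorization plays the role of Gram-Schmidt here) and project each gradient onto the null space of A so that iterates stay feasible. The quadratic objective is a placeholder.

      import numpy as np

      # Minimize f(x) = 0.5*||x - c||^2 subject to A x = b (toy problem)
      A = np.array([[1.0, 1.0, 1.0]])          # constraint: x1 + x2 + x3 = 1
      b = np.array([1.0])
      c = np.array([3.0, -1.0, 0.5])

      Q, _ = np.linalg.qr(A.T)                 # orthonormal basis of constraint normals
      P = np.eye(A.shape[1]) - Q @ Q.T         # projector onto null(A)

      x = np.linalg.lstsq(A, b, rcond=None)[0]  # feasible starting point
      for _ in range(100):
          x = x - 0.5 * P @ (x - c)            # step only along feasible directions

      print("solution:", x, "constraint residual:", A @ x - b)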

  17. A 3D Split Manufacturing Approach to Trustworthy System Development

    DTIC Science & Technology

    2012-12-01

    addition of any cryptographic algorithm or implementation to be included in the system as a foundry-level option. Essentially, 3D security introduces...8192 bytes). We modeled our cryptographic process after the AES algorithm, which can occupy up to 4640 bytes with an enlarged T-Box implementation [4...Reconfigurable Systems and Algorithms (ERSA), Las Vegas, NV, July 2011. [10] Intelligence Advanced Research Projects Agency (IARPA). Trusted integrated

  18. Scalable High-order Methods for Multi-Scale Problems: Analysis, Algorithms and Application

    DTIC Science & Technology

    2016-02-26

    Karniadakis, "Resilient algorithms for reconstructing and simulating gappy flow fields in CFD", Fluid Dynamic Research, vol. 47, 051402, 2015. 2. Y. Yu, H...simulation, domain decomposition, CFD, gappy data, estimation theory, and gap-tooth algorithm. 16. SECURITY CLASSIFICATION OF: 17. LIMITATION OF...objective of this project was to develop a general CFD framework for multifidelity simulations to target multiscale problems but also resilience in

  19. Development and demonstration of a freezing drizzle algorithm for roadway environmental sensing Systems.

    DOT National Transportation Integrated Search

    2012-10-01

    The primary goal of this project is to demonstrate the accuracy and utility of a freezing drizzle algorithm that can be implemented on roadway environmental sensing systems (ESSs). : The types of problems related to the occurrence of freezing precipi...

  20. A modified 3D algorithm for road traffic noise attenuation calculations in large urban areas.

    PubMed

    Wang, Haibo; Cai, Ming; Yao, Yifan

    2017-07-01

    The primary objective of this study is the development and application of a 3D road traffic noise attenuation calculation algorithm. First, the traditional empirical method does not address problems caused by non-direct occlusion by buildings and by differing building heights. In contrast, this study considers the volume ratio of the buildings and the area ratio of the projection of buildings adjacent to the road. The influence of the ground effect is analyzed. The insertion loss due to barriers (infinite-length and finite barriers) is also synthesized in the algorithm. Second, the impact of different road segmentations is analyzed. Through the case of Pearl River New Town, 5° is recommended as the most appropriate scanning angle, as the computational time is acceptable and the average error is approximately 3.1 dB. In addition, the algorithm requires only 1/17 of the time that the beam tracking method requires, at the cost of less precise calculation results. Finally, the noise calculation for a large urban area with a high density of buildings shows the feasibility of the 3D noise attenuation calculation algorithm. The algorithm is expected to be applied in projects requiring large-area noise simulations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Using Deep Learning Algorithm to Enhance Image-review Software for Surveillance Cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Yonggang; Thomas, Maikael A.

    We propose the development of proven deep learning algorithms to flag objects and events of interest in Next Generation Surveillance System (NGSS) surveillance to make IAEA image review more efficient. Video surveillance is one of the core monitoring technologies used by the IAEA Department of Safeguards when implementing safeguards at nuclear facilities worldwide. The current image review software GARS has limited automated functions, such as scene-change detection, black image detection and missing scene analysis, but struggles with highly cluttered backgrounds. A cutting-edge algorithm to be developed in this project will enable efficient and effective searches in images and video streams by identifying and tracking safeguards-relevant objects and detecting anomalies in their vicinity. In this project, we will develop the algorithm, test it with the IAEA surveillance cameras and data sets collected at simulated nuclear facilities at BNL and SNL, and implement it in a software program for potential integration into the IAEA's IRAP (Integrated Review and Analysis Program).

  2. Image reconstruction from few-view CT data by gradient-domain dictionary learning.

    PubMed

    Hu, Zhanli; Liu, Qiegen; Zhang, Na; Zhang, Yunwan; Peng, Xi; Wu, Peter Z; Zheng, Hairong; Liang, Dong

    2016-05-21

    Decreasing the number of projections is an effective way to reduce the radiation dose delivered to patients in medical computed tomography (CT) imaging. However, incomplete projection data for CT reconstruction will result in artifacts and distortions. In this paper, a novel dictionary learning algorithm operating in the gradient domain (Grad-DL) is proposed for few-view CT reconstruction. Specifically, the dictionaries are trained from the horizontal and vertical gradient images, respectively, and the desired image is subsequently reconstructed from the sparse representations of both gradients by solving a least-squares problem. Since the gradient images are sparser than the image itself, the proposed approach can lead to sparser representations than conventional DL methods in the image domain, and thus a better reconstruction quality is achieved. To evaluate the proposed Grad-DL algorithm, both qualitative and quantitative studies were employed through computer simulations as well as real data experiments on fan-beam and cone-beam geometry. The results show that the proposed algorithm can yield better images than the existing algorithms.

  3. Algorithm theoretical baseline for formaldehyde retrievals from S5P TROPOMI and from the QA4ECV project

    NASA Astrophysics Data System (ADS)

    De Smedt, Isabelle; Theys, Nicolas; Yu, Huan; Danckaert, Thomas; Lerot, Christophe; Compernolle, Steven; Van Roozendael, Michel; Richter, Andreas; Hilboll, Andreas; Peters, Enno; Pedergnana, Mattia; Loyola, Diego; Beirle, Steffen; Wagner, Thomas; Eskes, Henk; van Geffen, Jos; Folkert Boersma, Klaas; Veefkind, Pepijn

    2018-04-01

    On board the Copernicus Sentinel-5 Precursor (S5P) platform, the TROPOspheric Monitoring Instrument (TROPOMI) is a double-channel, nadir-viewing grating spectrometer measuring solar back-scattered earthshine radiances in the ultraviolet, visible, near-infrared, and shortwave infrared with global daily coverage. In the ultraviolet range, its spectral resolution and radiometric performance are equivalent to those of its predecessor OMI, but its horizontal resolution at true nadir is improved by an order of magnitude. This paper introduces the formaldehyde (HCHO) tropospheric vertical column retrieval algorithm implemented in the S5P operational processor and comprehensively describes its various retrieval steps. Furthermore, algorithmic improvements developed in the framework of the EU FP7-project QA4ECV are described for future updates of the processor. Detailed error estimates are discussed in the light of Copernicus user requirements and needs for validation are highlighted. Finally, verification results based on the application of the algorithm to OMI measurements are presented, demonstrating the performances expected for TROPOMI.

  4. An Overview of the JPSS Ground Project Algorithm Integration Process

    NASA Astrophysics Data System (ADS)

    Vicente, G. A.; Williams, R.; Dorman, T. J.; Williamson, R. C.; Shaw, F. J.; Thomas, W. M.; Hung, L.; Griffin, A.; Meade, P.; Steadley, R. S.; Cember, R. P.

    2015-12-01

    The smooth transition, implementation, and operationalization of scientific software from the National Oceanic and Atmospheric Administration (NOAA) development teams to the Joint Polar Satellite System (JPSS) Ground Segment requires a variety of experience and expertise. This task has been accomplished by a dedicated group of scientists and engineers working in close collaboration with the NOAA Satellite and Information Service (NESDIS) Center for Satellite Applications and Research (STAR) science teams for the JPSS/Suomi National Polar-orbiting Partnership (S-NPP) Advanced Technology Microwave Sounder (ATMS), Cross-track Infrared Sounder (CrIS), Visible Infrared Imaging Radiometer Suite (VIIRS), and Ozone Mapping and Profiler Suite (OMPS) instruments. The purpose of this presentation is to describe the JPSS project process for algorithm implementation, from the very early delivery stages by the science teams to full operationalization in the Interface Data Processing Segment (IDPS), the processing system that provides Environmental Data Records (EDRs) to NOAA. Special focus is given to the NASA Data Products Engineering and Services (DPES) Algorithm Integration Team (AIT) functional and regression test activities. In the functional testing phase, the AIT uses one or a few specific chunks of data (granules) selected by the NOAA STAR Calibration and Validation (cal/val) teams to demonstrate that a small change in the code performs properly and does not disrupt the rest of the algorithm chain. In the regression testing phase, the modified code is placed into the Government Resources for Algorithm Verification, Integration, Test and Evaluation (GRAVITE) Algorithm Development Area (ADA), a simulated and smaller version of the operational IDPS. Baseline files are swapped out, not edited, and the whole code package runs on one full orbit of Science Data Records (SDRs) using Calibration Look-Up Tables (Cal LUTs) for the time of the orbit. The purpose of the regression test is to identify unintended outcomes. Overall, the presentation provides a general and easy-to-follow overview of the JPSS Algorithm Change Process (ACP) and is intended to facilitate the audience's understanding of a very extensive and complex process.

  5. Report on the deuterium retention in CVD coated W on SiC in support of the Ultramet Company’s Small Business Innovation Research (SBIR) project: SOW DE-FG02-07ER84941

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masashi Shimada

    2012-06-01

    A tungsten (W) coated (0.0005-inch thickness) silicon carbide (SiC) sample (1.0-inch diameter and 0.19-inch thickness) was exposed to a divertor-relevant high-flux (~10²² m⁻²s⁻¹) deuterium plasma at 200 and 400°C in the Idaho National Laboratory's (INL's) Tritium Plasma Experiment (TPE), and the total deuterium retention was subsequently measured via thermal desorption spectroscopy (TDS). The deuterium retentions were 6.4×10¹⁹ m⁻² and 1.7×10²⁰ m⁻² for the 200 and 400°C exposures, respectively. The Tritium Migration Analysis Program (TMAP) was used to analyze the measured TDS spectrum to investigate the deuterium behavior in the W coated SiC, and the results indicated that most of the deuterium was trapped in the W coated layer even at 400°C. This thin W layer (0.0005-inch, ~13 µm thickness) prevented deuterium ions from bombarding directly into the SiC substrate, minimizing erosion of SiC and damage creation via ion bombardment. The shift in the D desorption peak in the TDS spectra from 200°C to 400°C can be attributed to D migration into the bulk material. This unexpectedly low deuterium retention and short migration might be due to the porous nature of the tungsten coating, which can decrease the solution concentration of deuterium atoms.

  6. Projection matrix acquisition for cone-beam computed tomography iterative reconstruction

    NASA Astrophysics Data System (ADS)

    Yang, Fuqiang; Zhang, Dinghua; Huang, Kuidong; Shi, Wenlong; Zhang, Caixin; Gao, Zongzhao

    2017-02-01

    The projection matrix is an essential and time-consuming part of computed tomography (CT) iterative reconstruction. In this article, a novel calculation algorithm for the three-dimensional (3D) projection matrix is proposed to quickly acquire the matrix for cone-beam CT (CBCT). The CT volume to be reconstructed is considered as three orthogonal sets of equally spaced parallel planes, rather than as individual voxels. After computing the intersections of the rays with the voxel surfaces, the intersection coordinates are compared with the voxel vertices to obtain the index values of the voxels each ray traverses. Without considering the ray slope relative to each voxel, only the positions of two points need to be compared. Finally, computer simulation is used to verify the effectiveness of the algorithm.
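
    A 2D Siddon-style sketch of the plane-intersection idea (numpy only): parametrize the ray, intersect it with the two families of grid planes, and read voxel indices off the segment midpoints. This is a generic simplification, not the authors' exact comparison scheme.

      import numpy as np

      def traversed_voxels(p0, p1, n=8):
          """Indices of cells of an n x n grid on [0,1]^2 crossed by segment p0->p1."""
          p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
          d = p1 - p0
          alphas = [0.0, 1.0]
          for axis in range(2):                 # intersect with each plane family
              if abs(d[axis]) > 1e-12:
                  alphas.extend((np.arange(n + 1) / n - p0[axis]) / d[axis])
          alphas = np.unique(np.clip(alphas, 0.0, 1.0))
          mids = p0 + np.outer((alphas[:-1] + alphas[1:]) / 2, d)  # segment midpoints
          idx = np.clip((mids * n).astype(int), 0, n - 1)
          return np.unique(idx, axis=0)

      print(traversed_voxels([0.0, 0.1], [1.0, 0.9]))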

  7. Color transfer algorithm in medical images

    NASA Astrophysics Data System (ADS)

    Wang, Weihong; Xu, Yangfa

    2007-12-01

    In the digital virtual human project, image data are acquired from frozen slices of a human body specimen. The color and brightness can differ considerably between images within a group covering a certain organ. The quality of these images can cause great difficulty in edge extraction, segmentation, and the 3D reconstruction process. Thus it is necessary to unify the color of the images. The color transfer algorithm is well suited to this kind of problem. This paper introduces the principle of the algorithm and applies it to medical image processing.
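
    The best-known algorithm in this family (Reinhard-style statistics transfer) matches per-channel mean and standard deviation in a perceptual color space; a sketch with scikit-image, using stock images rather than cryosection slices, and not necessarily the exact variant the paper uses:

      import numpy as np
      from skimage import data, color

      def color_transfer(source, target):
          """Shift the Lab statistics of `source` toward those of `target`."""
          src, tgt = color.rgb2lab(source), color.rgb2lab(target)
          for ch in range(3):
              s_mu, s_sd = src[..., ch].mean(), src[..., ch].std()
              t_mu, t_sd = tgt[..., ch].mean(), tgt[..., ch].std()
              src[..., ch] = (src[..., ch] - s_mu) * (t_sd / (s_sd + 1e-8)) + t_mu
          return np.clip(color.lab2rgb(src), 0, 1)

      # Example: pull one image's palette toward another's
      result = color_transfer(data.astronaut() / 255.0, data.coffee() / 255.0)
      print(result.shape, float(result.min()), float(result.max()))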

  8. FBP and BPF reconstruction methods for circular X-ray tomography with off-center detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaefer, Dirk; Grass, Michael; Haar, Peter van de

    2011-05-15

    Purpose: Circular scanning with an off-center planar detector is an acquisition scheme that saves detector area while keeping a large field of view (FOV). Several filtered back-projection (FBP) algorithms have been proposed earlier. The purpose of this work is to present two newly developed back-projection filtration (BPF) variants and evaluate the image quality of these methods compared to the existing state-of-the-art FBP methods. Methods: The first new BPF algorithm applies redundancy weighting of overlapping opposite projections before differentiation in a single projection. The second one uses the Katsevich-type differentiation involving two neighboring projections followed by redundancy weighting and back-projection. An averaging scheme is presented to mitigate streak artifacts, inherent to circular BPF algorithms, along the Hilbert filter lines in the off-center transaxial slices of the reconstructions. The image quality is assessed visually on reconstructed slices of simulated and clinical data. Quantitative evaluation studies are performed with the Forbild head phantom by calculating root-mean-squared deviations (RMSDs) from the voxelized phantom for different detector overlap settings and by investigating the noise-resolution trade-off with a wire phantom in the full-detector and off-center scenarios. Results: The noise-resolution behavior of all off-center reconstruction methods corresponds to their full-detector performance, with the best resolution for the FDK-based methods with the given imaging geometry. With respect to RMSD and visual inspection, the proposed BPF with Katsevich-type differentiation outperforms all other methods for the smallest chosen detector overlap of about 15 mm. The best FBP method is the algorithm that is also based on the Katsevich-type differentiation and subsequent redundancy weighting. For a wider overlap of about 40-50 mm, these two algorithms produce similar results, outperforming the other three methods. The clinical case with a detector overlap of about 17 mm confirms these results. Conclusions: The BPF-type reconstructions with Katsevich differentiation are widely independent of the size of the detector overlap and give the best results with respect to RMSD and visual inspection for minimal detector overlap. The increased homogeneity will improve the correct assessment of lesions in the entire field of view.

  9. Improved dense trajectories for action recognition based on random projection and Fisher vectors

    NASA Astrophysics Data System (ADS)

    Ai, Shihui; Lu, Tongwei; Xiong, Yudian

    2018-03-01

    As an important application of intelligent monitoring systems, action recognition in video has become a very important research area of computer vision. To improve the accuracy of action recognition in video with improved dense trajectories, an advanced encoding method is introduced that combines Fisher Vectors with Random Projection. The method reduces the trajectory feature dimension by projecting the high-dimensional trajectory descriptor into a low-dimensional subspace, based on defining and analyzing a Gaussian mixture model under Random Projection. A GMM-FV hybrid model is introduced to encode the trajectory feature vector and reduce its dimension. The computational complexity is reduced by Random Projection, which shortens the Fisher coding vector. Finally, a linear SVM classifier is used to predict labels. We tested the algorithm on the UCF101 and KTH datasets. Compared with some existing algorithms, the results showed that the method not only reduces the computational complexity but also improves the accuracy of action recognition.

  10. TH-AB-202-01: Daily Lung Tumor Motion Characterization On EPIDs Using a Markerless Tiling Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rozario, T; University of Texas at Dallas, Richardson, TX; Chiu, T

    Purpose: Tracking lung tumor motion in real time allows for target dose escalation while simultaneously reducing dose to sensitive structures, thus increasing local control without increasing toxicity. We present a novel intra-fractional markerless lung tumor tracking algorithm using MV treatment beam images acquired during treatment delivery. Strong signals superimposed on the tumor significantly reduce the soft-tissue resolution, while the different imaging modalities involved introduce global imaging discrepancies; both reduce comparison accuracy. A simple yet elegant tiling algorithm is reported to overcome these issues. Methods: MV treatment beam images were acquired continuously in beam's eye view (BEV) by an electronic portal imaging device (EPID) during treatment and analyzed to obtain tumor positions on every frame. Every frame of the MV image was simulated by a composite of two components with separate digitally reconstructed radiographs (DRRs): all non-moving structures and the tumor. The tiling algorithm divides the global composite DRR and the corresponding MV projection into sub-images called tiles. Rigid registration is performed independently on tile pairs in order to improve local soft-tissue resolution. This enables the composite DRR to be transformed accurately to match the MV projection and attain a high correlation value through a pixel-based linear transformation. The highest cumulative correlation for all tile pairs achieved over a user-defined search range indicates the 2D coordinates of the tumor location on the MV projection. Results: This algorithm was successfully applied to cine-mode BEV images acquired during two SBRT plans delivered five times with different motion patterns to each of two phantoms. Approximately 15000 beam's eye view images were analyzed, and tumor locations were successfully identified on every projection with a maximum/average error of 1.8 mm / 1.0 mm. Conclusion: Despite the presence of strong anatomical signals overlapping with tumor images, this markerless detection algorithm accurately tracks intrafractional lung tumor motion. This project is partially supported by an Elekta research grant.

  11. A Modified Particle Swarm Optimization Technique for Finding Optimal Designs for Mixture Models

    PubMed Central

    Wong, Weng Kee; Chen, Ray-Bing; Huang, Chien-Chih; Wang, Weichung

    2015-01-01

    Particle Swarm Optimization (PSO) is a meta-heuristic algorithm that has been shown to be successful in solving a wide variety of real and complicated optimization problems in engineering and computer science. This paper introduces a projection-based PSO technique, named ProjPSO, to efficiently find different types of optimal designs, or nearly optimal designs, for mixture models with and without constraints on the components, and also for related models, like the log contrast models. We also compare the modified PSO performance with Fedorov's algorithm, a popular algorithm used to generate optimal designs, the Cocktail algorithm, and the recent algorithm proposed by [1]. PMID:26091237
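
    The projection idea can be shown in miniature: after every velocity update, each particle is projected back onto the probability simplex, so the mixture weights stay non-negative and sum to one. The quadratic objective below is a toy stand-in for a real design criterion, and the PSO constants are conventional defaults, not the paper's settings.

      import numpy as np

      def project_simplex(v):
          """Euclidean projection of v onto {x >= 0, sum(x) = 1}."""
          u = np.sort(v)[::-1]
          css = np.cumsum(u)
          rho = np.nonzero(u + (1 - css) / (np.arange(v.size) + 1) > 0)[0][-1]
          return np.maximum(v + (1 - css[rho]) / (rho + 1), 0)

      rng = np.random.default_rng(3)
      target = np.array([0.5, 0.3, 0.2])
      f = lambda x: np.sum((x - target) ** 2)      # toy "design criterion"

      n, dim = 30, 3
      X = rng.dirichlet(np.ones(dim), n)           # feasible initial swarm
      V = np.zeros_like(X)
      pbest, pval = X.copy(), np.array([f(x) for x in X])
      gbest = pbest[pval.argmin()]

      for _ in range(200):
          r1, r2 = rng.random((2, n, dim))
          V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
          X = np.array([project_simplex(x) for x in X + V])   # stay on the simplex
          vals = np.array([f(x) for x in X])
          better = vals < pval
          pbest[better], pval[better] = X[better], vals[better]
          gbest = pbest[pval.argmin()]

      print("best mixture found:", np.round(gbest, 3))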

  12. Self calibrating monocular camera measurement of traffic parameters.

    DOT National Transportation Integrated Search

    2009-12-01

    This proposed project will extend the work of previous projects that have developed algorithms and software : to measure traffic speed under adverse conditions using un-calibrated cameras. The present implementation : uses the WSDOT CCTV cameras moun...

  13. Detecting asphalt pavement raveling using emerging 3D laser technology and macrotexture analysis.

    DOT National Transportation Integrated Search

    2015-08-01

    This research project comprehensively tested and validated the automatic raveling detection, classification, : and measurement algorithms using 3D laser technology that were developed through a project sponsored by : the National Cooperative Highway ...

  14. Scheduling optimization of design stream line for production research and development projects

    NASA Astrophysics Data System (ADS)

    Liu, Qinming; Geng, Xiuli; Dong, Ming; Lv, Wenyuan; Ye, Chunming

    2017-05-01

    In a development project, efficient design stream line scheduling is difficult and important owing to large design imprecision and differences in the skills and skill levels of employees. The relative skill levels of employees are denoted as fuzzy numbers. Multiple execution modes are generated by scheduling different employees for design tasks. An optimization model of the design stream line scheduling problem is proposed with constraints of multiple execution modes, multi-skilled employees and precedence. The model considers the parallel design of multiple projects, different skills of employees, flexible multi-skilled employees and resource constraints. The objective function is to minimize the duration and tardiness of the project. Moreover, a two-dimensional particle swarm algorithm is used to find the optimal solution. To illustrate the validity of the proposed method, a case is examined in this article, and the results support the feasibility and effectiveness of the proposed model and algorithm.

  15. Final report for “Extreme-scale Algorithms and Solver Resilience”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gropp, William Douglas

    2017-06-30

    This is a joint project with principal investigators at Oak Ridge National Laboratory, Sandia National Laboratories, the University of California at Berkeley, and the University of Tennessee. Our part of the project involves developing performance models for highly scalable algorithms and the development of latency tolerant iterative methods. During this project, we extended our performance models for the Multigrid method for solving large systems of linear equations and conducted experiments with highly scalable variants of conjugate gradient methods that avoid blocking synchronization. In addition, we worked with the other members of the project on alternative techniques for resilience and reproducibility. We also presented an alternative approach for reproducible dot-products in parallel computations that performs almost as well as the conventional approach by separating the order of computation from the details of the decomposition of vectors across the processes.

  16. Projected power iteration for network alignment

    NASA Astrophysics Data System (ADS)

    Onaran, Efe; Villar, Soledad

    2017-08-01

    The network alignment problem asks for the best correspondence between two given graphs, so that the largest possible number of edges are matched. This problem appears in many scientific problems (like the study of protein-protein interactions) and it is very closely related to the quadratic assignment problem, which has graph isomorphism, traveling salesman and minimum bisection problems as particular cases. The graph matching problem is NP-hard in general. However, under some restrictive models for the graphs, algorithms can approximate the alignment efficiently. In that spirit, recent work by Feizi and collaborators introduces EigenAlign, a fast spectral method with convergence guarantees for Erdős–Rényi graphs. In this work we propose the algorithm Projected Power Alignment, which is a projected power iteration version of EigenAlign. We numerically show it improves the recovery rates of EigenAlign, and we describe the theory that may be used to provide performance guarantees for Projected Power Alignment.
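
    A minimal numpy/scipy rendering of the projected power iteration idea: power-iterate a node-similarity matrix with the two adjacency matrices and repeatedly project toward the nearest permutation via the Hungarian algorithm. The initialization, normalization, and 50/50 mixing step are my simplifications, and recovery is only expected on easy instances like this isomorphic pair.

      import numpy as np
      from scipy.optimize import linear_sum_assignment

      rng = np.random.default_rng(4)
      n = 30
      A1 = np.triu((rng.random((n, n)) < 0.2).astype(float), 1)
      A1 = A1 + A1.T                                  # random undirected graph
      perm = rng.permutation(n)
      A2 = A1[np.ix_(perm, perm)]                     # isomorphic copy of A1

      M = np.ones((n, n)) / n                         # uninformative initial similarity
      for _ in range(30):
          M = A1 @ M @ A2.T                           # power-iteration step
          M /= np.linalg.norm(M)
          row, col = linear_sum_assignment(-M)        # projection: closest permutation
          P = np.zeros_like(M)
          P[row, col] = 1.0
          M = 0.5 * M + 0.5 * P                       # pull the iterate toward it

      row, col = linear_sum_assignment(-M)
      sigma = np.argsort(perm)                        # ground-truth correspondence
      print("fraction of correctly matched nodes:", np.mean(col == sigma))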

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, J.C.; Leahy, R.M.

    A new method for source localization is described that is based on a modification of the well known multiple signal classification (MUSIC) algorithm. In classical MUSIC, the array manifold vector is projected onto an estimate of the signal subspace, but errors in the estimate can make location of multiple sources difficult. Recursively applied and projected (RAP) MUSIC uses each successively located source to form an intermediate array gain matrix, and projects both the array manifold and the signal subspace estimate into its orthogonal complement. The MUSIC projection is then performed in this reduced subspace. Using the metric of principal angles, the authors describe a general form of the RAP-MUSIC algorithm for the case of diversely polarized sources. Through a uniform linear array simulation, the authors demonstrate the improved Monte Carlo performance of RAP-MUSIC relative to MUSIC and two other sequential subspace methods, S- and IES-MUSIC.
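
    A scalar (unpolarized) toy version of the recursion for a uniform linear array, in Python: find the peak of the subspace-correlation spectrum, then project both the array manifold and the signal-subspace estimate onto the orthogonal complement of the steering vectors found so far, and repeat. Array size, noise level, and angles are invented.

      import numpy as np

      rng = np.random.default_rng(5)
      m, snap, d = 12, 200, 0.5            # sensors, snapshots, spacing (wavelengths)
      true_doa = np.deg2rad([-20.0, 25.0])

      def steering(theta):
          theta = np.atleast_1d(theta)
          return np.exp(2j * np.pi * d * np.outer(np.arange(m), np.sin(theta)))

      S = rng.normal(size=(2, snap)) + 1j * rng.normal(size=(2, snap))
      noise = 0.1 * (rng.normal(size=(m, snap)) + 1j * rng.normal(size=(m, snap)))
      X = steering(true_doa) @ S + noise

      R = X @ X.conj().T / snap            # sample covariance
      _, V = np.linalg.eigh(R)
      Es = V[:, -2:]                       # signal-subspace estimate (2 sources)

      grid = np.deg2rad(np.linspace(-90, 90, 721))
      A = steering(grid)

      found, P = [], np.eye(m, dtype=complex)
      for _ in range(2):
          Ap, Ep = P @ A, P @ Es
          U, s, _ = np.linalg.svd(Ep, full_matrices=False)
          U = U[:, s > 1e-8 * s[0]]        # orthonormal basis of projected subspace
          corr = np.linalg.norm(U.conj().T @ Ap, axis=0) / (np.linalg.norm(Ap, axis=0) + 1e-12)
          found.append(np.rad2deg(grid[np.argmax(corr)]))
          Q, _ = np.linalg.qr(steering(np.deg2rad(found)))   # deflate found sources
          P = np.eye(m) - Q @ Q.conj().T

      print("estimated DOAs (deg):", sorted(found))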

  18. Blockwise conjugate gradient methods for image reconstruction in volumetric CT.

    PubMed

    Qiu, W; Titley-Peloquin, D; Soleimani, M

    2012-11-01

    Cone beam computed tomography (CBCT) enables volumetric image reconstruction from 2D projection data and plays an important role in image guided radiation therapy (IGRT). Filtered back projection is still the most frequently used algorithm in applications. Iterative methods instead discretize the scanning process (forward projection) into a system of linear equations, which must be solved to recover images from measured projection data. The conjugate gradients (CG) algorithm and its variants can be used to solve (possibly regularized) linear systems of equations Ax = b and linear least-squares problems min_x ||b - Ax||_2, especially when the matrix A is very large and sparse. Their applications can be found in a general CT context, but in tomography problems (e.g. CBCT reconstruction) they have not been widely used. Hence, CBCT reconstruction using the CG-type algorithm LSQR was implemented and studied in this paper. In CBCT reconstruction, the main computational challenge is that the matrix A usually is very large, and storing it in full requires an amount of memory well beyond the reach of commodity computers. Because of these memory capacity constraints, only a small fraction of the weighting matrix A is typically used, leading to a poor reconstruction. In this paper, to overcome this difficulty, the matrix A is partitioned and stored blockwise, and blockwise matrix-vector multiplications are implemented within LSQR. This implementation allows us to use the full weighting matrix A for CBCT reconstruction without further enhancing computer standards. Tikhonov regularization can also be implemented in this fashion, and can produce significant improvement in the reconstructed images. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
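
    The storage trick reads naturally in scipy: keep A as a list of row blocks and hand LSQR a LinearOperator whose matvec/rmatvec loop over the blocks. Dense random blocks stand in for the CT system matrix here, and the damping parameter plays the role of the Tikhonov term.

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, lsqr

      rng = np.random.default_rng(6)
      n_rows, n_cols, n_blocks = 1200, 300, 6
      blocks = [rng.random((n_rows // n_blocks, n_cols)) for _ in range(n_blocks)]
      x_true = rng.random(n_cols)
      b = np.concatenate([B @ x_true for B in blocks])

      def matvec(x):                        # A @ x, one block at a time
          return np.concatenate([B @ x for B in blocks])

      def rmatvec(y):                       # A.T @ y, one block at a time
          out, start = np.zeros(n_cols), 0
          for B in blocks:
              out += B.T @ y[start:start + B.shape[0]]
              start += B.shape[0]
          return out

      A = LinearOperator((n_rows, n_cols), matvec=matvec, rmatvec=rmatvec)
      x = lsqr(A, b, damp=1e-3, iter_lim=200)[0]
      print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))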

  19. Reduction of metal artifacts: beam hardening and photon starvation effects

    NASA Astrophysics Data System (ADS)

    Yadava, Girijesh K.; Pal, Debashish; Hsieh, Jiang

    2014-03-01

    The presence of metal artifacts in CT imaging can obscure relevant anatomy and interfere with disease diagnosis. Metal artifacts are caused primarily by beam hardening, scatter, partial volume, and photon starvation; however, the contribution of each to the artifacts depends on the type of hardware. A comparison of CT images obtained with different metallic hardware in various applications, along with acquisition and reconstruction parameters, helps in understanding methods for reducing or overcoming such artifacts. In this work, a metal beam hardening correction (BHC) algorithm and a projection-completion-based metal artifact reduction (MAR) algorithm were developed and applied to phantom and clinical CT scans with various metallic implants. Stainless steel and titanium were used to model and correct for the metal beam hardening effect. In the MAR algorithm, the corrupted projection samples are replaced by the combination of original projections and in-painted data obtained by forward projecting a prior image. The data included spine fixation screws, hip implants, dental fillings, and body extremity fixations, covering the range of clinically used metal implants. Comparison of BHC and MAR on different metallic implants was used to characterize the dominant source of the artifacts and conceivable methods to overcome them. Results of the study indicate that beam hardening could be a dominant source of artifact in many spine and extremity fixations, whereas dental and hip implants could be dominant sources of photon starvation. The BHC algorithm can significantly improve image quality in CT scans with metallic screws, whereas the MAR algorithm can alleviate artifacts in hip implants and dental fillings.

  20. Flat panel detector-based cone beam computed tomography with a circle-plus-two-arcs data acquisition orbit: preliminary phantom study.

    PubMed

    Ning, Ruola; Tang, Xiangyang; Conover, David; Yu, Rongfeng

    2003-07-01

    Cone beam computed tomography (CBCT) has been investigated in the past two decades due to its potential advantages over a fan beam CT. These advantages include (a) great improvement in data acquisition efficiency, spatial resolution, and spatial resolution uniformity, (b) substantially better utilization of x-ray photons generated by the x-ray tube compared to a fan beam CT, and (c) significant advancement in clinical three-dimensional (3D) CT applications. However, most studies of CBCT in the past are focused on cone beam data acquisition theories and reconstruction algorithms. The recent development of x-ray flat panel detectors (FPD) has made CBCT imaging feasible and practical. This paper reports a newly built flat panel detector-based CBCT prototype scanner and presents the results of the preliminary evaluation of the prototype through a phantom study. The prototype consisted of an x-ray tube, a flat panel detector, a GE 8800 CT gantry, a patient table and a computer system. The prototype was constructed by modifying a GE 8800 CT gantry such that both a single-circle cone beam acquisition orbit and a circle-plus-two-arcs orbit can be achieved. With a circle-plus-two-arcs orbit, a complete set of cone beam projection data can be obtained, consisting of a set of circle projections and a set of arc projections. Using the prototype scanner, the set of circle projections were acquired by rotating the x-ray tube and the FPD together on the gantry, and the set of arc projections were obtained by tilting the gantry while the x-ray tube and detector were at the 12 and 6 o'clock positions, respectively. A filtered backprojection exact cone beam reconstruction algorithm based on a circle-plus-two-arcs orbit was used for cone beam reconstruction from both the circle and arc projections. The system was first characterized in terms of the linearity and dynamic range of the detector. Then the uniformity, spatial resolution and low contrast resolution were assessed using different phantoms mainly in the central plane of the cone beam reconstruction. Finally, the reconstruction accuracy of using the circle-plus-two-arcs orbit and its related filtered backprojection cone beam volume CT reconstruction algorithm was evaluated with a specially designed disk phantom. The results obtained using the new cone beam acquisition orbit and the related reconstruction algorithm were compared to those obtained using a single-circle cone beam geometry and Feldkamp's algorithm in terms of reconstruction accuracy. The results of the study demonstrate that the circle-plus-two-arcs cone beam orbit is achievable in practice. Also, the reconstruction accuracy of cone beam reconstruction is significantly improved with the circle-plus-two-arcs orbit and its related exact CB-FPB algorithm, as compared to using a single circle cone beam orbit and Feldkamp's algorithm.

  1. The algorithm for duration acceleration of repetitive projects considering the learning effect

    NASA Astrophysics Data System (ADS)

    Chen, Hongtao; Wang, Keke; Du, Yang; Wang, Liwan

    2018-03-01

    Repetitive project optimization problems are common in project scheduling. The Repetitive Scheduling Method (RSM) has many irreplaceable advantages in the field of repetitive projects. As the same or similar work is repeated, workers gain experience, their proficiency rises from low to high, and the efficiency of operations improves. This is the learning effect. The learning effect is one of the important factors affecting optimization results in repetitive project scheduling. This paper analyzes the influence of the learning effect on the controlling path in RSM from two aspects: the case where the learning effect changes the controlling path, and the case where it does not. The paper proposes corresponding methods to accelerate the duration of different types of critical activities, proposes an algorithm for duration acceleration based on the learning effect in RSM, and uses a graphical method to identify activity types while accounting for the impact of the learning effect on duration. The method meets the duration requirement while ensuring the lowest acceleration cost. A concrete bridge construction project is given to verify the effectiveness of the method. The results of this study will help project managers understand the impacts of the learning effect on repetitive projects and use the learning effect to optimize project scheduling.
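
    A common way to quantify the effect is Wright's power-law learning curve, under which every doubling of repetitions multiplies the unit duration by the learning rate r; the numbers below are purely illustrative, not from the paper.

      # Wright's learning-curve model: t_n = t_1 * n**log2(r)
      from math import log2

      t1 = 10.0    # duration of the first repetitive unit (days)
      r = 0.9      # 90% learning rate: each doubling cuts duration by 10%

      durations = [t1 * n ** log2(r) for n in range(1, 9)]
      print([round(t, 2) for t in durations])     # unit 2 -> 9.0, unit 4 -> 8.1, ...
      print("total for 8 units:", round(sum(durations), 1))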

  2. The Application of the Real Options Method for the Evaluation of High-Rise Construction Projects

    NASA Astrophysics Data System (ADS)

    Izotov, Aleksandr; Rostova, Olga; Dubgorn, Alissa

    2018-03-01

    The paper is devoted to the problem of evaluation of high-rise construction projects in a rapidly changing environment. The authors proposed an algorithm for constructing and embedding real options in high-rise construction projects, which makes it possible to increase the flexibility of managing multi-stage projects that have the ability to adapt to changing conditions of implementation.
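
    For flavor, a textbook binomial lattice valuing a single embedded option to expand, one standard building block of such algorithms; all figures are invented, and a real application would embed several interacting options.

      import numpy as np

      V0, sigma, r, T, steps = 100.0, 0.3, 0.05, 2.0, 24   # value, volatility, rate, years
      expand_cost, expand_factor = 40.0, 1.5               # pay 40 to scale the project 1.5x

      dt = T / steps
      u = np.exp(sigma * np.sqrt(dt))
      dn = 1.0 / u
      p = (np.exp(r * dt) - dn) / (u - dn)                 # risk-neutral up probability
      disc = np.exp(-r * dt)

      V = V0 * u ** np.arange(steps, -1, -1) * dn ** np.arange(0, steps + 1)
      W = np.maximum(expand_factor * V - expand_cost, V)   # value at the horizon
      for k in range(steps - 1, -1, -1):                   # roll back through the lattice
          V = V0 * u ** np.arange(k, -1, -1) * dn ** np.arange(0, k + 1)
          W = disc * (p * W[:-1] + (1 - p) * W[1:])        # hold the option one more step
          W = np.maximum(W, expand_factor * V - expand_cost)  # or expand now

      print("project value with the expansion option: %.2f" % W[0])
      print("value added by the flexibility: %.2f" % (W[0] - V0))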

  3. The successive projection algorithm as an initialization method for brain tumor segmentation using non-negative matrix factorization.

    PubMed

    Sauwen, Nicolas; Acou, Marjan; Bharath, Halandur N; Sima, Diana M; Veraart, Jelle; Maes, Frederik; Himmelreich, Uwe; Achten, Eric; Van Huffel, Sabine

    2017-01-01

    Non-negative matrix factorization (NMF) has become a widely used tool for additive parts-based analysis in a wide range of applications. As NMF is a non-convex problem, the quality of the solution will depend on the initialization of the factor matrices. In this study, the successive projection algorithm (SPA) is proposed as an initialization method for NMF. SPA builds on convex geometry and allocates endmembers based on successive orthogonal subspace projections of the input data. SPA is a fast and reproducible method, and it aligns well with the assumptions made in near-separable NMF analyses. SPA was applied to multi-parametric magnetic resonance imaging (MRI) datasets for brain tumor segmentation using different NMF algorithms. Comparison with common initialization methods shows that SPA achieves similar segmentation quality and it is competitive in terms of convergence rate. Whereas SPA was previously applied as a direct endmember extraction tool, we have shown improved segmentation results when using SPA as an initialization method, as it allows further enhancement of the sources during the NMF iterative procedure.
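
    The geometric core of SPA fits in a dozen numpy lines: repeatedly pick the column with the largest residual norm and project all columns onto the orthogonal complement of the pick. The toy near-separable data below plants the pure columns at known positions so the output can be checked.

      import numpy as np

      def spa(X, k):
          """Successive projections: indices of k near-extreme columns of X."""
          R = X.astype(float).copy()
          picked = []
          for _ in range(k):
              j = int(np.argmax(np.linalg.norm(R, axis=0)))   # farthest remaining column
              u = R[:, j] / np.linalg.norm(R[:, j])
              R -= np.outer(u, u @ R)            # project onto the complement of u
              picked.append(j)
          return picked

      rng = np.random.default_rng(7)
      W = rng.random((20, 3))                    # 3 hidden sources
      H = rng.dirichlet(np.ones(3), 50).T        # 50 mixed columns
      H[:, :3] = np.eye(3)                       # plant pure columns at 0, 1, 2
      print("columns selected as endmembers:", spa(W @ H, 3))   # expect {0, 1, 2}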

  4. The HEP.TrkX Project: deep neural networks for HL-LHC online and offline tracking

    DOE PAGES

    Farrell, Steven; Anderson, Dustin; Calafiura, Paolo; ...

    2017-08-08

    Particle track reconstruction in dense environments such as the detectors of the High Luminosity Large Hadron Collider (HL-LHC) is a challenging pattern recognition problem. Traditional tracking algorithms such as the combinatorial Kalman Filter have been used with great success in LHC experiments for years. However, these state-of-the-art techniques are inherently sequential and scale poorly with the expected increases in detector occupancy in the HL-LHC conditions. The HEP.TrkX project is a pilot project with the aim to identify and develop cross-experiment solutions based on machine learning algorithms for track reconstruction. Machine learning algorithms bring a lot of potential to this problem thanks to their capability to model complex non-linear data dependencies, to learn effective representations of high-dimensional data through training, and to parallelize easily on high-throughput architectures such as GPUs. This contribution will describe our initial explorations into this relatively unexplored idea space. Furthermore, we will discuss the use of recurrent (LSTM) and convolutional neural networks to find and fit tracks in toy detector data.

  5. The HEP.TrkX Project: deep neural networks for HL-LHC online and offline tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrell, Steven; Anderson, Dustin; Calafiura, Paolo

    Particle track reconstruction in dense environments such as the detectors of the High Luminosity Large Hadron Collider (HL-LHC) is a challenging pattern recognition problem. Traditional tracking algorithms such as the combinatorial Kalman Filter have been used with great success in LHC experiments for years. However, these state-of-the-art techniques are inherently sequential and scale poorly with the expected increases in detector occupancy in the HL-LHC conditions. The HEP.TrkX project is a pilot project with the aim to identify and develop cross-experiment solutions based on machine learning algorithms for track reconstruction. Machine learning algorithms bring a lot of potential to this problem thanks to their capability to model complex non-linear data dependencies, to learn effective representations of high-dimensional data through training, and to parallelize easily on high-throughput architectures such as GPUs. This contribution will describe our initial explorations into this relatively unexplored idea space. Furthermore, we will discuss the use of recurrent (LSTM) and convolutional neural networks to find and fit tracks in toy detector data.

  6. The HEP.TrkX Project: deep neural networks for HL-LHC online and offline tracking

    NASA Astrophysics Data System (ADS)

    Farrell, Steven; Anderson, Dustin; Calafiura, Paolo; Cerati, Giuseppe; Gray, Lindsey; Kowalkowski, Jim; Mudigonda, Mayur; Prabhat; Spentzouris, Panagiotis; Spiropoulou, Maria; Tsaris, Aristeidis; Vlimant, Jean-Roch; Zheng, Stephan

    2017-08-01

    Particle track reconstruction in dense environments such as the detectors of the High Luminosity Large Hadron Collider (HL-LHC) is a challenging pattern recognition problem. Traditional tracking algorithms such as the combinatorial Kalman Filter have been used with great success in LHC experiments for years. However, these state-of-the-art techniques are inherently sequential and scale poorly with the expected increases in detector occupancy in the HL-LHC conditions. The HEP.TrkX project is a pilot project with the aim to identify and develop cross-experiment solutions based on machine learning algorithms for track reconstruction. Machine learning algorithms bring a lot of potential to this problem thanks to their capability to model complex non-linear data dependencies, to learn effective representations of high-dimensional data through training, and to parallelize easily on high-throughput architectures such as GPUs. This contribution will describe our initial explorations into this relatively unexplored idea space. We will discuss the use of recurrent (LSTM) and convolutional neural networks to find and fit tracks in toy detector data.

  7. Python and computer vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doak, J. E.; Prasad, Lakshman

    2002-01-01

    This paper discusses the use of Python in a computer vision (CV) project. We begin by providing background information on the specific approach to CV employed by the project. This includes a brief discussion of Constrained Delaunay Triangulation (CDT), the Chordal Axis Transform (CAT), shape feature extraction and syntactic characterization, and normalization of strings representing objects. (The terms 'object' and 'blob' are used interchangeably, both referring to an entity extracted from an image.) The rest of the paper focuses on the use of Python in three critical areas: (1) interactions with a MySQL database, (2) rapid prototyping of algorithms, and (3) gluing together all components of the project, including existing C and C++ modules. For (1), we provide a schema definition and discuss how the various tables interact to represent objects in the database as tree structures. (2) focuses on an algorithm to create a hierarchical representation of an object, given its string representation, and an algorithm to match unknown objects against objects in a database. And finally, (3) discusses the use of Boost Python to interact with the pre-existing C and C++ code that creates the CDTs and CATs, performs shape feature extraction and syntactic characterization, and normalizes object strings. The paper concludes with a vision of the future use of Python for the CV project.

  8. Fast local reconstruction by selective backprojection for low dose in dental computed tomography

    NASA Astrophysics Data System (ADS)

    Yan, Bin; Deng, Lin; Han, Yu; Zhang, Feng; Wang, Xian-Chao; Li, Lei

    2014-10-01

    The high radiation dose in computed tomography (CT) scans increases the lifetime risk of cancer, which has become a major clinical concern. The backprojection-filtration (BPF) algorithm can reduce the radiation dose by reconstructing images from truncated data in a short scan. In dental CT, it can reduce the radiation dose to the teeth by using projections acquired in a short scan, and can avoid irradiating other regions by using truncated projections. However, the limit of integration for backprojection varies per PI-line, resulting in low calculation efficiency and poor parallel performance. Recently, a tent BPF was proposed to improve calculation efficiency by rearranging the projections; however, it includes a memory-consuming data rebinning process. Accordingly, the selective BPF (S-BPF) algorithm is proposed in this paper. In this algorithm, the derivative of the projection is backprojected to the points whose x coordinate is less than that of the source focal spot to obtain the differentiated backprojection. The finite Hilbert inverse is then applied to each PI-line segment. S-BPF avoids the influence of the variable limit of integration by selective backprojection without additional time or memory cost. Simulation and real experiments demonstrate the higher reconstruction efficiency of S-BPF.

  9. LArSoft: toolkit for simulation, reconstruction and analysis of liquid argon TPC neutrino detectors

    NASA Astrophysics Data System (ADS)

    Snider, E. L.; Petrillo, G.

    2017-10-01

    LArSoft is a set of detector-independent software tools for the simulation, reconstruction and analysis of data from liquid argon (LAr) neutrino experiments. The common features of LAr time projection chambers (TPCs) enable sharing of algorithm code across detectors of very different size and configuration. LArSoft is currently used in production simulation and reconstruction by the ArgoNeuT, DUNE, LArIAT, MicroBooNE, and SBND experiments. The software suite offers a wide selection of algorithms and utilities, including those for associated photo-detectors and the handling of auxiliary detectors outside the TPCs. Available algorithms cover the full range of simulation and reconstruction, from raw waveforms to high-level reconstructed objects, event topologies and classification. The common code within LArSoft is contributed by adopting experiments, which also provide detector-specific geometry descriptions and code for the treatment of electronic signals. LArSoft is also a collaboration of experiments, Fermilab and associated software projects which cooperate in setting requirements, priorities, and schedules. In this talk, we outline the general architecture of the software and the interaction with external libraries and detector-specific code. We also describe the dynamics of LArSoft software development between the contributing experiments, the projects supporting the software infrastructure LArSoft relies on, and the core LArSoft support project.

  10. Recent Development of Multigrid Algorithms for Mixed and Nonconforming Methods for Second Order Elliptic Problems

    NASA Technical Reports Server (NTRS)

    Chen, Zhangxin; Ewing, Richard E.

    1996-01-01

    Multigrid algorithms for nonconforming and mixed finite element methods for second order elliptic problems on triangular and rectangular finite elements are considered. The construction of several coarse-to-fine intergrid transfer operators for nonconforming multigrid algorithms is discussed. The equivalence between the nonconforming and mixed finite element methods with and without projection of the coefficient of the differential problems into finite element spaces is described.

  11. Web-Based Library and Algorithm System for Satellite and Airborne Image Products

    DTIC Science & Technology

    2011-01-01

    the spectrum matching approach to inverting hyperspectral imagery created by Drs. C. Mobley (Sequoia Scientific) and P. Bissett (FERI). 5...matching algorithms developed by Sequoia Scientific and FERI. Testing and Implementation of Library This project will result in the delivery of a...transitioning VSW algorithms developed by Dr. Curtis D. Mobley at Sequoia Scientific, Inc., and Dr. Paul Bissett at FERI, under other 6.1/6.2 program funding.

  12. A Web-Based Library and Algorithm System for Satellite and Airborne Image Products

    DTIC Science & Technology

    2011-06-28

    Sequoia Scientific, Inc., and Dr. Paul Bissett at FERI, under other 6.1/6.2 program funding. 2 A Web-Based Library And Algorithm System For...of the spectrum matching approach to inverting hyperspectral imagery created by Drs. C. Mobley (Sequoia Scientific) and P. Bissett (FERI...algorithms developed by Sequoia Scientific and FERI. Testing and Implementation of Library This project will result in the delivery of a WeoGeo

  13. Electric Power Engineering Cost Predicting Model Based on the PCA-GA-BP

    NASA Astrophysics Data System (ADS)

    Wen, Lei; Yu, Jiake; Zhao, Xin

    2017-10-01

    In this paper, a hybrid prediction algorithm, the PCA-GA-BP model, is proposed. The PCA algorithm is used to reduce the correlation between indicators in the original data and to decrease the difficulty of the BP neural network's high-dimensional calculation. The BP neural network is established to estimate the cost of power transmission projects. The results show that the PCA-GA-BP algorithm can improve the prediction of electric power engineering cost.
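
    The PCA-BP backbone takes only a few scikit-learn lines; a genuine GA layer for tuning the network is out of scope here, so a small grid search stands in for it, and the data are synthetic.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import GridSearchCV

      rng = np.random.default_rng(8)
      X = rng.normal(size=(200, 12))                    # 12 correlated cost indicators
      X[:, 6:] = X[:, :6] + 0.1 * rng.normal(size=(200, 6))
      y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=200)   # "project cost"

      pipe = make_pipeline(StandardScaler(),
                           PCA(n_components=0.95),      # keep 95% of the variance
                           MLPRegressor(max_iter=2000, random_state=0))
      grid = {"mlpregressor__hidden_layer_sizes": [(8,), (16,), (16, 8)]}
      search = GridSearchCV(pipe, grid, cv=5, scoring="neg_mean_squared_error")
      search.fit(X, y)
      print("best hidden layout:", search.best_params_)
      print("CV RMSE: %.3f" % np.sqrt(-search.best_score_))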

  14. Self-Cohering Airborne Distributed Array

    DTIC Science & Technology

    1988-06-01

    algorithms under consideration (including the newly developed algorithms). The algorithms are classified both according to the type of processing and...4.1 RADIO CAMERA DATA FORMAT AND PROCEDURES (FROM C-23) The range trace delivered by each antenna element is stored as a row of complex numbers

  15. Conical Perspective Image of an Architectural Object Close to Human Perception

    NASA Astrophysics Data System (ADS)

    Dzwierzynska, Jolanta

    2017-10-01

    The aim of the study is to develop a method for computer-aided construction of a conical perspective of an architectural object that is close to human perception. The conical perspective considered in the paper is a central projection onto a projection surface that is a conical rotary surface or a fragment of one, while the centre of projection is a stationary point or a point moving on a circular path. The graphical mapping of the perspective representation is realized directly on the unrolled flat projection surface. The projective relation between a range of points on a line and the perspective image of the same range of points received on the projection surface permitted the derivation of formulas for drawing the perspective. Next, analytical algorithms for drawing the perspective image of a straight line passing through any two points were formulated. This enabled drawing a perspective wireframe image of a given 3D object. The use of a moving viewpoint, as well as treating the base elements of the perspective as changeable variables in the algorithms, enables drawing the conical perspective from different viewing positions. Owing to this, the perspective drawing method is universal. The algorithms are formulated and tested in Mathcad Professional software, but can be implemented in AutoCAD and the majority of computer graphics packages, which makes drawing a perspective image more efficient and easier. The presented conical perspective representation, and the convenient method of mapping it directly on the flat unrolled surface, can find application in numerous advertising and art presentations.

  16. Contrast adaptive total p-norm variation minimization approach to CT reconstruction for artifact reduction in reduced-view brain perfusion CT

    NASA Astrophysics Data System (ADS)

    Kim, Chang-Won; Kim, Jong-Hyo

    2011-03-01

    Perfusion CT (PCT) examinations are used increasingly often in the diagnosis of acute brain diseases such as hemorrhage and infarction, because the functional map images they produce, such as regional cerebral blood flow (rCBF), regional cerebral blood volume (rCBV), and mean transit time (MTT), may provide critical information in the emergency work-up of patient care. However, a typical PCT scan images the same slices several tens of times after injection of contrast agent, which greatly increases radiation dose and raises growing concern about radiation-induced cancer risk. Reducing the number of projection views in combination with total variation (TV) minimization reconstruction is regarded as one option for dose reduction. However, reconstruction artifacts due to an insufficient number of X-ray projections become problematic, especially when high contrast enhancement signals are present or patient motion occurs. In this study, we present a novel reconstruction technique using contrast-adaptive TpV minimization that reduces reconstruction artifacts effectively by using different p-norms for high-contrast and low-contrast objects. In the proposed method, high-contrast components are first reconstructed using thresholded projection data and a low p-norm total variation penalty, reflecting sparseness in both the projection and reconstruction spaces. Next, the projection data are modified to contain only low-contrast objects by forward-projecting the reconstructed high-contrast components and subtracting the result from the original projection data. The low-contrast projection data are then reconstructed using a relatively high p-norm TV minimization and combined with the reconstructed high-contrast component images to produce the final reconstructed images. The proposed algorithm was applied to a numerical phantom and a clinical data set of a brain PCT exam, and the resulting images were compared with those obtained using filtered back projection (FBP) and a conventional TV reconstruction algorithm. Our results show the potential of the proposed algorithm for image quality improvement, which in turn may allow dose reduction.
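
    The two-pass idea is easiest to see on a toy problem. The sketch below is a minimal 1D stand-in rather than the authors' CT implementation: it reconstructs a high-contrast component with a low p-norm (p = 0.5) smoothed-TpV penalty, subtracts its forward projection, and reconstructs the remainder with a higher p-norm (p = 1.5). The random matrix playing the projector, the 5.0 separation threshold, and all solver settings are assumptions.

    import numpy as np

    def grad(x):                       # forward difference, last entry zero
        return np.diff(x, append=x[-1])

    def grad_T(g):                     # adjoint of grad
        return -np.diff(g, prepend=0.0)

    def tpv_recon(A, y, p, lam=0.05, step=0.01, eps=1e-3, n_iter=3000):
        """Gradient descent on ||Ax - y||^2 + lam * sum (|grad x|^2 + eps)^(p/2)."""
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = grad(x)
            penalty_grad = grad_T(p * g * (g * g + eps) ** (p / 2 - 1))
            x -= step * (2 * A.T @ (A @ x - y) + lam * penalty_grad)
        return x

    rng = np.random.default_rng(1)
    n = 64
    x_true = np.zeros(n)
    x_true[20:28] = 10.0               # high-contrast object (e.g., enhanced vessel)
    x_true[40:55] = 1.0                # low-contrast tissue
    A = rng.normal(size=(48, n)) / np.sqrt(n)   # stand-in for an undersampled projector
    y = A @ x_true

    x_high = tpv_recon(A, y, p=0.5)    # pass 1: low p-norm favors sparse, high contrast
    x_high[x_high < 5.0] = 0.0         # hypothetical threshold isolating high contrast
    x_low = tpv_recon(A, y - A @ x_high, p=1.5)   # pass 2 on the residual projections
    print("relative error:",
          np.linalg.norm(x_high + x_low - x_true) / np.linalg.norm(x_true))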

  17. SU-F-J-23: Field-Of-View Expansion in Cone-Beam CT Reconstruction by Use of Prior Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haga, A; Magome, T; Nakano, M

    Purpose: Cone-beam CT (CBCT) has become an integral part of online patient setup in image-guided radiation therapy (IGRT). In addition, the utility of CBCT for dose calculation has been investigated actively. However, the limited size of the field-of-view (FOV), and the resulting CBCT image lacking the peripheral area of the patient body, compromise the reliability of dose calculation. In this study, we aim to develop an FOV-expanded CBCT in an IGRT system to allow dose calculation. Methods: Three lung cancer patients were selected for this study. We collected the cone-beam projection images in the CBCT-based IGRT system (X-ray volume imaging unit, ELEKTA), where the FOV of the CBCT provided with these projections was 410 × 410 mm² (normal FOV). Using these projections, a CBCT with a size of 728 × 728 mm² was reconstructed by an a posteriori estimation algorithm including prior image constrained compressed sensing (PICCS). The treatment planning CT was used as the prior image. To assess the effectiveness of the FOV expansion, a dose calculation was performed on the expanded CBCT image with a region-of-interest (ROI) density mapping method, and it was compared with that of the treatment planning CT as well as that of a CBCT reconstructed by the filtered back projection (FBP) algorithm. Results: The a posteriori estimation algorithm with PICCS clearly visualized the area outside the normal FOV, whereas the FBP algorithm yielded severe streak artifacts outside the normal FOV due to under-sampling. The dose calculation using the expanded CBCT agreed very well with that using the treatment planning CT; the maximum dose difference was 1.3% for gross tumor volumes. Conclusion: With an a posteriori estimation algorithm, the FOV in CBCT can be expanded. The dose comparison results suggest that the use of expanded CBCTs is acceptable for dose calculation in adaptive radiation therapy. This study has been supported by KAKENHI (15K08691).
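
    For readers unfamiliar with PICCS, the toy 1D sketch below minimizes a PICCS-style objective, a weighted sum of the total variation of the image and of its difference from a prior image, plus data fidelity, via plain gradient descent. The tiny random system matrix, weights, and step size are illustrative stand-ins for a real CBCT projector and a tuned solver.

    import numpy as np

    def tv_grad(x, eps=1e-3):
        """Gradient of the smoothed total variation sum(sqrt(|grad x|^2 + eps))."""
        g = np.diff(x, append=x[-1])
        return -np.diff(g / np.sqrt(g * g + eps), prepend=0.0)

    def piccs(A, y, x_prior, alpha=0.5, lam=0.1, step=0.01, n_iter=2000):
        x = x_prior.copy()                       # start from the prior image
        for _ in range(n_iter):
            data = 2 * A.T @ (A @ x - y)
            reg = alpha * tv_grad(x - x_prior) + (1 - alpha) * tv_grad(x)
            x -= step * (data + lam * reg)
        return x

    rng = np.random.default_rng(0)
    n = 64
    x_prior = np.zeros(n)
    x_prior[24:40] = 1.0                         # "planning CT" prior
    x_true = x_prior.copy()
    x_true[30:34] = 1.5                          # small anatomical change
    A = rng.normal(size=(24, n)) / np.sqrt(n)    # heavily undersampled toy projector
    x_rec = piccs(A, A @ x_true, x_prior)
    print("error of PICCS recon:", np.linalg.norm(x_rec - x_true))
    print("error of prior alone:", np.linalg.norm(x_prior - x_true))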

  18. WE-AB-303-08: Direct Lung Tumor Tracking Using Short Imaging Arcs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shieh, C; Huang, C; Keall, P

    2015-06-15

    Purpose: Most current tumor tracking technologies rely on implanted markers, which carry the potential toxicity of marker placement and the risk of mis-targeting due to marker migration. Several markerless tracking methods have been proposed, but these are either indirect or have difficulty tracking lung tumors in most clinical cases because of overlapping anatomy in 2D projection images. We propose a direct lung tumor tracking algorithm, robust to overlapping anatomy, that uses short imaging arcs. Methods: The proposed algorithm tracks the tumor based on kV projections acquired within the latest six-degree imaging arc. To account for respiratory motion, an external motion surrogate is used to select projections of the same phase within the latest arc. For each arc, the pre-treatment 4D cone-beam CT (CBCT) with tumor contours is used to estimate and remove the contribution of the surrounding anatomy to the integral attenuation. The position of the tumor model extracted from the 4D CBCT of the same phase is then optimized to match the processed projections using the conjugate gradient method. The algorithm was retrospectively validated on two kV scans of a lung cancer patient with implanted fiducial markers. This patient was selected because the tumor is attached to the mediastinum, representing a challenging case for markerless tracking methods. The tracking results were converted to expected marker positions and compared with marker trajectories obtained via direct marker segmentation (ground truth). Results: The root-mean-squared errors of tracking were 0.8 mm and 0.9 mm in the superior-inferior direction for the two scans. Tracking error was below 2 mm and 3 mm for 90% and 98% of the time, respectively. Conclusions: A direct lung tumor tracking algorithm robust to overlapping anatomy was proposed and validated on two scans of a lung cancer patient. Sub-millimeter tracking accuracy was observed, indicating the potential of this algorithm for real-time guidance applications.

  19. Multivariate quantile mapping bias correction: an N-dimensional probability density function transform for climate model simulations of multiple variables

    NASA Astrophysics Data System (ADS)

    Cannon, Alex J.

    2018-01-01

    Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. MBCn outperforms these alternatives, often by a large margin, particularly for annual maxima of the FWI distribution and spatiotemporal autocorrelation of precipitation fields.
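
    The core of the method, the iterated rotate/quantile-map/rotate-back loop of the N-dimensional pdf transfer, is compact enough to sketch. The toy below (synthetic Gaussian "model" and "observed" samples, equal sample sizes assumed) shows only this core; the full MBCn algorithm additionally preserves the model-projected changes in each variable's quantiles, which is not reproduced here.

    import numpy as np

    def random_rotation(dim, rng):
        """Haar-distributed orthogonal matrix via QR of a Gaussian matrix."""
        q, r = np.linalg.qr(rng.normal(size=(dim, dim)))
        return q * np.sign(np.diag(r))

    def quantile_map(model, obs):
        """Empirical quantile mapping per column (equal sample sizes assumed)."""
        out = np.empty_like(model)
        for j in range(model.shape[1]):
            ranks = np.argsort(np.argsort(model[:, j]))
            out[:, j] = np.sort(obs[:, j])[ranks]
        return out

    def npdf_transfer(model, obs, n_iter=30, seed=0):
        """Rotate, quantile-map each coordinate, rotate back, repeat."""
        rng = np.random.default_rng(seed)
        x = model.copy()
        for _ in range(n_iter):
            R = random_rotation(x.shape[1], rng)
            x = quantile_map(x @ R, obs @ R) @ R.T
        return x

    rng = np.random.default_rng(1)
    obs = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=1000)
    model = rng.multivariate_normal([2, -1], [[1.0, -0.3], [-0.3, 2.0]], size=1000)
    corrected = npdf_transfer(model, obs)
    print("inter-variable correlation after correction:",
          round(np.corrcoef(corrected.T)[0, 1], 2))   # close to the observed 0.8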

  20. Super-resolution reconstruction for 4D computed tomography of the lung via the projections onto convex sets approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yu, E-mail: yuzhang@smu.edu.cn, E-mail: qianjinfeng08@gmail.com; Wu, Xiuxiu; Yang, Wei

    2014-11-01

    Purpose: The use of 4D computed tomography (4D-CT) of the lung is important in lung cancer radiotherapy for tumor localization and treatment planning. Sometimes dense sampling is not acquired along the superior-inferior direction, which results in an interslice thickness much greater than the in-plane voxel resolution. Isotropic resolution is necessary for multiplanar display, but the commonly used interpolation operation blurs images. This paper presents a super-resolution (SR) reconstruction method to enhance 4D-CT resolution. Methods: The authors assume that the low-resolution images of different phases at the same position can be regarded as input "frames" from which to reconstruct high-resolution images. The SR technique is used to recover the high-resolution images. Specifically, the Demons deformable registration algorithm is used to estimate the motion field between different "frames," and the projection onto convex sets (POCS) approach is then applied to reconstruct high-resolution lung images. Results: The performance of the SR algorithm is evaluated using both simulated and real datasets. The method generates clearer lung images and enhances image structure compared with cubic spline interpolation and the back projection (BP) method. Quantitative analysis shows that the proposed algorithm decreases the root mean square error by 40.8% relative to cubic spline interpolation and by 10.2% relative to BP. Conclusions: A new algorithm has been developed to improve the resolution of 4D-CT. It outperforms the cubic spline interpolation and BP approaches, producing images with markedly improved structural clarity and greatly reduced artifacts.
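
    A minimal 1D caricature of the POCS step is sketched below: two shifted, blurred, downsampled "frames" of a high-resolution signal are treated as convex data-consistency constraints, and the estimate is repeatedly projected onto each constraint. The assumed 3-tap PSF and the perfectly known integer shifts stand in for the Demons-estimated motion field of the paper.

    import numpy as np

    n, factor = 64, 2
    h = np.array([0.25, 0.5, 0.25])            # assumed sensor blur (PSF)
    t = np.linspace(0, 1, n)
    x_true = np.sin(4 * np.pi * t) + (t > 0.5)  # high-resolution test signal

    def observe(x_hr, shift):                  # blur + shift + decimate by `factor`
        centers = np.arange(shift + 1, n - 1, factor)
        return centers, np.array([h @ x_hr[c - 1:c + 2] for c in centers])

    frames = [observe(x_true, s) for s in (0, 1)]   # two "phases" = two shifts

    centers0, y0 = frames[0]
    x = np.interp(np.arange(n), centers0, y0)  # crude initial upsampling
    for _ in range(200):                       # POCS sweeps
        for centers, y in frames:
            for c, y_k in zip(centers, y):
                w = x[c - 1:c + 2]
                r = y_k - h @ w                # violation of one data constraint
                x[c - 1:c + 2] = w + r * h / (h @ h)   # orthogonal projection onto it
    print("mean abs error:", np.abs(x - x_true).mean())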

  1. First-Principle Construction of U(1) Symmetric Matrix Product States

    NASA Astrophysics Data System (ADS)

    Rakov, Mykhailo V.

    2018-07-01

    The algorithm to calculate the sets of symmetry sectors for the virtual indices of U(1) symmetric matrix product states (MPS) is described. The principal differences between open (OBC) and periodic (PBC) boundary conditions are stressed, and the extension of the PBC MPS algorithm to projected entangled pair states is outlined.
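
    One ingredient of such a construction, enumerating the charge sectors allowed on each virtual bond of an OBC MPS, can be sketched in a few lines: a sector is admissible only if it is reachable by fusing physical charges from the left and can still reach the fixed total charge using the remaining sites. The function below is a hedged illustration of that bookkeeping, not the paper's algorithm.

    def bond_sectors(phys_charges, n_sites, total):
        """Allowed U(1) sectors on each virtual bond of an OBC MPS."""
        # Charges reachable by fusing k physical charges from the left edge.
        left = [{0}]
        for _ in range(n_sites):
            left.append({q + c for q in left[-1] for c in phys_charges})
        # Charges that can still be completed to `total` by the remaining sites.
        right = [{total}]
        for _ in range(n_sites):
            right.append({q - c for q in right[-1] for c in phys_charges})
        right.reverse()
        # A sector is admissible only if it is reachable from both ends.
        return [sorted(left[k] & right[k]) for k in range(n_sites + 1)]

    # Spin-1/2 chain (charges +/-1 in units of 2*Sz), 4 sites, total Sz = 0:
    for k, sectors in enumerate(bond_sectors([+1, -1], 4, 0)):
        print(f"bond {k}: {sectors}")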

  2. Understanding Algorithms in Different Presentations

    ERIC Educational Resources Information Center

    Csernoch, Mária; Biró, Piroska; Abari, Kálmán; Máth, János

    2015-01-01

    Within the framework of the Testing Algorithmic and Application Skills project we tested first year students of Informatics at the beginning of their tertiary education. We were focusing on the students' level of understanding in different programming environments. In the present paper we provide the results from the University of Debrecen, the…

  3. The Xmath Integration Algorithm

    ERIC Educational Resources Information Center

    Bringslid, Odd

    2009-01-01

    The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission through the so-called Minerva Action (Xmath) and the Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms for a wide range of undergraduate mathematical issues embedded…

  4. Evaluation of observation-driven evaporation algorithms: results of the WACMOS-ET project

    NASA Astrophysics Data System (ADS)

    Miralles, Diego G.; Jimenez, Carlos; Ershadi, Ali; McCabe, Matthew F.; Michel, Dominik; Hirschi, Martin; Seneviratne, Sonia I.; Jung, Martin; Wood, Eric F.; (Bob) Su, Z.; Timmermans, Joris; Chen, Xuelong; Fisher, Joshua B.; Mu, Qiaozhen; Fernandez, Diego

    2015-04-01

    Terrestrial evaporation (ET) links the continental water, energy and carbon cycles. Understanding the magnitude and variability of ET at the global scale is an essential step towards reducing uncertainties in our projections of climatic conditions and water availability for the future. However, the need for global observational data on ET can be satisfied neither by our sparse global in-situ networks nor by existing satellite sensors (which cannot measure evaporation directly from space). This situation has led to the recent rise of several algorithms dedicated to deriving ET fields from satellite data indirectly, based on combinations of ET drivers that can be observed from space (e.g. radiation, temperature, phenological variability, water content). These algorithms can either be based on physics (e.g. Priestley and Taylor or Penman-Monteith approaches) or be purely statistical (e.g. machine learning). However, and despite the efforts of initiatives like GEWEX LandFlux (Jimenez et al., 2011; Mueller et al., 2013), the uncertainties inherent in the resulting global ET datasets remain largely unexplored, partly due to a lack of inter-product consistency in forcing data. In response to this need, the ESA WACMOS-ET project started in 2012 with the main objectives of (a) developing a Reference Input Data Set to derive and validate ET estimates, and (b) performing a cross-comparison, error characterization and validation exercise for a group of selected ET algorithms driven by this Reference Input Data Set and by in-situ forcing data. The algorithms tested are SEBS (Su et al., 2002), the Penman-Monteith approach from MODIS (Mu et al., 2011), the Priestley and Taylor JPL model (Fisher et al., 2008), the MPI-MTE model (Jung et al., 2010) and GLEAM (Miralles et al., 2011). In this presentation we will show the first results from the ESA WACMOS-ET project. The performance of the different algorithms at multiple spatial and temporal scales for the 2005-2007 reference period will be presented, and the skill of these algorithms in closing the water balance over the continents will be assessed by comparison to runoff data. The consistency in forcing data will make it possible to (a) evaluate the skill of these five algorithms in producing ET over particular ecosystems, (b) facilitate the attribution of the observed differences to either the algorithms or the driving data, and (c) set up a solid scientific basis for the development of global long-term benchmark ET products. Project progress can be followed on our website http://wacmoset.estellus.eu. REFERENCES: Fisher, J.B., Tu, K.P., and Baldocchi, D.D. Global estimates of the land-atmosphere water flux based on monthly AVHRR and ISLSCP-II data, validated at 16 FLUXNET sites. Remote Sens. Environ. 112, 901-919, 2008. Jiménez, C. et al. Global intercomparison of 12 land surface heat flux estimates. J. Geophys. Res. 116, D02102, 2011. Jung, M. et al. Recent decline in the global land evapotranspiration trend due to limited moisture supply. Nature 467, 951-954, 2010. Miralles, D.G. et al. Global land-surface evaporation estimated from satellite-based observations. Hydrol. Earth Syst. Sci. 15, 453-469, 2011. Mu, Q., Zhao, M. & Running, S.W. Improvements to a MODIS global terrestrial evapotranspiration algorithm. Remote Sens. Environ. 115, 1781-1800, 2011. Mueller, B. et al. Benchmark products for land evapotranspiration: LandFlux-EVAL multi-dataset synthesis. Hydrol. Earth Syst. Sci. 17, 3707-3720, 2013. Su, Z. The Surface Energy Balance System (SEBS) for estimation of turbulent heat fluxes. Hydrol. Earth Syst. Sci. 6, 85-99, 2002.

  5. Comparing Methods for Dynamic Airspace Configuration

    NASA Technical Reports Server (NTRS)

    Zelinski, Shannon; Lai, Chok Fung

    2011-01-01

    This paper compares airspace design solutions for dynamically reconfiguring airspace in response to nominal daily traffic volume fluctuations. Airspace designs from seven algorithmic methods and a representation of current-day operations in Kansas City Center were simulated with twice today's traffic demand. A three-configuration scenario was used to represent current-day operations. The algorithms used projected unimpeded flight tracks to design initial 24-hour plans that switch between three configurations at predetermined reconfiguration times. At each reconfiguration time, the algorithms used updated projected flight tracks to revise the subsequent planned configurations. Compared to the baseline, most airspace design methods reduced delay and increased reconfiguration complexity, with similar traffic-pattern complexity results. Design updates enabled several methods to cut the delay of their original designs by as much as half. Freeform design methods reduced delay, and increased reconfiguration complexity, the most.

  6. Estimation of potential evapotranspiration from extraterrestrial radiation, air temperature and humidity to assess future climate change effects on the vegetation of the Northern Great Plains, USA

    USGS Publications Warehouse

    King, David A.; Bachelet, Dominique M.; Symstad, Amy J.; Ferschweiler, Ken; Hobbins, Michael

    2014-01-01

    The potential evapotranspiration (PET) that would occur with unlimited plant access to water is a central driver of simulated plant growth in many ecological models. PET is influenced by solar and longwave radiation, temperature, wind speed, and humidity, but it is often modeled as a function of temperature alone. This approach can cause biases in projections of future climate impacts, in part because it confounds the effects of warming due to increased greenhouse gases with those that would be caused by increased radiation from the sun. We developed an algorithm for linking PET to extraterrestrial solar radiation (incoming top-of-atmosphere solar radiation), as well as temperature and atmospheric water vapor pressure, and incorporated this algorithm into the dynamic global vegetation model MC1. We tested the new algorithm for the Northern Great Plains, USA, whose remaining grasslands are threatened by continuing woody encroachment. Both the new algorithm and the standard temperature-dependent MC1 algorithm adequately simulated current PET, as compared to the more rigorous PenPan model of Rotstayn et al. (2006). However, compared to the standard algorithm, the new algorithm projected a much more gradual increase in PET over the 21st century for three contrasting future climates. This difference led to lower simulated drought effects and hence greater woody encroachment with the new algorithm, illustrating the importance of more rigorous calculations of PET in ecological models dealing with climate change.
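
    The abstract does not give the MC1 formulas, but its key input is standard: daily extraterrestrial radiation follows from latitude and day of year alone (FAO-56 formulas), and a radiation-plus-temperature PET estimate can then be formed. The sketch below pairs that radiation term with the well-known Hargreaves equation purely for illustration; MC1's actual PET algorithm also uses vapor pressure and differs in detail.

    import numpy as np

    def extraterrestrial_radiation(lat_deg, doy):
        """Daily extraterrestrial radiation Ra in MJ m-2 day-1 (FAO-56 eq. 21)."""
        gsc = 0.0820                                    # solar constant, MJ m-2 min-1
        phi = np.radians(lat_deg)
        dr = 1 + 0.033 * np.cos(2 * np.pi * doy / 365)        # earth-sun distance
        delta = 0.409 * np.sin(2 * np.pi * doy / 365 - 1.39)  # solar declination
        ws = np.arccos(-np.tan(phi) * np.tan(delta))          # sunset hour angle
        return (24 * 60 / np.pi) * gsc * dr * (
            ws * np.sin(phi) * np.sin(delta)
            + np.cos(phi) * np.cos(delta) * np.sin(ws)
        )

    def pet_hargreaves(lat_deg, doy, tmin, tmax):
        """Hargreaves PET in mm/day; 0.408 converts MJ m-2 day-1 to mm of water."""
        ra = extraterrestrial_radiation(lat_deg, doy)
        tmean = (tmin + tmax) / 2
        return 0.0023 * 0.408 * ra * (tmean + 17.8) * np.sqrt(tmax - tmin)

    # Mid-July at ~44 N (Northern Great Plains), a 15/30 C day: roughly 6 mm/day.
    print(round(pet_hargreaves(44.0, 196, 15.0, 30.0), 2), "mm/day")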

  7. Filtered-backprojection reconstruction for a cone-beam computed tomography scanner with independent source and detector rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rit, Simon, E-mail: simon.rit@creatis.insa-lyon.fr; Clackdoyle, Rolf; Keuschnigg, Peter

    Purpose: A new cone-beam CT scanner for image-guided radiotherapy (IGRT) can independently rotate the source and the detector along circular trajectories. Existing reconstruction algorithms are not suitable for this scanning geometry. The authors propose and evaluate a three-dimensional (3D) filtered-backprojection reconstruction for this situation. Methods: The source and detector trajectories are tuned to image a field-of-view (FOV) that is offset with respect to the center of rotation. The new reconstruction formula is derived from the Feldkamp algorithm and results in a similar three-step algorithm: projection weighting, ramp filtering, and weighted backprojection. Simulations of a Shepp-Logan digital phantom were used to evaluate the new algorithm with a 10 cm-offset FOV. A real cone-beam CT image with an 8.5 cm-offset FOV was also obtained from projections of an anthropomorphic head phantom. Results: The quality of the cone-beam CT images reconstructed using the new algorithm was similar to that of images reconstructed with the Feldkamp algorithm used in conventional cone-beam CT. The real image of the head phantom exhibited image quality comparable to that of existing systems. Conclusions: The authors have proposed a 3D filtered-backprojection reconstruction for scanners with independent source and detector rotations that is practical and effective. This algorithm forms the basis for exploiting the scanner's unique capabilities in IGRT protocols.
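
    Of the three steps named above, ramp filtering is the most self-contained, and a hedged sketch of it is given below: an FFT implementation of the Ram-Lak |f| response applied along detector rows, with zero-padding against wraparound (scaling constants omitted). The projection weighting and the weighted backprojection, including the redundancy weighting an offset FOV requires, depend on the scanner geometry and are not shown.

    import numpy as np

    def ramp_filter(projection_rows, pixel_pitch):
        """Apply the ramp (Ram-Lak) filter along the last axis of a 2D projection."""
        n = projection_rows.shape[-1]
        n_pad = 2 * int(2 ** np.ceil(np.log2(n)))   # zero-pad against wraparound
        freqs = np.fft.fftfreq(n_pad, d=pixel_pitch)
        kernel = np.abs(freqs)                      # |f| ramp response
        spectrum = np.fft.fft(projection_rows, n=n_pad, axis=-1)
        filtered = np.fft.ifft(spectrum * kernel, axis=-1).real
        return filtered[..., :n]

    rows = np.random.default_rng(0).random((4, 256))    # toy detector rows
    print(ramp_filter(rows, pixel_pitch=0.5).shape)     # (4, 256)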

  8. Photodynamic inactivation of Candida albicans sensitized by tri- and tetra-cationic porphyrin derivatives.

    PubMed

    Cormick, M Paula; Alvarez, M Gabriela; Rovera, Marisa; Durantini, Edgardo N

    2009-04-01

    The photodynamic action of 5-(4-trifluorophenyl)-10,15,20-tris(4-trimethylammoniumphenyl)porphyrin iodide (TFAP(3+)) and 5,10,15,20-tetra(4-N,N,N-trimethylammoniumphenyl)porphyrin p-tosylate (TMAP(4+)) has been studied in vitro on Candida albicans. The results for these cationic porphyrins were compared with those for 5,10,15,20-tetra(4-sulphonatophenyl)porphyrin (TPPS(4-)), a representative anionic sensitizer. In vitro investigations show that the cationic porphyrins bind rapidly to C. albicans cells, reaching a value of approximately 1.4 nmol/10(6) cells when the cellular suspensions were incubated with 5 microM sensitizer for 30 min. In contrast, TPPS(4-) is taken up poorly by the yeast cells; the fluorescence spectra of these sensitizers inside the cells confirm this behaviour. The amount of porphyrin bound to cells depends on both the sensitizer concentration (1-5 microM) and the cell density (10(6)-10(8) cells/mL). Photosensitized inactivation of C. albicans cellular suspensions increases with sensitizer concentration, causing an approximately 5 log decrease in cell survival when the cultures are treated with 5 microM cationic porphyrin and irradiated for 30 min. However, the photocytotoxicity decreases as the cell density increases, consistent with the lower binding per cell. Under these conditions, the photodynamic activity of TFAP(3+) is quite similar to that of TMAP(4+), whereas no important inactivation effect was found for TPPS(4-). The high photodynamic activity of the cationic porphyrins was confirmed by growth delay experiments; C. albicans cell growth was not detected in the presence of 5 microM TFAP(3+). The photodynamic inactivation capacities of these sensitizers were also evaluated on C. albicans cells growing in colonies on agar surfaces. The cationic porphyrins produce a growth delay of C. albicans colonies, and cell viability was not observed after 3 h of irradiation, indicating complete inactivation of the yeast cells. Therefore, these results indicate that these cationic porphyrins are interesting sensitizers for the photodynamic inactivation of yeasts in liquid suspensions or in localized foci of infection.

  9. Test-Retest Reliability of Memory Task fMRI in Alzheimer’s Disease Clinical Trials

    PubMed Central

    Atri, Alireza; O’Brien, Jacqueline L.; Sreenivasan, Aishwarya; Rastegar, Sarah; Salisbury, Sibyl; DeLuca, Amy N.; O’Keefe, Kelly M.; LaViolette, Peter S.; Rentz, Dorene M.; Locascio, Joseph J.; Sperling, Reisa A.

    2012-01-01

    Objective: To examine the feasibility and test-retest reliability of encoding-task functional MRI (fMRI) in mild Alzheimer's disease (AD). Design: Randomized, double-blind, placebo-controlled (RCT) study. Setting: Memory clinical trials unit. Participants: Twelve subjects with mild AD (MMSE 24.0±0.7, CDR 1), on >6 months of stable donepezil, from the placebo arm of a larger 24-week study (n=24; four scans at weeks 0, 6, 12, 24). Interventions: Placebo and three face-name paired-associate encoding, block-design BOLD-fMRI scans in 12 weeks. Main Outcomes: Whole-brain t-maps (p<0.001, 5 contiguous voxels) and hippocampal region-of-interest (ROI) analyses of extent (EXT, % voxels active) and magnitude (MAG, % signal change) for Novel-greater-than-Repeated (N>R) face-name contrasts; calculation of intraclass correlations (ICC) and power estimates for hippocampal ROIs. Results: Task tolerability and data yield were high (95 of 96 scans yielded good-quality data). Whole-brain maps were stable. Right and left hippocampal ROI ICCs were 0.59–0.87 and 0.67–0.74, respectively. Detecting 25–50% changes in 0–12 week hippocampal activity using L/R-EXT or R-MAG with 80% power (two-sided α=0.05) requires 14–51 subjects; using L-MAG requires >125 subjects due to a relatively small signal-to-variance ratio. Conclusions: Encoding-task fMRI was successfully implemented in a single-site, 24-week AD RCT. Week 0–12 whole-brain t-maps were stable, and the test-retest reliability of hippocampal fMRI measures ranged from moderate to substantial. Right hippocampal MAG may be the most promising of these candidate measures in a leveraged context. These initial estimates of test-retest reliability and power justify evaluating encoding-task fMRI as a potential biomarker for signal-of-effect in exploratory and proof-of-concept trials in mild AD. Validation of these results with larger sample sizes and assessment in multi-site studies is warranted. PMID:21555634

  10. Choosing the appropriate forecasting model for predictive parameter control.

    PubMed

    Aleti, Aldeida; Moser, Irene; Meedeniya, Indika; Grunske, Lars

    2014-01-01

    All commonly used stochastic optimisation algorithms have to be parameterised to perform effectively. Adaptive parameter control (APC) is an effective method for this purpose: it repeatedly adjusts parameter values during the optimisation process for optimal algorithm performance, with the assignment of parameter values for a given iteration based on previously measured performance. In recent research, time series prediction has been proposed as a method of projecting the probabilities used for parameter value selection. In this work, we examine the suitability of a variety of prediction methods for projecting future parameter performance from previous data. Every prediction method considered makes assumptions that the time series data must satisfy for its projections to be accurate. Looking specifically at parameters of evolutionary algorithms (EAs), we find that all standard EA parameters, with the exception of population size, conform largely to the assumptions made by the considered prediction methods. Evaluating the performance of these prediction methods, we find that linear regression provides the best results, by a very small and statistically insignificant margin. Regardless of the prediction method, predictive parameter control outperforms state-of-the-art parameter control methods when the performance data adhere to the assumptions made by the prediction method. When a parameter's performance data do not adhere to those assumptions, the use of prediction has no notable adverse impact on the algorithm's performance.
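
    A small sketch of the scheme, under assumptions of our own choosing, is given below: each candidate parameter value keeps a performance history, the next performance is forecast by fitting a least-squares line to a recent window (the linear-regression choice the study found best), and values are then sampled in proportion to their forecasts. The surrogate "EA iteration" is synthetic, not a real evolutionary algorithm.

    import numpy as np

    def forecast(history, window=10):
        """Project the next performance value by linear extrapolation."""
        y = np.asarray(history[-window:], dtype=float)
        if len(y) < 2:
            return y[-1] if len(y) else 0.0
        t = np.arange(len(y))
        slope, intercept = np.polyfit(t, y, 1)      # least-squares line
        return slope * len(y) + intercept

    def choose(histories, rng):
        """Sample a parameter value in proportion to its forecast performance."""
        preds = np.array([max(forecast(h), 1e-9) for h in histories])
        return rng.choice(len(histories), p=preds / preds.sum())

    rng = np.random.default_rng(0)
    mutation_rates = [0.01, 0.05, 0.2]              # hypothetical candidate values
    histories = [[1.0], [1.0], [1.0]]
    for step in range(100):
        i = choose(histories, rng)
        # Stand-in for one EA iteration: observed performance of the chosen rate.
        observed = np.exp(-mutation_rates[i] * step) + 0.1 * rng.random()
        histories[i].append(observed)
    print("selection counts:", [len(h) - 1 for h in histories])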

  11. Using a Genetic Algorithm to Learn Behaviors for Autonomous Vehicles,

    DTIC Science & Technology

    1992-08-12

    Truly autonomous vehicles will require both projective planning and reactive components in order to perform robustly. Projective components are...long time period. This work addresses the problem of creating reactive components for autonomous vehicles. Creating reactive behaviors (stimulus

  12. A General Algorithm for Reusing Krylov Subspace Information. I. Unsteady Navier-Stokes

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Vuik, C.; Lucas, Peter; vanGijzen, Martin; Bijl, Hester

    2010-01-01

    A general algorithm is developed that reuses available information to accelerate the iterative convergence of linear systems with multiple right-hand sides Ax = b^(i), which are commonly encountered in steady or unsteady simulations of nonlinear equations. The algorithm is based on the classical GMRES algorithm with eigenvector enrichment but also includes a Galerkin projection preprocessing step and several novel Krylov subspace reuse strategies. The new approach is applied to a set of test problems, including an unsteady turbulent airfoil, and is shown in some cases to provide significant improvement in computational efficiency relative to baseline approaches.
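
    The Galerkin projection preprocessing step mentioned here is simple to illustrate: before solving for a new right-hand side, solve the small projected system on a retained basis V and use the result as the initial guess. The sketch below, with a toy dense system and SciPy's stock GMRES standing in for the enriched solver, shows the residual reduction such a warm start can buy when successive right-hand sides vary slowly, as in unsteady simulations.

    import numpy as np
    from scipy.sparse.linalg import gmres

    def galerkin_guess(A, b, V):
        """Solve the small projected system (V^T A V) y = V^T b; return x0 = V y."""
        y = np.linalg.solve(V.T @ (A @ V), V.T @ b)
        return V @ y

    rng = np.random.default_rng(0)
    n = 200
    A = np.eye(n) + 0.1 * rng.normal(size=(n, n))   # toy nonsymmetric system
    b1 = rng.normal(size=n)
    x1, _ = gmres(A, b1)                            # first right-hand side, cold start
    V, _ = np.linalg.qr(np.column_stack([b1, x1, A @ x1]))  # retained subspace basis

    b2 = b1 + 0.05 * rng.normal(size=n)             # next step: slowly varying RHS
    x0 = galerkin_guess(A, b2, V)
    print("cold-start residual:   ", np.linalg.norm(b2))
    print("after Galerkin guess:  ", np.linalg.norm(b2 - A @ x0))
    x2, info = gmres(A, b2, x0=x0)                  # GMRES finishes from the warm start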

  13. Algorithm implementation on the Navier-Stokes computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krist, S.E.; Zang, T.A.

    1987-03-01

    The Navier-Stokes Computer is a multi-purpose parallel-processing supercomputer which is currently under development at Princeton University. It consists of multiple local memory parallel processors, called Nodes, which are interconnected in a hypercube network. Details of the procedures involved in implementing an algorithm on the Navier-Stokes computer are presented. The particular finite difference algorithm considered in this analysis was developed for simulation of laminar-turbulent transition in wall bounded shear flows. Projected timing results for implementing this algorithm indicate that operation rates in excess of 42 GFLOPS are feasible on a 128 Node machine.

  14. Algorithm implementation on the Navier-Stokes computer

    NASA Technical Reports Server (NTRS)

    Krist, Steven E.; Zang, Thomas A.

    1987-01-01

    The Navier-Stokes Computer is a multi-purpose parallel-processing supercomputer which is currently under development at Princeton University. It consists of multiple local memory parallel processors, called Nodes, which are interconnected in a hypercube network. Details of the procedures involved in implementing an algorithm on the Navier-Stokes computer are presented. The particular finite difference algorithm considered in this analysis was developed for simulation of laminar-turbulent transition in wall bounded shear flows. Projected timing results for implementing this algorithm indicate that operation rates in excess of 42 GFLOPS are feasible on a 128 Node machine.

  15. Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)

    2000-01-01

    In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
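
    The nonlinear gain idea is easy to make concrete. The sketch below uses an odd third-order polynomial, normalized so that small inputs are attenuated less than large ones while the largest expected input lands exactly on the platform limit; the coefficient c1 (and the monotonicity condition c1 <= 1.5) are illustrative choices, not the project's tuned values.

    import numpy as np

    def nonlinear_gain(u, u_max, y_max, c1=1.4):
        """Odd cubic y = y_max*(c1*t + (1 - c1)*t^3) with t = u/u_max.
        Monotone for c1 <= 1.5; small inputs see gain c1*y_max/u_max, and the
        largest input maps exactly onto the platform limit y_max."""
        t = np.clip(u / u_max, -1.0, 1.0)
        return y_max * (c1 * t + (1.0 - c1) * t ** 3)

    u = np.linspace(-10.0, 10.0, 5)                  # aircraft acceleration commands
    print(nonlinear_gain(u, u_max=10.0, y_max=4.0))  # stays within +/- 4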

  16. In Vivo Measurement of Drug Efficacy in Breast Cancer

    DTIC Science & Technology

    2015-10-01

    treatment. SUBJECT TERMS: Breast Cancer, Intravital Imaging, Nanoparticles, Pharmacokinetics/Pharmacodynamics, Chemotherapy, Drug Distribution...drug testing in year 2 of the project. KEYWORDS: Breast Cancer, Intravital Imaging, Nanoparticles, Pharmacokinetics/Pharmacodynamics, Chemotherapy...The left side of the diagram displays the overall proposed algorithm for analyzing intravital images and determining drug concentration. This algorithm is

  17. Automatic Program Synthesis Reports.

    ERIC Educational Resources Information Center

    Biermann, A. W.; And Others

    Some of the major results and future goals of an automatic program synthesis project are described in the two papers that comprise this document. The first paper gives a detailed algorithm for synthesizing a computer program from a trace of its behavior. Since the algorithm involves a search, the length of time required to do the synthesis of…

  18. Dynamic Group Formation Based on a Natural Phenomenon

    ERIC Educational Resources Information Center

    Zedadra, Amina; Lafifi, Yacine; Zedadra, Ouarda

    2016-01-01

    This paper presents a new approach to grouping learners in collaborative learning systems. The grouping process is based on traces left by learners, with the goal of circular dynamic grouping for carrying out collaborative projects. The proposed approach consists of two main algorithms: (1) the circular grouping algorithm and (2) the dynamic grouping…

  19. Study of one- and two-dimensional filtering and deconvolution algorithms for a streaming array computer

    NASA Technical Reports Server (NTRS)

    Ioup, G. E.

    1985-01-01

    Appendix 5 of the Study of One- and Two-Dimensional Filtering and Deconvolution Algorithms for a Streaming Array Computer includes a resume of the professional background of the Principal Investigator on the project, lists of his publications and research papers, graduate theses supervised, and grants received.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, M; Yuan, Y; Lo, Y

    Purpose: To develop a novel strategy to extract lung tumor motion from cone-beam CT (CBCT) projections using an active contour model, with respiration interpolated from diaphragm motion. Methods: Tumor tracking on CBCT projections was accomplished with templates derived from the planning CT (pCT). There are three major steps in the proposed algorithm: 1) the pCT was modified to form two CT sets, a tumor-removed pCT and a tumor-only pCT, and the respective digitally reconstructed radiographs, DRRtr and DRRto, were generated following the same geometry as the CBCT projections; 2) the DRRtr was rigidly registered with the CBCT projections on a frame-by-frame basis, and difference images between the CBCT projections and the registered DRRtr were generated, in which tumor visibility was appreciably enhanced; 3) an active contour method was applied to track the tumor motion on the tumor-enhanced projections, with DRRto used as templates to initialize the tracking, while the respiratory motion was compensated for by interpolating the diaphragm motion estimated by our novel constrained linear regression approach. CBCT and pCT from five patients undergoing stereotactic body radiotherapy were included, in addition to scans from a Quasar phantom programmed with known motion. Manual tumor tracking was performed on the CBCT projections and compared with the automatic tracking to evaluate the algorithm's accuracy. Results: The phantom study showed that the error between the automatic tracking and the ground truth was within 0.2 mm. For the patients, the discrepancy between the calculation and the manual tracking was between 1.4 and 2.2 mm, depending on the location and shape of the lung tumor. Similar patterns were observed in the frequency domain. Conclusion: The new algorithm demonstrated the feasibility of tracking the lung tumor from noisy CBCT projections, providing a potential solution to better motion management for lung radiation therapy.
