Polarization effects on hard target calibration of lidar systems
NASA Technical Reports Server (NTRS)
Kavaya, Michael J.
1987-01-01
The theory of hard target calibration of lidar backscatter data, including laboratory measurements of the pertinent target reflectance parameters, is extended to include the effects of polarization of the transmitted and received laser radiation. The bidirectional reflectance-distribution function model of reflectance is expanded to a 4 x 4 matrix allowing Mueller matrix and Stokes vector calculus to be employed. Target reflectance parameters for calibration of lidar backscatter data are derived for various lidar system polarization configurations from integrating sphere and monostatic reflectometer measurements. It is found that correct modeling of polarization effects is mandatory for accurate calibration of hard target reflectance parameters and, therefore, for accurate calibration of lidar backscatter data.
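A minimal mathematical sketch of the Stokes-Mueller formalism the abstract refers to (generic notation; the 4 x 4 BRDF matrix elements derived in the paper are not reproduced here):

$$ \mathbf{S}_{\mathrm{rec}} \;=\; \mathbf{M}_{\mathrm{rx}}\,\mathbf{F}(\theta_i,\phi_i;\theta_r,\phi_r)\,\mathbf{M}_{\mathrm{tx}}\,\mathbf{S}_{\mathrm{laser}}, \qquad \mathbf{S} = (I,\,Q,\,U,\,V)^{\mathsf T}, $$

where $\mathbf{F}$ is the 4 x 4 Mueller-matrix generalization of the bidirectional reflectance-distribution function and $\mathbf{M}_{\mathrm{tx}}$, $\mathbf{M}_{\mathrm{rx}}$ are the Mueller matrices of the transmit and receive optics; the calibration parameter for a given lidar polarization configuration follows from the corresponding combination of elements of $\mathbf{F}$.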
Techniques for evaluating optimum data center operation
Hamann, Hendrik F.; Rodriguez, Sergio Adolfo Bermudez; Wehle, Hans-Dieter
2017-06-14
Techniques for modeling a data center are provided. In one aspect, a method for determining data center efficiency is provided. The method includes the following steps. Target parameters for the data center are obtained. Technology pre-requisite parameters for the data center are obtained. An optimum data center efficiency is determined given the target parameters for the data center and the technology pre-requisite parameters for the data center.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, I; Ahmad, S; Alsbou, N
Purpose: To develop a 4D cone-beam CT (CBCT) algorithm based on motion modeling that extracts the actual length, CT number level and motion amplitude of a mobile target retrospective to image reconstruction. Methods: The algorithm used three measurable parameters: the apparent length and the blurred CT number distribution of a mobile target obtained from CBCT images to determine the actual length, the CT-number value of the stationary target, and the motion amplitude. The predictions of this algorithm were tested with mobile targets of different well-known sizes made from tissue-equivalent gel inserted into a thorax phantom. The phantom moved sinusoidally in one direction to simulate respiratory motion using eight amplitudes ranging from 0 to 20 mm. Results: Using this 4D-CBCT algorithm, three unknown parameters were extracted: the length of the target, the CT number level, and the speed or motion amplitude of the mobile targets, retrospective to image reconstruction. The motion algorithm solved for the three unknown parameters using the measurable apparent length, CT number level and gradient of a well-defined mobile target obtained from CBCT images. The motion model agreed with the measured apparent lengths, which were dependent on the actual target length and motion amplitude. The gradient of the CT number distribution of the mobile target is dependent on the stationary CT number level, the actual target length and the motion amplitude. Motion frequency and phase did not affect the elongation and CT number distribution of the mobile target and could not be determined. Conclusion: A 4D-CBCT motion algorithm was developed to extract three parameters, the actual length, CT number level and motion amplitude or speed of mobile targets, directly from reconstructed CBCT images without prior knowledge of the stationary target parameters. This algorithm provides an alternative to 4D-CBCT without the requirement of motion tracking and sorting of the images into different breathing phases, and has potential applications in diagnostic CT imaging and radiotherapy.
Alsbou, Nesreen; Ahmad, Salahuddin; Ali, Imad
2016-05-17
A motion algorithm has been developed to extract the length, CT number level and motion amplitude of a mobile target from cone-beam CT (CBCT) images. The algorithm uses three measurable parameters: the apparent length and the blurred CT number distribution of a mobile target obtained from CBCT images to determine the length, the CT-number value of the stationary target, and the motion amplitude. The predictions of this algorithm are tested with mobile targets of different well-known sizes made from tissue-equivalent gel inserted into a thorax phantom. The phantom moves sinusoidally in one direction to simulate respiratory motion using eight amplitudes ranging from 0 to 20 mm. Using this motion algorithm, three unknown parameters are extracted: the length of the target, the CT number level, and the speed or motion amplitude of the mobile targets from CBCT images. The motion algorithm solves for the three unknown parameters using the measured length, CT number level and gradient of a well-defined mobile target obtained from CBCT images. The motion model agrees with the measured lengths, which depend on the target length and motion amplitude. The gradient of the CT number distribution of the mobile target depends on the stationary CT number level, the target length and the motion amplitude. Motion frequency and phase do not affect the elongation and CT number distribution of the mobile target and could not be determined. A motion algorithm has been developed to extract three parameters, length, CT number level and motion amplitude or speed of mobile targets, directly from reconstructed CBCT images without prior knowledge of the stationary target parameters. This algorithm provides an alternative to 4D-CBCT without the requirement of motion tracking and sorting of the images into different breathing phases. The motion model developed here works well for tumors that have simple shapes, high contrast relative to surrounding tissues, and nearly regular motion patterns that can be approximated with a simple sinusoidal function. This algorithm has potential applications in diagnostic CT imaging and radiotherapy in terms of motion management.
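As an illustration of the kind of forward model the two abstracts above describe, the sketch below blurs a uniform 1-D target under ideal sinusoidal motion and recovers the three unknowns by least squares. It is a hedged reconstruction, not the authors' implementation; the profile model, parameter values and function names are assumptions.

```python
# Minimal sketch of a sinusoidal-motion blur model: a uniform 1-D target of true
# length L and CT level C0 moving with amplitude A produces a time-averaged
# (blurred) CT profile; fitting that profile recovers (L, C0, A).
import numpy as np
from scipy.optimize import least_squares

def blurred_profile(x, length, level, amp, n_phase=720):
    """Time-averaged CT profile of a rect target under sinusoidal motion."""
    phases = np.linspace(0.0, 2.0 * np.pi, n_phase, endpoint=False)
    shifts = amp * np.sin(phases)                      # target centre positions
    inside = np.abs(x[:, None] - shifts[None, :]) <= length / 2.0
    return level * inside.mean(axis=1)                 # average over one cycle

# Synthetic "measurement": 40 mm target, 80 HU above background, 10 mm amplitude
x = np.linspace(-60, 60, 481)                          # mm
measured = blurred_profile(x, 40.0, 80.0, 10.0)

# Recover the three unknowns from the blurred profile alone
fit = least_squares(
    lambda p: blurred_profile(x, *p) - measured,
    x0=[30.0, 50.0, 5.0],                              # rough initial guess
    bounds=([1.0, 1.0, 0.0], [100.0, 500.0, 25.0]),
)
print("estimated length, CT level, amplitude:", fit.x)
```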
Tajima, Toshiki
2006-04-18
A system and method of accelerating ions in an accelerator to optimize the energy produced by a light source. Several parameters may be controlled in constructing a target used in the accelerator system to adjust performance of the accelerator system. These parameters include the material, thickness, geometry and surface of the target.
Horobin, R W; Stockert, J C; Rashid-Doubell, F
2015-05-01
We discuss a variety of biological targets including generic biomembranes and the membranes of the endoplasmic reticulum, endosomes/lysosomes, Golgi body, mitochondria (outer and inner membranes) and the plasma membrane of usual fluidity. For each target, we discuss the access of probes to the target membrane, probe uptake into the membrane and the mechanism of selectivity of the probe uptake. A statement of the QSAR decision rule that describes the required physicochemical features of probes that enable selective staining also is provided, followed by comments on exceptions and limits. Examples of probes typically used to demonstrate each target structure are noted and decision rule tabulations are provided for probes that localize in particular targets; these tabulations show distribution of probes in the conceptual space defined by the relevant structure parameters ("parameter space"). Some general implications and limitations of the QSAR models for probe targeting are discussed including the roles of certain cell and protocol factors that play significant roles in lipid staining. A case example illustrates the predictive ability of QSAR models. Key limiting values of the head group hydrophilicity parameter associated with membrane-probe interactions are discussed in an appendix.
Hunt, R.J.; Feinstein, D.T.; Pint, C.D.; Anderson, M.P.
2006-01-01
As part of the USGS Water, Energy, and Biogeochemical Budgets project and the NSF Long-Term Ecological Research work, a parameter estimation code was used to calibrate a deterministic groundwater flow model of the Trout Lake Basin in northern Wisconsin. Observations included traditional calibration targets (head, lake stage, and baseflow observations) as well as unconventional targets such as groundwater flows to and from lakes, depth of a lake water plume, and time of travel. The unconventional data types were important for parameter estimation convergence and allowed the development of a more detailed parameterization capable of resolving model objectives with well-constrained parameter values. Independent estimates of groundwater inflow to lakes were most important for constraining lakebed leakance and the depth of the lake water plume was important for determining hydraulic conductivity and conceptual aquifer layering. The most important target overall, however, was a conventional regional baseflow target that led to correct distribution of flow between sub-basins and the regional system during model calibration. The use of an automated parameter estimation code: (1) facilitated the calibration process by providing a quantitative assessment of the model's ability to match disparate observed data types; and (2) allowed assessment of the influence of observed targets on the calibration process. The model calibration required the use of a 'universal' parameter estimation code in order to include all types of observations in the objective function. The methods described in this paper help address issues of watershed complexity and non-uniqueness common to deterministic watershed models. © 2005 Elsevier B.V. All rights reserved.
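For reference, universal parameter-estimation codes of the kind described above (for example, PEST or UCODE) typically minimize a weighted least-squares objective of the generic form

$$ \Phi(\mathbf{p}) \;=\; \sum_{g}\sum_{i \in g} \Big[ w_{g,i}\,\big(o_{g,i} - s_{g,i}(\mathbf{p})\big) \Big]^{2}, $$

where $\mathbf{p}$ are the estimated parameters, $o_{g,i}$ and $s_{g,i}$ are the observed and simulated values in observation group $g$ (heads, lake stages, baseflows, lake fluxes, plume depth, travel time), and the weights $w_{g,i}$ place the disparate data types on a comparable footing. The grouping is this study's; the notation is generic.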
A transformation method for deriving from a photograph, position and heading of a vehicle in a plane
NASA Technical Reports Server (NTRS)
Sleeper, R. K.; Smith, E. G.
1976-01-01
Equations have been derived that transform perspectively viewed planar surface coordinates, as seen in a photograph, into coordinates of the original plane surface. These transformation equations are developed in terms of nine geometric variables that define the photographic setup and are redefined in terms of eight parameters. The parameters are then treated as independent quantities that fully characterize the transformation and are expressed directly in terms of the four corner coordinates of a reference rectangle in the object plane and their coordinates as seen in a photograph. Vehicle position is determined by transforming the perspectively viewed coordinate position of a representative vehicle target into runway coordinates. Vehicle heading is determined from the runway coordinates of two vehicle target points. When the targets are elevated above the plane of the reference grid, the computation of the heading angle is unaffected; however, the computation of the target position may require adjustment of two parameters. Methods are given for adjusting the parameters for elevation and an example is included for both nonelevated and elevated target conditions.
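A minimal sketch of the eight-parameter planar (projective) transformation the report describes, determined from the four reference-rectangle corners as seen in the photograph and then used to map an image point to runway coordinates. The corner coordinates, function names and solution method here are illustrative assumptions, not the report's notation.

```python
# Eight-parameter projective map (homography) from four point correspondences,
# then used to transform a perspectively viewed target point to plane coordinates.
import numpy as np

def solve_plane_transform(img_pts, plane_pts):
    """Solve the 8 transformation parameters from 4 corner correspondences."""
    A, b = [], []
    for (u, v), (X, Y) in zip(img_pts, plane_pts):
        A.append([u, v, 1, 0, 0, 0, -u * X, -v * X]); b.append(X)
        A.append([0, 0, 0, u, v, 1, -u * Y, -v * Y]); b.append(Y)
    p = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(p, 1.0).reshape(3, 3)             # [[a b c],[d e f],[g h 1]]

def image_to_plane(H, u, v):
    X, Y, W = H @ np.array([u, v, 1.0])
    return X / W, Y / W

# Example: reference-rectangle corners seen in the photo vs. true runway coords
img_corners   = [(102, 410), (530, 395), (585, 620), (60, 640)]   # pixels
plane_corners = [(0, 0), (30, 0), (30, 60), (0, 60)]              # metres
H = solve_plane_transform(img_corners, plane_corners)
print(image_to_plane(H, 300, 500))                                # a vehicle target
```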
Emissions-critical charge cooling using an organic rankine cycle
Ernst, Timothy C.; Nelson, Christopher R.
2014-07-15
The disclosure provides a system including a Rankine power cycle cooling subsystem providing emissions-critical charge cooling of an input charge flow. The system includes a boiler fluidly coupled to the input charge flow, an energy conversion device fluidly coupled to the boiler, a condenser fluidly coupled to the energy conversion device, a pump fluidly coupled to the condenser and the boiler, an adjuster that adjusts at least one parameter of the Rankine power cycle subsystem to change a temperature of the input charge exiting the boiler, and a sensor adapted to sense a temperature characteristic of the vaporized input charge. The system includes a controller that can determine a target temperature of the input charge sufficient to meet or exceed predetermined target emissions and cause the adjuster to adjust at least one parameter of the Rankine power cycle to achieve the predetermined target emissions.
Detection and Imaging of Moving Targets with LiMIT SAR Data
2017-03-03
include space-time adaptive processing (STAP) or displaced phase center antenna (DPCA) [4]–[7]. Page et al. combined constant-acceleration target ... motion focusing with space-time adaptive processing (STAP), and included the refocusing parameters in the STAP steering vector. Due to inhomogeneous ... wavelength λ and slow time t, of a moving target after matched filter and passband equalization processing can be expressed as: P(t) = exp(−j(4π/λ)||r_p ...
Scoping the parameter space for demo and the engineering test facility (ETF)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meier, Wayne R.
1999-01-19
In our IFE development plan, we have set a goal of building an Engineering Test Facility (ETF) for a total cost of $2B and a Demo for $3B. In Mike Campbell's presentation at Madison, we included a viewgraph with an example Demo that had 80 to 250 MWe of net power and showed a plausible argument that it could cost less than $3B. In this memo, I examine the design space for the Demo and then briefly for the ETF. Instead of attempting to estimate the costs of the drivers, I pose the question in a way to define R&D goals: As a function of key design and performance parameters, how much can the driver cost if the total facility cost is limited to the specified goal? The design parameters examined for the Demo included target gain, driver energy, driver efficiency, and net power output. For the ETF, the design parameters are target gain, driver energy, and target yield. The resulting graphs of allowable driver cost determine the goals that the driver R&D programs must seek to meet.
Radar Investigations of Asteroids
NASA Technical Reports Server (NTRS)
Ostro, S. J.
1984-01-01
Radar investigations of asteroids, including observations during 1984 to 1985 of at least 8 potential targets and continued analyses of radar data obtained during 1980 to 1984 for 30 other asteroids, are proposed. The primary scientific objectives include estimation of echo strength, polarization, spectral shape, spectral bandwidth, and Doppler shift. These measurements yield estimates of target size, shape, and spin vector; place constraints on topography, morphology, density, and composition of the planetary surface; yield refined estimates of target orbital parameters; and reveal the presence of asteroidal satellites.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, I; Ahmad, S; Alsbou, N
Purpose: A motion algorithm was developed to extract the actual length, CT numbers and motion amplitude of a mobile target imaged with cone-beam CT (CBCT), retrospective to image reconstruction. Methods: The motion model considered a mobile target moving with a sinusoidal motion and employed three measurable parameters: the apparent length, CT number level and gradient of a mobile target obtained from CBCT images to extract information about the actual length and CT number value of the stationary target and the motion amplitude. The algorithm was verified experimentally with a mobile phantom setup that has three targets of different sizes manufactured from homogeneous tissue-equivalent gel material embedded in a thorax phantom. The phantom moved sinusoidally in one direction using eight amplitudes (0–20 mm) and a frequency of 15 cycles per minute. The model required imaging parameters such as slice thickness and imaging time. Results: This motion algorithm extracted three unknown parameters, the length of the target, the CT number level and the motion amplitude, for a mobile target retrospective to CBCT image reconstruction. The algorithm relates the three unknown parameters to the measurable apparent length, CT number level and gradient of well-defined mobile targets obtained from CBCT images. The motion model agreed with the measured apparent lengths, which were dependent on the actual length of the target and the motion amplitude. The cumulative CT number for a mobile target was dependent on the CT number level of the stationary target and the motion amplitude. The gradient of the CT number distribution of a mobile target is dependent on the stationary CT number level, the actual target length along the direction of motion, and the motion amplitude. Motion frequency and phase did not affect the elongation and CT number distributions of mobile targets when the imaging time included several motion cycles. Conclusion: The motion algorithm developed in this study has potential applications in diagnostic CT imaging and radiotherapy to extract the actual length, size and CT numbers distorted by motion in CBCT imaging. The model provides further information about the motion of the target.
Li, Zhongyu; Wu, Junjie; Huang, Yulin; Yang, Haiguang; Yang, Jianyu
2017-01-23
Bistatic forward-looking SAR (BFSAR) is a kind of bistatic synthetic aperture radar (SAR) system that can image forward-looking terrain in the flight direction of an aircraft. Until now, BFSAR imaging theories and methods for a stationary scene have been researched thoroughly. However, for moving-target imaging with BFSAR, the non-cooperative movement of the moving target induces some new issues: (I) large and unknown range cell migration (RCM) (including range walk and high-order RCM); (II) the spatial-variances of the Doppler parameters (including the Doppler centroid and high-order Doppler) are not only unknown, but also nonlinear for different point-scatterers. In this paper, we put forward an adaptive moving-target imaging method for BFSAR. First, the large and unknown range walk is corrected by applying keystone transform over the whole received echo, and then, the relationships among the unknown high-order RCM, the nonlinear spatial-variances of the Doppler parameters, and the speed of the mover, are established. After that, using an optimization nonlinear chirp scaling (NLCS) technique, not only can the unknown high-order RCM be accurately corrected, but also the nonlinear spatial-variances of the Doppler parameters can be balanced. At last, a high-order polynomial filter is applied to compress the whole azimuth data of the moving target. Numerical simulations verify the effectiveness of the proposed method.
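For context, the keystone transform invoked in the abstract is the standard slow-time rescaling applied after the range FFT (written here in its usual generic form, not necessarily the paper's notation):

$$ \tau_m \;=\; \frac{f_c}{f_c + f_r}\, t_m, $$

so that the range-azimuth coupling phase $\exp\!\big[-j\tfrac{4\pi}{c}(f_c+f_r)\,v\,\tau_m\big]$ becomes $\exp\!\big[-j\tfrac{4\pi}{c}\,f_c\,v\,t_m\big]$, which no longer depends on the range frequency $f_r$; the linear range walk is thus removed without knowledge of the target velocity $v$.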
Trajectory Optimization for Missions to Small Bodies with a Focus on Scientific Merit.
Englander, Jacob A; Vavrina, Matthew A; Lim, Lucy F; McFadden, Lucy A; Rhoden, Alyssa R; Noll, Keith S
2017-01-01
Trajectory design for missions to small bodies is tightly coupled both with the selection of targets for a mission and with the choice of spacecraft power, propulsion, and other hardware. Traditional methods of trajectory optimization have focused on finding the optimal trajectory for an a priori selection of destinations and spacecraft parameters. Recent research has expanded the field of trajectory optimization to multidisciplinary systems optimization that includes spacecraft parameters. The logical next step is to extend the optimization process to include target selection based not only on engineering figures of merit but also scientific value. This paper presents a new technique to solve the multidisciplinary mission optimization problem for small-bodies missions, including classical trajectory design, the choice of spacecraft power and propulsion systems, and also the scientific value of the targets. This technique, when combined with modern parallel computers, enables a holistic view of the small body mission design process that previously required iteration among several different design processes.
Integrated Targeting and Guidance for Powered Planetary Descent
NASA Astrophysics Data System (ADS)
Azimov, Dilmurat M.; Bishop, Robert H.
2018-02-01
This paper presents an on-board guidance and targeting design that enables explicit state and thrust vector control and on-board targeting for planetary descent and landing. These capabilities are developed utilizing a new closed-form solution for the constant thrust arc of the braking phase of the powered descent trajectory. The key elements of proven targeting and guidance architectures, including braking and approach phase quartics, are employed. It is demonstrated that implementation of the proposed solution avoids numerical simulation iterations, thereby facilitating on-board execution of targeting procedures during the descent. It is shown that the shape of the braking phase constant thrust arc is highly dependent on initial mass and propulsion system parameters. The analytic solution process is explicit in terms of targeting and guidance parameters, while remaining generic with respect to planetary body and descent trajectory design. These features increase the feasibility of extending the proposed integrated targeting and guidance design to future cargo and robotic landing missions.
NASA Astrophysics Data System (ADS)
Webb, Kevin; Gaind, Vaibhav; Tsai, Hsiaorho; Bentz, Brian; Chelvam, Venkatesh; Low, Philip
2012-02-01
We describe an approach for the evaluation of targeted anti-cancer drug delivery in vivo. The method emulates the drug release and activation process through acceptor release from a targeted donor-acceptor pair that exhibits fluorescence resonance energy transfer (FRET). In this case, folate targeting of the cancer cells is used - 40 % of all human cancers, including ovarian, lung, breast, kidney, brain and colon cancer, over-express folate receptors. We demonstrate the reconstruction of the spatially-dependent FRET parameters in a mouse model and in tissue phantoms. The FRET parameterization is incorporated into a source for a diffusion equation model for photon transport in tissue, in a variant of optical diffusion tomography (ODT) called FRET-ODT. In addition to the spatially-dependent tissue parameters in the diffusion model (absorption and diffusion coefficients), the FRET parameters (donor-acceptor distance and yield) are imaged as a function of position. Modulated light measurements are made with various laser excitation positions and a gated camera. More generally, our method provides a new vehicle for studying disease at the molecular level by imaging FRET parameters in deep tissue, and allows the nanometer FRET ruler to be utilized in deep tissue.
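The photon-transport model referenced above is typically the frequency-domain diffusion approximation, written generically below; the authors' exact parameterization of the FRET donor-acceptor source term is not reproduced here.

$$ -\nabla\!\cdot\!\big[D(\mathbf{r})\,\nabla\Phi(\mathbf{r},\omega)\big] \;+\; \Big(\mu_a(\mathbf{r}) + \tfrac{j\omega}{c}\Big)\,\Phi(\mathbf{r},\omega) \;=\; S(\mathbf{r},\omega), \qquad D = \frac{1}{3\,(\mu_a + \mu_s')}, $$

with $\Phi$ the modulated photon fluence, $\mu_a$ and $\mu_s'$ the absorption and reduced scattering coefficients, and the spatially dependent FRET parameters (donor-acceptor distance and yield) entering through the source term $S$.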
Data communications in a parallel active messaging interface of a parallel computer
Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E
2015-02-03
Data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, endpoints coupled for data communications through the PAMI and through data communications resources, including receiving in an origin endpoint of the PAMI a SEND instruction, the SEND instruction specifying a transmission of transfer data from the origin endpoint to a first target endpoint; transmitting from the origin endpoint to the first target endpoint a Request-To-Send (`RTS`) message advising the first target endpoint of the location and size of the transfer data; assigning by the first target endpoint to each of a plurality of target endpoints separate portions of the transfer data; and receiving by the plurality of target endpoints the transfer data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galambos, John D.; Anderson, David E.; Bechtol, D.
The Second Target Station (STS) is a proposed upgrade for SNS. It includes a doubling of the accelerator power and an additional instrument hall. The new instrument hall will receive a 467 kW, 10 Hz beam. The parameters and preliminary design aspects of the STS are presented for the accelerator, target systems, instrument hall, instruments, and civil construction.
An Analytic Model for the Success Rate of a Robotic Actuator System in Hitting Random Targets.
Bradley, Stuart
2015-11-20
Autonomous robotic systems are increasingly being used in a wide range of applications such as precision agriculture, medicine, and the military. These systems have common features, which often include an action by an "actuator" interacting with a target. While simulations and measurements exist for the success rate of hitting targets by some systems, there is a dearth of analytic models which can give insight into, and guidance on optimization of, new robotic systems. The present paper develops a simple model for estimating the success rate of hitting random targets from a moving platform. The model has two main dimensionless parameters: the ratio of actuator spacing to target diameter, and the ratio of platform distance moved (between actuator "firings") to the target diameter. It is found that regions of parameter space having a specified high success rate are described by simple equations, providing guidance on design. The role of a "cost function" is introduced which, when minimized, provides optimization of design, operating, and risk mitigation costs.
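In the notation used here (the symbols are illustrative; the paper defines its own), the two dimensionless parameters are

$$ \alpha = \frac{s}{d}, \qquad \beta = \frac{\Delta x}{d}, $$

with $s$ the actuator spacing, $\Delta x$ the platform distance moved between firings, and $d$ the target diameter; the high-success regions reported in the paper are then regions in the $(\alpha,\beta)$ plane.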
Fotina, I; Lütgendorf-Caucig, C; Stock, M; Pötter, R; Georg, D
2012-02-01
Inter-observer studies represent a valid method for the evaluation of target definition uncertainties and contouring guidelines. However, data from the literature do not yet give clear guidelines for reporting contouring variability. Thus, the purpose of this work was to compare and discuss various methods to determine variability on the basis of clinical cases and a literature review. In this study, 7 prostate and 8 lung cases were contoured on CT images by 8 experienced observers. Analysis of variability included descriptive statistics, calculation of overlap measures, and statistical measures of agreement. Cross tables with ratios and correlations were established for overlap parameters. It was shown that the minimal set of parameters to be reported should include at least one of three volume overlap measures (i.e., generalized conformity index, Jaccard coefficient, or conformation number). High correlation between these parameters and scatter of the results was observed. A combination of descriptive statistics, overlap measure, and statistical measure of agreement or reliability analysis is required to fully report the interrater variability in delineation.
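As a concrete illustration of the overlap measures recommended above, the sketch below computes the Jaccard coefficient and one common definition of the generalized conformity index from boolean voxel masks; it is a hedged example, not the study's analysis code.

```python
# Overlap measures for inter-observer contour comparison on voxel masks.
import numpy as np
from itertools import combinations

def jaccard(a, b):
    """Jaccard coefficient |A ∩ B| / |A ∪ B| for two boolean voxel masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def generalized_conformity_index(masks):
    """Sum of pairwise intersections over sum of pairwise unions (all observer pairs)."""
    inter = sum(np.logical_and(a, b).sum() for a, b in combinations(masks, 2))
    union = sum(np.logical_or(a, b).sum() for a, b in combinations(masks, 2))
    return inter / union

# Toy example: three observers contouring on a 1-D "image"
m1 = np.array([0, 1, 1, 1, 0], bool)
m2 = np.array([0, 1, 1, 0, 0], bool)
m3 = np.array([1, 1, 1, 1, 0], bool)
print(jaccard(m1, m2), generalized_conformity_index([m1, m2, m3]))
```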
Dianat, Seyed Saeid; Carter, H Ballentine; Schaeffer, Edward M; Hamper, Ulrik M; Epstein, Jonathan I; Macura, Katarzyna J
2015-10-01
The purpose of this pilot study was to correlate quantitative parameters derived from multiparametric magnetic resonance imaging (MP-MRI) of the prostate with results from MRI-guided transrectal ultrasound (MRI/TRUS) fusion prostate biopsy in men with suspected prostate cancer. Thirty-nine consecutive patients who had 3.0T MP-MRI and subsequent MRI/TRUS fusion prostate biopsy were included, and 73 MRI-identified targets were sampled by 177 cores. The pre-biopsy MP-MRI consisted of T2-weighted, diffusion weighted (DWI), and dynamic contrast enhanced (DCE) images. The association of quantitative MRI measurements with biopsy histopathology findings was assessed by the Mann-Whitney U-test and the Kruskal-Wallis test. Of 73 targets, biopsy showed benign prostate tissue in 46 (63%), cancer in 23 (31.5%), and atypia/high grade prostatic intraepithelial neoplasia in four (5.5%) targets. The median volume of cancer-positive targets was 1.3 cm3. The cancer-positive targets were located in the peripheral zone (56.5%), transition zone (39.1%), and seminal vesicle (4.3%). Nine of 23 (39.1%) cancer-positive targets were higher grade cancer (Gleason grade > 6). Higher grade targets and cancer-positive targets, compared to benign lesions, exhibited a lower mean apparent diffusion coefficient (ADC) value (952.7 < 1167.9 < 1278.9) and a lower minimal extracellular volume fraction (ECF) (0.13 < 0.185 < 0.213), respectively. The difference in parameters was more pronounced between higher grade cancer and benign lesions. Our findings from a pilot study indicate that quantitative MRI parameters can predict malignant histology on MRI/TRUS fusion prostate biopsy, which is a valuable technique to ensure adequate sampling of MRI-visible suspicious lesions under TRUS guidance and may impact patient management. The DWI-based quantitative measurement exhibits a stronger association with biopsy findings than the other MRI parameters.
Calibration of the Nikon 200 for Close Range Photogrammetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheriff, Lassana; /City Coll., N.Y. /SLAC
2010-08-25
The overall objective of this project is to study the stability and reproducibility of the calibration parameters of the Nikon D200 camera with a Nikkor 20 mm lens for close-range photogrammetric surveys. The well-known 'central perspective projection' model is used to determine the camera parameters for interior orientation. The Brown model extends it with the introduction of radial distortion and other less critical variables. The calibration process requires a dense network of targets to be photographed at different angles. For faster processing, reflective coded targets are chosen. Two scenarios have been used to check the reproducibility of the parameters. The first one is a flat 2D wall with 141 coded targets and 12 custom targets that were previously measured with a laser tracker. The second one is a 3D Unistrut structure with a combination of coded targets and 3D reflective spheres. The study has shown that this setup is only stable during a short period of time. In conclusion, this camera is acceptable when calibrated before each use. Future work should include actual field tests and possible mechanical improvements, such as securing the lens to the camera body.
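For reference, the Brown model mentioned above augments the central perspective projection with radial (and, in its full form, decentring) distortion; the radial part in its usual form is

$$ \Delta r \;=\; K_1 r^3 + K_2 r^5 + K_3 r^7, \qquad r = \sqrt{(x - x_p)^2 + (y - y_p)^2}, $$

where $(x_p, y_p)$ is the principal point and $K_1, K_2, K_3$ are the radial distortion coefficients estimated during calibration (standard notation, not specific to this report).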
Moving target feature phenomenology data collection at China Lake
NASA Astrophysics Data System (ADS)
Gross, David C.; Hill, Jeff; Schmitz, James L.
2002-08-01
This paper describes the DARPA Moving Target Feature Phenomenology (MTFP) data collection conducted at the China Lake Naval Weapons Center's Junction Ranch in July 2001. The collection featured both X-band and Ku-band radars positioned on top of Junction Ranch's Parrot Peak. The test included seven targets used in eleven configurations with vehicle motion consisting of circular, straight-line, and 90-degree turning motion. Data was collected at 10-degree and 17-degree depression angles. Key parameters in the collection were polarization, vehicle speed, and road roughness. The collection also included a canonical target positioned at Junction Ranch's tilt-deck turntable. The canonical target included rotating wheels (military truck tire and civilian pick-up truck tire) and a flat plate with variable positioned corner reflectors. The canonical target was also used to simulate a rotating antenna and a vibrating plate. The target vehicles were instrumented with ARDS pods for differential GPS and roll, pitch and yaw measurements. Target motion was also documented using a video camera slaved to the X-band radar antenna and by a video camera operated near the target site.
Target-in-the-loop remote sensing of laser beam and atmospheric turbulence characteristics.
Vorontsov, Mikhail A; Lachinova, Svetlana L; Majumdar, Arun K
2016-07-01
A new target-in-the-loop (TIL) atmospheric sensing concept for in situ remote measurements of major laser beam characteristics and atmospheric turbulence parameters is proposed and analyzed numerically. The technique is based on an integral relationship between the complex amplitudes of the counterpropagating optical waves, known as the overlapping integral or interference metric, whose value is preserved along the propagation path. It is shown that the interference metric can be directly measured using the proposed TIL sensing system, composed of a single-mode fiber-based optical transceiver and a remotely located retro-target. The measured signal allows retrieval of key beam and atmospheric turbulence characteristics, including the scintillation index and the path-integrated refractive index structure parameter.
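The path-preserved quantity the abstract calls the overlapping integral has the generic form (notation illustrative):

$$ J \;=\; \int A_{\mathrm{out}}(\mathbf{r}, z)\, A_{\mathrm{ret}}(\mathbf{r}, z)\, d^2\mathbf{r}, $$

i.e., the overlap of the complex amplitudes of the outgoing and counterpropagating (target-return) waves evaluated in a transverse plane at any distance $z$ along the path; its invariance with $z$ is what allows the single-mode fiber transceiver to measure it at the transmitter end.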
NASA Astrophysics Data System (ADS)
Fallahpour, Mojtaba Behzad; Dehghani, Hamid; Jabbar Rashidi, Ali; Sheikhi, Abbas
2018-05-01
Target recognition is one of the most important issues in the interpretation of synthetic aperture radar (SAR) images. Modelling, analysis, and recognition of the effects of influential parameters in SAR can provide a better understanding of SAR imaging systems, and therefore facilitates the interpretation of the produced images. Influential parameters in SAR images can be divided into five general categories: radar, radar platform, channel, imaging region, and processing section, each of which has different physical, structural, hardware, and software sub-parameters with clear roles in the finally formed images. In this paper, for the first time, a behaviour library is extracted that includes the effects of polarisation, incidence angle, and shape of targets, as radar and imaging-region sub-parameters, on SAR images. This library shows that the created pattern for each of the cylindrical, conical, and cubic shapes is unique, and due to their unique properties these types of shapes can be recognised in SAR images. This capability is applied to data acquired with the Canadian RADARSAT1 satellite.
The UXO Discrimination Study at the Former Camp Sibert
2009-01-01
extrinsic and intrinsic parameters of the target of interest. Extrinsic parameters include the target's location (easting and northing), orientation and ... subset of these points. The remaining points are discussed in the summary final report produced by ESTCP [15].
An integrated control scheme for space robot after capturing non-cooperative target
NASA Astrophysics Data System (ADS)
Wang, Mingming; Luo, Jianjun; Yuan, Jianping; Walter, Ulrich
2018-06-01
How to identify the mass properties and eliminate the unknown angular momentum of a space robotic system after capturing a non-cooperative target is a great challenge. This paper focuses on designing an integrated control framework which includes a detumbling strategy, coordination control and parameter identification. Firstly, inverted and forward chain approaches are synthesized for the space robot to obtain the dynamic equation in operational space. Secondly, a detumbling strategy is introduced using elementary functions with normalized time, while the imposed end-effector constraints are considered. Next, a coordination control scheme for stabilizing both the base and the end-effector, based on impedance control, is implemented under the target's parameter uncertainty. With the measurements of the forces and torques exerted on the target, its mass properties are estimated during the detumbling process accordingly. Simulation results are presented using a 7 degree-of-freedom kinematically redundant space manipulator, which verifies the performance and effectiveness of the proposed method.
Coherence parameter measurements for neon and hydrogen
NASA Astrophysics Data System (ADS)
Wright, Robert; Hargreaves, Leigh; Khakoo, Murtadha; Zatsarinny, Oleg; Bartschat, Klaus; Stauffer, Al
2015-09-01
We present recent coherence parameter measurements for excitation of neon and hydrogen by 50 eV electrons. The measurements were made using a crossed electron/gas beam spectrometer, featuring a hemispherical electron energy analyzer for detecting scattered electrons and a double-reflection VUV polarization analyzer to register fluorescence photons. Time-coincidence counting of the electron and photon signals was employed to determine the Stokes parameters at each scattering angle, with data measured at angles between 20 and 115 degrees. The data are compared with calculated results using the B-Spline R-Matrix (BSR) and Relativistic Distorted Wave (RDW) approaches. Measurements were made of both the linear (P_lin and γ) and circular (L_⊥) parameters for the lowest lying excited states in these two targets. We particularly focus on results for the L_⊥ parameter, which shows unusual behavior in these particular targets, including strong sign changes implying reversal of the angular momentum transfer. In the case of neon, the unusual behavior is well captured by the BSR, but not by other models.
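For reference, in the conventions commonly used for electron-photon coincidence work (sign conventions vary between groups, so this is a generic statement rather than the authors' definitions), the measured Stokes parameters $P_1, P_2, P_3$ determine the coherence parameters as

$$ P_{\mathrm{lin}} = \sqrt{P_1^2 + P_2^2}, \qquad \gamma = \tfrac{1}{2}\arctan\!\frac{P_2}{P_1}, \qquad L_\perp = -P_3 . $$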
Perturbing engine performance measurements to determine optimal engine control settings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan
Methods and systems for optimizing a performance of a vehicle engine are provided. The method includes determining an initial value for a first engine control parameter based on one or more detected operating conditions of the vehicle engine, determining a value of an engine performance variable, and artificially perturbing the determined value of the engine performance variable. The initial value for the first engine control parameter is then adjusted based on the perturbed engine performance variable causing the engine performance variable to approach a target engine performance variable. Operation of the vehicle engine is controlled based on the adjusted initial value for the first engine control parameter. These acts are repeated until the engine performance variable approaches the target engine performance variable.
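The loop described in the claim can be sketched as below. This is an illustrative toy, with a hypothetical engine model, parameter names and gains chosen purely for demonstration; it is not the patented control logic.

```python
# Perturb-and-adjust loop: a control parameter is nudged until a (deliberately
# perturbed) measured performance variable approaches its target.
import random

def measure_performance(control_param):
    # Stand-in for the real engine measurement: peak performance near 0.6
    return 100.0 - 40.0 * (control_param - 0.6) ** 2

def tune(target, control_param=0.2, gain=0.002, perturb=0.5, iters=200):
    measured = measure_performance(control_param)
    for _ in range(iters):
        measured = measure_performance(control_param)
        # "Artificially perturb" the determined value (e.g., added dither)
        perturbed = measured + random.uniform(-perturb, perturb)
        error = target - perturbed
        control_param += gain * error        # adjust toward the target variable
        if abs(error) < perturb:             # close enough, within the dither band
            break
    return control_param, measured

print(tune(target=95.0))
```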
Design of the Blood Pressure Goals in Dialysis pilot study.
Gul, Ambreen; Miskulin, Dana; Gassman, Jennifer; Harford, Antonia; Horowitz, Bruce; Chen, Joline; Paine, Susan; Bedrick, Edward; Kusek, John W; Unruh, Mark; Zager, Philip
2014-02-01
Cardiovascular disease (CVD) is markedly increased among hemodialysis (HD) patients. Optimizing blood pressure (BP) among HD patients may present an important opportunity to reduce the disparity in CVD rates between HD patients and the general population. The optimal target predialysis systolic BP (SBP) among HD patients is unknown. Current international guidelines, calling for a predialysis SBP < 140 mm Hg, are based on the opinion and extrapolation from the general population. Existing randomized controlled trials (RCTs) were small and did not include prespecified BP targets. The authors described the design of the Blood Pressure in Dialysis (BID) Study, a pilot, multicenter RCT where HD patients are randomized to either a target-standardized predialysis SBP of 110 to 140 mm Hg or 155 to 165 mm Hg. This is the first study to randomize HD patients to 2 different SBP targets. Primary outcomes are feasibility and safety. Feasibility parameters include recruitment and retention rates, adherence with prescribed BP measurements and achievement and maintenance of selected BP targets. Safety parameters include rates of hypotension and other adverse and serious adverse events. The authors obtained preliminary data on changes in left ventricular mass, aortic pulse wave velocity, vascular access thromboses and health-related quality of life across study arms, which may be the secondary outcomes in the full-scale study. The data acquired in the pilot RCT will determine the feasibility and safety and inform the design of a full-scale trial, powered for hard outcomes, which may require 2000 participants.
M$^3$: A New Muon Missing Momentum Experiment to Probe $(g-2)_\mu$ and Dark Matter at Fermilab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kahn, Yonatan; Krnjaic, Gordan; Tran, Nhan
New light, weakly-coupled particles are commonly invoked to address the persistent $\sim 4\sigma$ anomaly in $(g-2)_\mu$ and serve as mediators between dark and visible matter. If such particles couple predominantly to heavier generations and decay invisibly, much of their best-motivated parameter space is inaccessible with existing experimental techniques. In this paper, we present a new fixed-target, missing-momentum search strategy to probe invisibly decaying particles that couple preferentially to muons. In our setup, a relativistic muon beam impinges on a thick active target. The signal consists of events in which a muon loses a large fraction of its incident momentum inside the target without initiating any detectable electromagnetic or hadronic activity in downstream veto systems. We propose a two-phase experiment, M$^3$ (Muon Missing Momentum), based at Fermilab. Phase 1 with $\sim 10^{10}$ muons on target can test the remaining parameter space for which light invisibly-decaying particles can resolve the $(g-2)_\mu$ anomaly, while Phase 2 with $\sim 10^{13}$ muons on target can test much of the predictive parameter space over which sub-GeV dark matter achieves freeze-out via muon-philic forces, including gauged $U(1)_{L_\mu - L_\tau}$.
Progress on LMJ targets for ignition
NASA Astrophysics Data System (ADS)
Cherfils-Clérouin, C.; Boniface, C.; Bonnefille, M.; Dattolo, E.; Galmiche, D.; Gauthier, P.; Giorla, J.; Laffite, S.; Liberatore, S.; Loiseau, P.; Malinie, G.; Masse, L.; Masson-Laborde, P. E.; Monteil, M. C.; Poggi, F.; Seytor, P.; Wagon, F.; Willien, J. L.
2009-12-01
Targets designed to produce ignition on the Laser Megajoule (LMJ) are being simulated in order to set specifications for target fabrication. The LMJ experimental plans include attempting ignition and burn of an ICF capsule with 160 laser beams, delivering up to 1.4 MJ and 380 TW. New targets requiring reduced laser energy with only a small decrease in robustness have been designed for this purpose. Working specifically on the coupling efficiency parameter, i.e. the ratio of the energy absorbed by the capsule to the laser energy, has led to the design of a rugby-ball shaped cocktail hohlraum; with these improvements, a target based on the 240-beam A1040 capsule can be included in the 160-beam laser energy-power space. Robustness evaluations of these different targets shed light on critical points for ignition, which can be traded off by tightening some specifications or through preliminary experimental and numerical tuning experiments.
2012-03-22
shapes tested, when the objective parameter set was confined to a dictionary's defined parameter space. These physical characteristics included ... Hypothesis Testing and Detection Theory ... 3-D SAR Scattering Models ... basis pursuit de-noising (BPDN) algorithm is chosen to perform extraction due to inherent efficiency and error tolerance. Multiple shape dictionaries
Initial study of Schroedinger eigenmaps for spectral target detection
NASA Astrophysics Data System (ADS)
Dorado-Munoz, Leidy P.; Messinger, David W.
2016-08-01
Spectral target detection refers to the process of searching for a specific material with a known spectrum over a large area containing materials with different spectral signatures. Traditional target detection methods in hyperspectral imagery (HSI) require assuming that the data fit some statistical or geometric model and, based on that model, estimating parameters to define a hypothesis test in which one class (the target class) is chosen over the others (the background class). Nonlinear manifold learning methods such as Laplacian eigenmaps (LE) have extensively shown their potential in HSI processing, specifically in classification or segmentation. Recently, Schroedinger eigenmaps (SE), which is built upon LE, has been introduced as a semisupervised classification method. In SE, the former Laplacian operator is replaced by the Schroedinger operator. The Schroedinger operator includes, by definition, a potential term V that steers the transformation in certain directions, improving the separability between classes. In this regard, we propose a methodology for target detection that is not based on the traditional schemes and that does not need the estimation of statistical or geometric parameters. This method is based on SE, where the potential term V is used to include prior knowledge about the target class and to steer the transformation in directions where the target location in the new space is known and the separability between target and background is augmented. An initial study of how SE can be used in a target detection scheme for HSI is shown here. In-scene pixel and spectral signature detection approaches are presented. The HSI data used comprise various target panels for testing simultaneous detection of multiple objects with different complexities.
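In the usual formulation (notation generic, not necessarily the authors'), Schroedinger eigenmaps replaces the Laplacian eigenmaps problem $L f = \lambda D f$ with

$$ (L + \alpha V)\, f \;=\; \lambda\, D f, $$

where $L = D - W$ is the graph Laplacian built from the spectral-similarity graph, $D$ the degree matrix, $V$ a diagonal potential encoding the prior target knowledge (labeled pixels or an in-scene target spectrum), and $\alpha$ controls how strongly the potential steers the embedding.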
Detection of multiple airborne targets from multisensor data
NASA Astrophysics Data System (ADS)
Foltz, Mark A.; Srivastava, Anuj; Miller, Michael I.; Grenander, Ulf
1995-08-01
Previously we presented a jump-diffusion based random sampling algorithm for generating conditional mean estimates of scene representations for the tracking and recognition of maneuvering airborne targets. These representations include target positions and orientations along their trajectories and the target type associated with each trajectory. Taking a Bayesian approach, a posterior measure is defined on the parameter space by combining sensor models with a sophisticated prior based on nonlinear airplane dynamics. The jump-diffusion algorithm constructs a Markov process which visits the elements of the parameter space with frequencies proportional to the posterior probability. It constitutes both the infinitesimal, local search via a sample-path continuous diffusion transform and the larger, global steps through discrete jump moves. The jump moves involve the addition and deletion of elements from the scene configuration or changes in the target type associated with each target trajectory. One such move results in target detection by the addition of a track seed to the inference set. This provides initial track data for the tracking/recognition algorithm to estimate linear graph structures representing tracks using the other jump moves and the diffusion process, as described in our earlier work. Target detection ideally involves a continuous search over a continuum of the observation space. In this work we conclude that for practical implementations the search space must be discretized with lattice granularity comparable to sensor resolution, and discuss how fast Fourier transforms are utilized for efficient calculation of sufficient statistics given our array models. Some results are also presented from our implementation on a networked system including a massively parallel machine architecture and a Silicon Graphics Onyx workstation.
The simulation study on optical target laser active detection performance
NASA Astrophysics Data System (ADS)
Li, Ying-chun; Hou, Zhao-fei; Fan, Youchen
2014-12-01
Based on the working principle of a laser active detection system, this paper establishes an optical-target laser active detection simulation system and carries out a simulation study of the detection process and detection performance. Performance models are built for laser emission, laser propagation in the atmosphere, reflection from the optical target, the receiver detection system, and signal processing and recognition. We focus on analyzing and modeling the relationship between the laser emission angle, the defocus amount, and the "cat-eye" effect echo returned by the optical target. Performance indices such as operating range, SNR, and detection probability are also simulated. The parameters with the greatest influence on system performance include the laser emission parameters, the reflection properties of the optical target, and the laser propagation in the atmosphere. Finally, using object-oriented software design methods, an open, fully functional simulation platform is implemented that simulates the detection and recognition of the optical target, performs the performance simulation of each subsystem, and generates data reports and graphs. The visible simulation process makes the performance models more intuitive, the simulation data provide a reference for adjusting system parameters, and the results offer theoretical and technical support for the top-level design and performance-index optimization of the optical-target laser active detection system.
Method calibration of the model 13145 infrared target projectors
NASA Astrophysics Data System (ADS)
Huang, Jianxia; Gao, Yuan; Han, Ying
2014-11-01
The SBIR Model 13145 Infrared Target Projector (hereafter the Evaluation Unit) is used for characterizing the performance of infrared imaging systems. Test items include SiTF, MTF, NETD, MRTD, MDTD, and NPS. The infrared target projector includes two area blackbodies, a 12-position target wheel, and an all-reflective collimator. It provides high-spatial-frequency differential targets; the precision differential targets are imaged by the infrared imaging system and converted photoelectrically into analog or digital signals. Application software (IRWindows™ 2001) evaluates the performance of the infrared imaging system. For calibration of the unit as a whole, the distributed components are first calibrated separately: the area blackbodies are calibrated according to their calibration specification, error correction factors are applied to calibrate the all-reflective collimator, the radiance of the infrared target projector is calibrated using the SR5000 spectral radiometer, and the systematic errors are analyzed. For the parameters of the infrared imaging system, an integrated evaluation method is required. Following GJB 2340-1995, "General specification for military thermal imaging sets," the tested parameters of the infrared imaging system are compared with results from an Optical Calibration Testing Laboratory, with the goal of achieving a true calibration of the performance of the Evaluation Unit.
Research on polarization imaging information parsing method
NASA Astrophysics Data System (ADS)
Yuan, Hongwu; Zhou, Pucheng; Wang, Xiaolong
2016-11-01
Polarization information parsing plays an important role in polarization imaging detection. This paper focuses on methods for parsing polarization information. First, the general processing chain is given, mainly including polarization image preprocessing, calculation of multiple polarization parameters, polarization image fusion, and polarization image tracking. Research results are then presented for each stage. For polarization image preprocessing, a polarization image registration method based on maximum mutual information is designed; experiments show that it improves registration precision and satisfies the needs of polarization information parsing. For the calculation of multiple polarization parameters, an omnidirectional polarization inversion model is built, from which a variety of polarization parameter images are obtained with markedly improved inversion precision. For polarization image fusion, an adaptive optimal fusion method for multiple polarization parameters is given using fuzzy integrals and sparse representation, and target detection in complex scenes is completed using a clustering-based image segmentation algorithm built on fractal characteristics. For polarization image tracking, a fusion tracking algorithm combining mean-shift ("average displacement") polarization image features with auxiliary particle filtering is put forward to achieve smooth tracking of moving targets. Finally, the polarization information parsing method is applied to the polarization imaging detection of typical targets such as camouflaged targets, fog, and latent fingerprints.
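A hedged illustration of the "multiple polarization parameters calculation" step described above: degree of linear polarization (DoLP) and angle of polarization (AoP) from four analyzer-orientation images. This is the textbook relation, not the authors' omnidirectional inversion model; names and the toy data are assumptions.

```python
# Stokes I, Q, U and derived DoLP / AoP from 0/45/90/135-degree analyzer images.
import numpy as np

def polarization_parameters(i0, i45, i90, i135):
    i0, i45, i90, i135 = (np.asarray(x, float) for x in (i0, i45, i90, i135))
    I = 0.5 * (i0 + i45 + i90 + i135)
    Q = i0 - i90
    U = i45 - i135
    dolp = np.sqrt(Q**2 + U**2) / np.maximum(I, 1e-12)   # degree of linear polarization
    aop = 0.5 * np.arctan2(U, Q)                         # angle of polarization, radians
    return I, Q, U, dolp, aop

# Toy 2x2 example
I, Q, U, dolp, aop = polarization_parameters(
    [[10, 8], [6, 5]], [[9, 7], [6, 5]], [[4, 6], [6, 5]], [[5, 7], [6, 5]])
print(dolp, np.degrees(aop))
```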
NASA Astrophysics Data System (ADS)
Small, Meagan C.; Aytenfisu, Asaminew H.; Lin, Fang-Yu; He, Xibing; MacKerell, Alexander D.
2017-04-01
The majority of computer simulations exploring biomolecular function employ Class I additive force fields (FF), which do not treat polarization explicitly. Accordingly, much effort has been made into developing models that go beyond the additive approximation. Development and optimization of the Drude polarizable FF has yielded parameters for selected lipids, proteins, DNA and a limited number of carbohydrates. The work presented here details parametrization of aliphatic aldehydes and ketones (viz. acetaldehyde, propionaldehyde, butyraldehyde, isobutyraldehyde, acetone, and butanone) as well as their associated acyclic sugars (d-allose and d-psicose). LJ parameters are optimized targeting experimental heats of vaporization and molecular volumes, while the electrostatic parameters are optimized targeting QM water interactions, dipole moments, and molecular polarizabilities. Bonded parameters are targeted to both QM and crystal survey values, with the models for ketones and aldehydes shown to be in good agreement with QM and experimental target data. The reported heats of vaporization and molecular volumes represent a compromise between the studied model compounds. Simulations of the model compounds show an increase in the magnitude and the fluctuations of the dipole moments in moving from gas phase to condensed phases, which is a phenomenon that the additive FF is intrinsically unable to reproduce. The result is a polarizable model for aliphatic ketones and aldehydes including the acyclic sugars d-allose and d-psicose, thereby extending the available biomolecules in the Drude polarizable FF.
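For reference, the heats of vaporization targeted during the LJ optimization are typically computed from gas- and condensed-phase simulations via the standard relation (not specific to this paper):

$$ \Delta H_{\mathrm{vap}} \;=\; \langle U_{\mathrm{gas}} \rangle \;-\; \frac{\langle U_{\mathrm{liq}} \rangle}{N} \;+\; RT, $$

with $\langle U \rangle$ the average potential energies, $N$ the number of molecules in the liquid box, and the molecular volume taken as $\langle V_{\mathrm{liq}} \rangle / N$.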
González-Díaz, Humberto; Munteanu, Cristian R; Postelnicu, Lucian; Prado-Prado, Francisco; Gestal, Marcos; Pazos, Alejandro
2012-03-01
Lipid-Binding Proteins (LIBPs) or Fatty Acid-Binding Proteins (FABPs) play an important role in many diseases such as different types of cancer, kidney injury, atherosclerosis, diabetes, intestinal ischemia and parasitic infections. Thus, the computational methods that can predict LIBPs based on 3D structure parameters became a goal of major importance for drug-target discovery, vaccine design and biomarker selection. In addition, the Protein Data Bank (PDB) contains 3000+ protein 3D structures with unknown function. This list, as well as new experimental outcomes in proteomics research, is a very interesting source to discover relevant proteins, including LIBPs. However, to the best of our knowledge, there are no general models to predict new LIBPs based on 3D structures. We developed new Quantitative Structure-Activity Relationship (QSAR) models based on 3D electrostatic parameters of 1801 different proteins, including 801 LIBPs. We calculated these electrostatic parameters with the MARCH-INSIDE software and they correspond to the entire protein or to specific protein regions named core, inner, middle, and surface. We used these parameters as inputs to develop a simple Linear Discriminant Analysis (LDA) classifier to discriminate 3D structure of LIBPs from other proteins. We implemented this predictor in the web server named LIBP-Pred, freely available at , along with other important web servers of the Bio-AIMS portal. The users can carry out an automatic retrieval of protein structures from PDB or upload their custom protein structural models from their disk created with LOMETS server. We demonstrated the PDB mining option performing a predictive study of 2000+ proteins with unknown function. Interesting results regarding the discovery of new Cancer Biomarkers in humans or drug targets in parasites have been discussed here in this sense.
Kim, Jeehyun; Kim, Jung Hoon; Yoon, Soon Ho; Choi, Won Seok; Kim, Young Jae; Han, Joon Koo; Choi, Byung-Ihn
2015-12-01
The aim of this study was to assess the feasibility of using dynamic contrast-enhanced ultrasound (DCE-US) with a 3-D transducer to evaluate therapeutic responses to targeted therapy. Rabbits with hepatic VX2 carcinomas, divided into a treatment group (n = 22, 30 mg/kg/d sorafenib) and a control group (n = 13), were evaluated with DCE-US using 2-D and 3-D transducers and computed tomography (CT) perfusion imaging at baseline and 1 d after the first treatment. Perfusion parameters were collected, and correlations between parameters were analyzed. In the treatment group, both volumetric and 2-D DCE-US perfusion parameters, including peak intensity (33.2 ± 19.9 vs. 16.6 ± 10.7, 63.7 ± 20.0 vs. 30.1 ± 19.8), slope (15.3 ± 12.4 vs. 5.7 ± 4.5, 37.3 ± 20.4 vs. 15.7 ± 13.0) and area under the curve (AUC; 1004.1 ± 560.3 vs. 611.4 ± 421.1, 1332.2 ± 708.3 vs. 670.4 ± 388.3), had significantly decreased 1 d after the first treatment (p = 0.00). In the control group, 2-D DCE-US revealed that peak intensity, time to peak and slope had significantly changed (p < 0.05); however, volumetric DCE-US revealed that peak intensity, time-intensity AUC, AUC during wash-in and AUC during wash-out had significantly changed (p = 0.00). CT perfusion imaging parameters, including blood flow, blood volume and permeability of the capillary vessel surface, had significantly decreased in the treatment group (p = 0.00); however, in the control group, peak intensity and blood volume had significantly increased (p = 0.00). It is feasible to use DCE-US with a 3-D transducer to predict early therapeutic response after targeted therapy because perfusion parameters, including peak intensity, slope and AUC, significantly decreased, which is similar to the trend observed for 2-D DCE-US and CT perfusion imaging parameters. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neuffer, David
We discuss injection of the 800 MeV proton beam from PIP-II into the production target for Mu2e-II, assuming a targeting and μ production scenario similar to that of mu2e. The incoming beam trajectory must be modified from the mu2e parameters to match the focusing fields. Adding a vertical deflection at injection enables the injected beam to reach the target. Other differences from the mu2e system must be considered, including changes in the target structure, the radiation shielding, and the beam dump/absorber. The H- beam should be stripped to p+. Other variations are discussed.
Designing novel cellulase systems through agent-based modeling and global sensitivity analysis.
Apte, Advait A; Senger, Ryan S; Fong, Stephen S
2014-01-01
Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement.
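A hedged sketch of the global sensitivity analysis step, assuming the SALib package is available and substituting a cheap analytic stand-in for the agent-based hydrolysis model (parameter names and bounds are illustrative):

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Placeholder for the agent-based cellulose hydrolysis model: a toy function
# returning "cellulose consumed" from three illustrative inputs.
def hydrolysis_model(half_life, exo_activity, complexed_fraction):
    return np.tanh(0.1 * half_life) * exo_activity * (0.5 + 0.5 * complexed_fraction)

problem = {
    "num_vars": 3,
    "names": ["half_life", "exo_activity", "complexed_fraction"],
    "bounds": [[1.0, 48.0], [0.1, 10.0], [0.0, 1.0]],
}

X = saltelli.sample(problem, 1024)                  # Sobol sample of the parameter space
Y = np.array([hydrolysis_model(*row) for row in X]) # run the (surrogate) model
Si = sobol.analyze(problem, Y)                      # first-order and total-order indices
print(dict(zip(problem["names"], Si["S1"])))
```

In the actual workflow the expensive ABM would replace the toy function, and the Sobol indices would rank the parameters listed in the abstract.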
Lateral position detection and control for friction stir systems
Fleming, Paul; Lammlein, David H.; Cook, George E.; Wilkes, Don Mitchell; Strauss, Alvin M.; Delapp, David R.; Hartman, Daniel A.
2012-06-05
An apparatus and computer program are disclosed for processing at least one workpiece using a rotary tool with rotating member for contacting and processing the workpiece. The methods include oscillating the rotary tool laterally with respect to a selected propagation path for the rotating member with respect to the workpiece to define an oscillation path for the rotating member. The methods further include obtaining force signals or parameters related to the force experienced by the rotary tool at least while the rotating member is disposed at the extremes of the oscillation. The force signals or parameters associated with the extremes can then be analyzed to determine a lateral position of the selected path with respect to a target path and a lateral offset value can be determined based on the lateral position. The lateral distance between the selected path and the target path can be decreased based on the lateral offset value.
Lateral position detection and control for friction stir systems
Fleming, Paul [Boulder, CO; Lammlein, David H [Houston, TX; Cook, George E [Brentwood, TN; Wilkes, Don Mitchell [Nashville, TN; Strauss, Alvin M [Nashville, TN; Delapp, David R [Ashland City, TN; Hartman, Daniel A [Fairhope, AL
2011-11-08
Friction stir methods are disclosed for processing at least one workpiece using a rotary tool with rotating member for contacting and processing the workpiece. The methods include oscillating the rotary tool laterally with respect to a selected propagation path for the rotating member with respect to the workpiece to define an oscillation path for the rotating member. The methods further include obtaining force signals or parameters related to the force experienced by the rotary tool at least while the rotating member is disposed at the extremes of the oscillation. The force signals or parameters associated with the extremes can then be analyzed to determine a lateral position of the selected path with respect to a target path and a lateral offset value can be determined based on the lateral position. The lateral distance between the selected path and the target path can be decreased based on the lateral offset value.
Role of step size and max dwell time in anatomy based inverse optimization for prostate implants
Manikandan, Arjunan; Sarkar, Biplab; Rajendran, Vivek Thirupathur; King, Paul R.; Sresty, N.V. Madhusudhana; Holla, Ragavendra; Kotur, Sachin; Nadendla, Sujatha
2013-01-01
In high dose rate (HDR) brachytherapy, the source dwell times and dwell positions are vital parameters in achieving a desirable implant dose distribution. Inverse treatment planning requires an optimal choice of these parameters to achieve the desired target coverage with the lowest achievable dose to the organs at risk (OAR). This study was designed to evaluate the optimum source step size and maximum source dwell time for prostate brachytherapy implants using an Ir-192 source. In total, one hundred inverse treatment plans were generated for the four patients included in this study. Twenty-five treatment plans were created for each patient by varying the step size and maximum source dwell time during anatomy-based, inverse-planned optimization. Other relevant treatment planning parameters were kept constant, including the dose constraints and source dwell positions. Each plan was evaluated for target coverage, urethral and rectal dose sparing, treatment time, relative target dose homogeneity, and nonuniformity ratio. The plans with 0.5 cm step size were seen to have clinically acceptable tumor coverage, minimal normal structure doses, and minimum treatment time as compared with the other step sizes. The target coverage for this step size is 87% of the prescription dose, while the urethral and maximum rectal doses were 107.3 and 68.7%, respectively. No appreciable difference in plan quality was observed with variation in maximum source dwell time. The step size plays a significant role in plan optimization for prostate implants. Our study supports use of a 0.5 cm step size for prostate implants. PMID:24049323
SU-E-T-113: Dose Distribution Using Respiratory Signals and Machine Parameters During Treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imae, T; Haga, A; Saotome, N
Purpose: Volumetric modulated arc therapy (VMAT) is a rotational intensity-modulated radiotherapy (IMRT) technique capable of acquiring projection images during treatment. Treatment plans for lung tumors using stereotactic body radiotherapy (SBRT) are calculated on planning computed tomography (CT) images of the exhale phase only. The purpose of this study is to evaluate the dose distribution reconstructed using only data acquired during treatment, namely respiratory signals and machine parameters. Methods: A phantom and three patients with lung tumors underwent CT scans for treatment planning. They were treated by VMAT while projection images were acquired to derive their respiratory signals and machine parameters, including positions of the multi-leaf collimators, dose rates and integrated monitor units. The respiratory signals were divided into 4 and 10 phases, and the machine parameters were correlated with the divided respiratory signals based on the gantry angle. Dose distributions for each respiratory phase were calculated from plans reconstructed from the respiratory signals and machine parameters recorded during treatment. The doses at the isocenter, the maximum point and the centroid of the target were evaluated. Results and Discussion: Dose distributions during treatment were calculated using the machine parameters and the respiratory signals detected from the projection images. The maximum dose difference between the planned and in-treatment distributions was −1.8±0.4% at the centroid of the target, and the dose differences at the evaluated points between the 4- and 10-phase reconstructions were not significant. Conclusion: The present method successfully evaluated the dose distribution using respiratory signals and machine parameters acquired during treatment and is feasible for verifying the actual dose to a moving target.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurt, Christopher J.; Freels, James D.; Hobbs, Randy W.
There has been a considerable effort over the previous few years to demonstrate and optimize the production of plutonium-238 (238Pu) at the High Flux Isotope Reactor (HFIR). This effort has involved resources from multiple divisions and facilities at the Oak Ridge National Laboratory (ORNL) to demonstrate the fabrication, irradiation, and chemical processing of targets containing neptunium-237 (237Np) dioxide (NpO2)/aluminum (Al) cermet pellets. A critical preliminary step to irradiation at the HFIR is to demonstrate the safety of the target under irradiation via documented experiment safety analyses. The steady-state thermal safety analyses of the target are simulated in a finite element model with the COMSOL Multiphysics code that determines, among other crucial parameters, the limiting maximum temperature in the target. Safety analysis efforts for this model discussed in the present report include: (1) initial modeling of single and reduced-length pellet capsules in order to generate an experimental knowledge base that incorporates initial non-linear contact heat transfer and fission gas equations, (2) modeling efforts for prototypical designs of partially loaded and fully loaded targets using limited available knowledge of fabrication and irradiation characteristics, and (3) the most recent and comprehensive modeling effort of a fully coupled thermo-mechanical approach over the entire fully loaded target domain incorporating burn-up dependent irradiation behavior and measured target and pellet properties, hereafter referred to as the production model. These models are used to conservatively determine several important steady-state parameters including target stresses and temperatures, the limiting condition of which is the maximum temperature with respect to the melting point. The single pellet model results provide a basis for the safety of the irradiations, followed by parametric analyses in the initial prototypical designs that were necessary due to the limiting fabrication and irradiation data available. The calculated parameters in the final production target model are the most accurate and comprehensive, while still conservative. Over 210 permutations in irradiation time and position were evaluated, and are supported by the most recent inputs and highest fidelity methodology. The results of these analyses show that the models presented in this report provide a robust and reliable basis for previous, current and future experiment safety analyses. In addition, they reveal an evolving knowledge of the steady-state behavior of the NpO2/Al pellets under irradiation for a variety of target encapsulations and potential conditions.
Feature Extraction for Pose Estimation. A Comparison Between Synthetic and Real IR Imagery
1991-12-01
[List-of-figures residue; recoverable captions: determining the orientation of the sensor relative to the target; effects of changing sensor and target parameters (reference object is a T-62 tank facing the viewer, with sensor/target parameters set equal to zero; NOTE: changing the target parameters produces anomalous results; for these images, the field of view (FOV) was not changed); image anomalies from changing the target.]
Measurement of absolute laser energy absorption by nano-structured targets
NASA Astrophysics Data System (ADS)
Park, Jaebum; Tommasini, R.; London, R.; Bargsten, C.; Hollinger, R.; Capeluto, M. G.; Shlyaptsev, V. N.; Rocca, J. J.
2017-10-01
Nano-structured targets have been reported to allow the realization of extreme plasma conditions using tabletop lasers, and have gained much interest as a platform to investigate ultra-high-energy-density plasmas (>100 MJ/cm3). One reason these targets achieve extreme conditions is increased laser energy absorption (LEA). The absolute LEA of nano-structured targets has been measured for the first time and compared to that of foil targets. The experimental results, including the effects of target parameters on the LEA, will be presented. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52097NA27344, and funded by LDRD (#15-ERD-054).
Analytical performance evaluation of SAR ATR with inaccurate or estimated models
NASA Astrophysics Data System (ADS)
DeVore, Michael D.
2004-09-01
Hypothesis testing algorithms for automatic target recognition (ATR) are often formulated in terms of some assumed distribution family. The parameter values corresponding to a particular target class together with the distribution family constitute a model for the target's signature. In practice such models exhibit inaccuracy because of incorrect assumptions about the distribution family and/or because of errors in the assumed parameter values, which are often determined experimentally. Model inaccuracy can have a significant impact on performance predictions for target recognition systems. Such inaccuracy often causes model-based predictions that ignore the difference between assumed and actual distributions to be overly optimistic. This paper reports on research to quantify the effect of inaccurate models on performance prediction and to estimate the effect using only trained parameters. We demonstrate that for large observation vectors the class-conditional probabilities of error can be expressed as a simple function of the difference between two relative entropies. These relative entropies quantify the discrepancies between the actual and assumed distributions and can be used to express the difference between actual and predicted error rates. Focusing on the problem of ATR from synthetic aperture radar (SAR) imagery, we present estimators of the probabilities of error in both ideal and plug-in tests expressed in terms of the trained model parameters. These estimators are defined in terms of unbiased estimates for the first two moments of the sample statistic. We present an analytical treatment of these results and include demonstrations from simulated radar data.
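A hedged sketch of the kind of asymptotic relation described above, assuming n i.i.d. observation components with actual density p and trained class models f0 and f1:

```latex
% Sketch under an i.i.d. assumption (not quoted from the paper):
% the normalized log-likelihood ratio under the actual distribution p converges to
% a difference of relative entropies between p and the two assumed class models.
\frac{1}{n}\log\frac{f_1(x_1,\dots,x_n)}{f_0(x_1,\dots,x_n)}
  \;\xrightarrow{\;n\to\infty\;}\;
  D\!\left(p\,\middle\|\,f_0\right)-D\!\left(p\,\middle\|\,f_1\right)
  \quad\text{a.s. under } p .
```

A per-sample threshold test on the log-likelihood ratio therefore asymptotically assigns samples drawn from p according to the sign of D(p||f0) - D(p||f1) - tau, so the gap between actual and model-predicted error rates is governed by how these two relative entropies shift when p differs from the assumed class model.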
Seven protective miRNA signatures for prognosis of cervical cancer.
Liu, Bei; Ding, Jin-Feng; Luo, Jian; Lu, Li; Yang, Fen; Tan, Xiao-Dong
2016-08-30
Cervical cancer is the second leading cause of cancer death in women in their 20s and 30s, yet studies of its prognosis remain limited. This study aims to identify prognosis-related miRNAs and characterize their functions. TCGA data from patients with cervical cancer were used to build univariate Cox models with a single clinical parameter or miRNA expression level, and a multivariate Cox model was built using both clinical information and miRNA expression levels. Finally, STRING was used to enrich gene ontology terms and pathways for the validated targets of the significant miRNAs and to visualize the interactions among them. Using univariate Cox models with clinical parameters, we found that two clinical parameters, tobacco use and clinical stage, and seven miRNAs were highly correlated with survival status. Using only the expression levels of the miRNA signatures, the model successfully separated patients into high-risk and low-risk groups. An optimal feature-selected model was proposed based on two clinical parameters and seven miRNAs. Functional analysis of these seven miRNAs showed that they are associated with various cancer-related pathways, including the MAPK, VEGF and p53 pathways. These results support the identification of targets for targeted therapy, which could potentially allow treatment of cervical cancer patients to be tailored.
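The Cox-model screening described above can be sketched with any standard survival-analysis library; the snippet below is a minimal illustration on synthetic data (the lifelines package, the column names, and the two toy miRNAs are assumptions, not taken from the study):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # assumed available; any Cox PH implementation works

# Toy stand-in for TCGA-like data: survival time, event flag, two clinical
# covariates and a few miRNA expression levels (the real study used seven).
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "time": rng.exponential(36.0, n),          # months of follow-up
    "event": rng.integers(0, 2, n),            # 1 = death observed
    "tobacco": rng.integers(0, 2, n),
    "stage": rng.integers(1, 5, n),
    "miR_a": rng.normal(0.0, 1.0, n),
    "miR_b": rng.normal(0.0, 1.0, n),
})

# Univariate screening: fit one Cox model per candidate covariate.
for col in ["tobacco", "stage", "miR_a", "miR_b"]:
    m = CoxPHFitter().fit(df[["time", "event", col]], duration_col="time", event_col="event")
    print(col, float(m.summary.loc[col, "p"]))

# Multivariate model on the retained covariates; risk scores split patients
# into high- and low-risk groups at the median.
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
risk = cph.predict_partial_hazard(df)
high_risk = risk > risk.median()
```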
Financial gains and risks in pay-for-performance bonus algorithms.
Cromwell, Jerry; Drozd, Edward M; Smith, Kevin; Trisolini, Michael
2007-01-01
Considerable attention has been given to evidence-based process indicators associated with quality of care, while much less attention has been given to the structure and key parameters of the various pay-for-performance (P4P) bonus and penalty arrangements using such measures. In this article we develop a general model of quality payment arrangements and discuss the advantages and disadvantages of the key parameters. We then conduct simulation analyses of four general P4P payment algorithms by varying seven parameters, including indicator weights, indicator intercorrelation, degree of uncertainty regarding intervention effectiveness, and initial baseline rates. Bonuses averaged over several indicators appear insensitive to weighting, correlation, and the number of indicators. The bonuses are sensitive to disease manager perceptions of intervention effectiveness, facing challenging targets, and the use of actual-to-target quality levels versus rates of improvement over baseline.
Statistical Inference for Data Adaptive Target Parameters.
Hubbard, Alan E; Kherad-Pajouh, Sara; van der Laan, Mark J
2016-05-01
Consider one observes n i.i.d. copies of a random variable with a probability distribution that is known to be an element of a particular statistical model. In order to define our statistical target we partition the sample in V equal size sub-samples, and use this partitioning to define V splits in an estimation sample (one of the V subsamples) and corresponding complementary parameter-generating sample. For each of the V parameter-generating samples, we apply an algorithm that maps the sample to a statistical target parameter. We define our sample-split data adaptive statistical target parameter as the average of these V-sample specific target parameters. We present an estimator (and corresponding central limit theorem) of this type of data adaptive target parameter. This general methodology for generating data adaptive target parameters is demonstrated with a number of practical examples that highlight new opportunities for statistical learning from data. This new framework provides a rigorous statistical methodology for both exploratory and confirmatory analysis within the same data. Given that more research is becoming "data-driven", the theory developed within this paper provides a new impetus for a greater involvement of statistical inference into problems that are being increasingly addressed by clever, yet ad hoc pattern finding methods. To suggest such potential, and to verify the predictions of the theory, extensive simulation studies, along with a data analysis based on adaptively determined intervention rules are shown and give insight into how to structure such an approach. The results show that the data adaptive target parameter approach provides a general framework and resulting methodology for data-driven science.
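A minimal sketch of the sample-splitting scheme described above, with a toy data adaptive parameter (the threshold rule and function names are illustrative only):

```python
import numpy as np

def data_adaptive_target(data, target_algorithm, estimator, V=5, seed=0):
    """Sample-split data adaptive target parameter (schematic sketch).

    For each of V splits, the parameter-generating sample defines the target
    parameter via `target_algorithm`, and the complementary estimation sample
    is used by `estimator` to estimate it.  The reported value is the average
    of the V split-specific estimates."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    folds = np.array_split(idx, V)
    estimates = []
    for v in range(V):
        estimation_sample = data[folds[v]]
        generating_sample = data[np.concatenate([folds[u] for u in range(V) if u != v])]
        target = target_algorithm(generating_sample)   # data adaptive definition of the parameter
        estimates.append(estimator(estimation_sample, target))
    return float(np.mean(estimates))

# Toy illustration: the "algorithm" picks a data-driven threshold (the
# generating-sample median) and the target is the mean above that threshold.
data = np.random.default_rng(2).normal(size=500)
theta = data_adaptive_target(
    data,
    target_algorithm=lambda d: np.median(d),
    estimator=lambda d, thr: d[d > thr].mean(),
)
```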
Radar investigation of asteroids
NASA Technical Reports Server (NTRS)
Ostro, S. J.
1981-01-01
Radar investigations were conducted of selected minor planets, including: (1) observations during 1981-82 of 10 potential targets (2 Pallas, 8 Flora, 12 Victoria, 15 Eunomia, 19 Fortuna, 22 Kalliope, 132 Aethra, 219 Thusnelda, 433 Eros, and 2100 Ra-Shalom); and (2) continued analyses of observational data obtained during 1980-81 for 10 other asteroids (4 Vesta, 7 Iris, 16 Psyche, 75 Eurydike, 97 Klotho, 216 Kleopatra, 1685 Toro, 1862 Apollo, 1865 Cerberus, and 1915 Quetzalcoatl). Scientific objectives include estimation of echo strength, polarization, spectral shape, spectral bandwidth, and Doppler shift. These measurements: (1) yield estimates of target size, shape, and spin vector; (2) place constraints on topography, morphology, and composition of the planetary surface; (3) yield refined estimates of target orbital parameters; (4) reveal the presence of asteroidal satellites.
Role of vision in aperture closure control during reach-to-grasp movements.
Rand, Miya K; Lemay, Martin; Squire, Linda M; Shimansky, Yury P; Stelmach, George E
2007-08-01
We have previously shown that the distance from the hand to the target at which finger closure is initiated during the reach (aperture closure distance) depends on the amplitude of peak aperture, as well as hand velocity and acceleration. This dependence suggests the existence of a control law according to which a decision to initiate finger closure during the reach is made when the hand distance to target crosses a threshold that is a function of the above movement-related parameters. The present study examined whether the control law is affected by manipulating the visibility of the hand and the target. Young adults made reach-to-grasp movements to a dowel under conditions in which the target or the hand or both were either visible or not visible. Reaching for and grasping a target when the hand and/or target were not visible significantly increased transport time and widened peak aperture. Aperture closure distance was significantly lengthened and wrist peak velocity was decreased only when the target was not visible. Further analysis showed that the control law was significantly different between the visibility-related conditions. When either the hand or target was not visible, the aperture closure distance systematically increased compared to its value for the same amplitude of peak aperture, hand velocity, and acceleration under full visibility. This implies an increase in the distance-related safety margin for grasping when the hand or target is not visible. It has been also found that the same control law can be applied to all conditions, if variables describing hand and target visibility were included in the control law model, as the parameters of the task-related environmental context, in addition to the above movement-related parameters. This suggests that the CNS utilizes those variables for controlling grasp initiation based on a general control law.
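One hedged way to write such a control law (the linear form, coefficients, and visibility indicator terms below are illustrative assumptions, not the fitted model from the study) is that finger closure is initiated at the first time t for which

```latex
% d(t): hand-to-target distance; A_peak: peak aperture amplitude;
% v, a: hand velocity and acceleration; V_hand, V_target: 0/1 visibility indicators.
d(t)\le d_c
  = \beta_0+\beta_1 A_{\mathrm{peak}}+\beta_2\, v(t)+\beta_3\, a(t)
    +\beta_4\,(1-V_{\mathrm{hand}})+\beta_5\,(1-V_{\mathrm{target}}) ,
```

where positive beta_4 and beta_5 would capture the increased distance-related safety margin when the hand or the target is not visible.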
2004-01-01
[Extraction residue; recoverable fragments: "... login identity to the one under which the system call is executed, the parameters of the system call execution, file names including full path ..."; a comparison table listing COAST-EIMDT (distributed on target hosts) and EMERALD (distributed on target hosts and security servers) under signature recognition and anomaly detection; "... uses a centralized architecture, and employs an anomaly detection technique for intrusion detection. The EMERALD project [80] proposes a ..."]
Design of ligand-targeted nanoparticles for enhanced cancer targeting
NASA Astrophysics Data System (ADS)
Stefanick, Jared F.
Ligand-targeted nanoparticles are increasingly used as drug delivery vehicles for cancer therapy, yet have not consistently produced successful clinical outcomes. Although these inconsistencies may arise from differences in disease models and target receptors, nanoparticle design parameters can significantly influence therapeutic efficacy. By employing a multifaceted synthetic strategy to prepare peptide-targeted nanoparticles with high purity, reproducibility, and precisely controlled stoichiometry of functionalities, this work evaluates the roles of polyethylene glycol (PEG) coating, ethylene glycol (EG) peptide-linker length, peptide hydrophilicity, peptide density, and nanoparticle size on tumor targeting in a systematic manner. These parameters were analyzed in multiple disease models by targeting human epidermal growth factor receptor 2 (HER2) in breast cancer and very late antigen-4 (VLA-4) in multiple myeloma to demonstrate the widespread applicability of this approach. By increasing the hydrophilicity of the targeting peptide sequence and simultaneously optimizing the EG peptide-linker length, the in vitro cellular uptake of targeted liposomes was significantly enhanced. Specifically, including a short oligolysine chain adjacent to the targeting peptide sequence effectively increased cellular uptake ~80-fold using an EG6 peptide-linker compared to ~10-fold using an EG45 linker. In vivo, targeted liposomes prepared in a traditional manner lacking the oligolysine chain demonstrated similar biodistribution and tumor uptake to non-targeted liposomes. However, by including the oligolysine chain, targeted liposomes using an EG45 linker significantly improved tumor uptake ~8-fold over non-targeted liposomes, while the use of an EG6 linker decreased tumor accumulation and uptake, owing to differences in cellular uptake kinetics, clearance mechanisms, and binding site barrier effects. To further improve tumor targeting and enhance the selectivity of targeted nanoparticles, a dual-receptor targeted approach was evaluated by targeting multiple cell surface receptors simultaneously. Liposomes functionalized with two distinct peptide antagonists to target VLA-4 and Leukocyte Peyer's Patch Adhesion Molecule-1 (LPAM-1) demonstrated synergistically enhanced cellular uptake by cells overexpressing both target receptors and negligible uptake by cells that do not simultaneously express both receptors, providing a strategy to improve selectivity over conventional single receptor-targeted designs. Taken together, this process of systematic optimization of well-defined nanoparticle drug delivery systems has the potential to improve cancer therapy for a broader patient population.
Glassman, Patrick M; Chen, Yang; Balthasar, Joseph P
2015-10-01
Preclinical assessment of monoclonal antibody (mAb) disposition during drug development often includes investigations in non-human primate models. In many cases, mAb exhibit non-linear disposition that relates to mAb-target binding [i.e., target-mediated disposition (TMD)]. The goal of this work was to develop a physiologically-based pharmacokinetic (PBPK) model to predict non-linear mAb disposition in plasma and in tissues in monkeys. Physiological parameters for monkeys were collected from several sources, and plasma data for several mAbs associated with linear pharmacokinetics were digitized from prior literature reports. The digitized data displayed great variability; therefore, parameters describing inter-antibody variability in the rates of pinocytosis and convection were estimated. For prediction of the disposition of individual antibodies, we incorporated tissue concentrations of target proteins, where concentrations were estimated based on categorical immunohistochemistry scores, and with assumed localization of target within the interstitial space of each organ. Kinetics of target-mAb binding and target turnover, in the presence or absence of mAb, were implemented. The model was then employed to predict concentration versus time data, via Monte Carlo simulation, for two mAb that have been shown to exhibit TMD (2F8 and tocilizumab). Model predictions, performed a priori with no parameter fitting, were found to provide good prediction of dose-dependencies in plasma clearance, the areas under plasma concentration versus time curves, and the time-course of plasma concentration data. This PBPK model may find utility in predicting plasma and tissue concentration versus time data and, potentially, the time-course of receptor occupancy (i.e., mAb-target binding) to support the design and interpretation of preclinical pharmacokinetic-pharmacodynamic investigations in non-human primates.
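For orientation, the target-mediated binding and turnover kinetics mentioned above are often written as a small ODE system; the sketch below is a single-compartment TMD model with made-up rate constants, not the authors' whole-body PBPK model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rate constants (1/day unless noted); not fitted to any mAb.
k_el, k_on, k_off = 0.05, 1.0, 0.1      # linear elimination, binding on/off
k_syn, k_deg, k_int = 1.0, 0.2, 0.5     # target synthesis, degradation, complex internalization

def tmd(t, y):
    C, R, RC = y                         # free mAb, free target, mAb-target complex
    bind = k_on * C * R - k_off * RC
    dC = -k_el * C - bind
    dR = k_syn - k_deg * R - bind
    dRC = bind - k_int * RC
    return [dC, dR, dRC]

R0 = k_syn / k_deg                       # baseline target level
sol = solve_ivp(tmd, (0.0, 60.0), [10.0, R0, 0.0], t_eval=np.linspace(0.0, 60.0, 200))
occupancy = sol.y[2] / (sol.y[1] + sol.y[2])   # fraction of target bound over time
```

The same binding/turnover terms appear, organ by organ, inside a full PBPK structure, which is what makes the plasma clearance dose-dependent.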
Data communications in a parallel active messaging interface of a parallel computer
Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E
2014-02-11
Data communications in a parallel active messaging interface ('PAMI') of a parallel computer, the parallel computer including a plurality of compute nodes that execute a parallel application, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution of a compute node, including specification of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications; receiving in an origin endpoint of the PAMI a data communications instruction, the instruction characterized by instruction type, the instruction specifying a transmission of transfer data from the origin endpoint to a target endpoint; and transmitting, in accordance with the instruction type, the transfer data from the origin endpoint to the target endpoint.
Using Rutherford Backscattering Spectroscopy to Characterize Targets for MTW
NASA Astrophysics Data System (ADS)
Brown, Gunnar; Stockler, Barak; Ward, Ryan; Freeman, Charlie; Padalino, Stephen; Stillman, Collin; Ivancic, Steven; Reagan, S. P.; Sangster, T. C.
2017-10-01
A study is underway to determine the composition and thickness of targets used at the Multiterawatt (MTW) laser facility at the Laboratory for Laser Energetics (LLE) using Rutherford backscattering spectroscopy (RBS). In RBS, an ion beam is incident on a sample and the scattered ions are detected with a surface barrier detector. The resulting energy spectra of the scattered ions can be analyzed to determine important parameters of the target including elemental composition and thickness. Proton, helium and deuterium beams from the 1.7 MV Pelletron accelerator at SUNY Geneseo have been used to characterize several different targets for MTW, including CH and aluminum foils of varying thickness. RBS spectra were also obtained for a cylindrical iron buried-layer target with aluminum dopant which was mounted on a silicon carbide stalk. The computer program SIMNRA is used to analyze the spectra. This work was funded in part by a Grant from the DOE through the Laboratory for Laser Energetics.
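For background, RBS analysis rests on the elastic-scattering kinematic factor (a textbook relation, not a detail specific to this study):

```latex
% A projectile of mass M_1 and energy E_0 backscattered at laboratory angle
% \theta from a target nucleus of mass M_2 emerges with energy E_1 = K E_0.
K=\left[\frac{M_1\cos\theta+\sqrt{M_2^{2}-M_1^{2}\sin^{2}\theta}}{M_1+M_2}\right]^{2}
```

Measuring E1 at a known angle identifies the target element through M2, while the additional energy loss of ions scattered at depth encodes the layer thickness, which is the information SIMNRA extracts from the measured spectra.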
Astone, Pia; Weinstein, Alan; Agathos, Michalis; Bejger, Michał; Christensen, Nelson; Dent, Thomas; Graff, Philip; Klimenko, Sergey; Mazzolo, Giulio; Nishizawa, Atsushi; Robinet, Florent; Schmidt, Patricia; Smith, Rory; Veitch, John; Wade, Madeline; Aoudia, Sofiane; Bose, Sukanta; Calderon Bustillo, Juan; Canizares, Priscilla; Capano, Colin; Clark, James; Colla, Alberto; Cuoco, Elena; Da Silva Costa, Carlos; Dal Canton, Tito; Evangelista, Edgar; Goetz, Evan; Gupta, Anuradha; Hannam, Mark; Keitel, David; Lackey, Benjamin; Logue, Joshua; Mohapatra, Satyanarayan; Piergiovanni, Francesco; Privitera, Stephen; Prix, Reinhard; Pürrer, Michael; Re, Virginia; Serafinelli, Roberto; Wade, Leslie; Wen, Linqing; Wette, Karl; Whelan, John; Palomba, C; Prodi, G
The Amaldi 10 Parallel Session C2 on gravitational wave (GW) search results, data analysis and parameter estimation included three lively sessions of lectures by 13 presenters, and 34 posters. The talks and posters covered a huge range of material, including results and analysis techniques for ground-based GW detectors, targeting anticipated signals from different astrophysical sources: compact binary inspiral, merger and ringdown; GW bursts from intermediate mass binary black hole mergers, cosmic string cusps, core-collapse supernovae, and other unmodeled sources; continuous waves from spinning neutron stars; and a stochastic GW background. There was considerable emphasis on Bayesian techniques for estimating the parameters of coalescing compact binary systems from the gravitational waveforms extracted from the data from the advanced detector network. This included methods to distinguish deviations of the signals from what is expected in the context of General Relativity.
In-field experiment of electro-hydraulic tillage depth draft-position mixed control on tractor
NASA Astrophysics Data System (ADS)
Han, Jiangyi; Xia, Changgao; Shang, Gaogao; Gao, Xiang
2017-12-01
The soil condition and the condition of the plow affect the tillage resistance and the maximum traction of a tractor. To improve the adaptability of tractor tillage depth control, a multi-parameter control strategy is proposed that includes a tillage depth target, a draft force target, and a draft-position mixing ratio. In this strategy, a resistance coefficient is used to adjust the draft force target. Based on a JINMA1204 tractor, an electro-hydraulic hitch prototype was constructed in which these control parameters can be set, and a fuzzy controller for draft-position mixed control was designed. In-field experiments of position control showed that the tillage depth error was less than ±20 mm. Experiments with draft-position control showed that the draft force and the tillage depth could be adjusted through multiple parameters such as tillage depth, resistance coefficient, and draft-position mixing coefficient. The multi-parameter control strategy can therefore improve the adaptability of tillage depth control under various soil and plow conditions.
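As a rough illustration of draft-position mixed control (the blending law, signs, and gains below are assumptions, not the paper's fuzzy controller):

```python
def mixed_control(depth_meas, depth_target, draft_meas, draft_target,
                  mix=0.5, k_depth=1.0, k_draft=0.02):
    """Return a hitch lower/raise command (positive = lower the implement)
    from a weighted blend of the position (tillage depth) error and the
    draft-force error.  mix = 1.0 is pure position control, mix = 0.0 is
    pure draft control."""
    e_depth = depth_target - depth_meas            # m, positive -> plow deeper
    e_draft = draft_meas - draft_target            # N, positive -> reduce depth
    return mix * k_depth * e_depth - (1.0 - mix) * k_draft * e_draft

# The resistance coefficient mentioned above could scale the draft target,
# e.g. draft_target = resistance_coeff * nominal_draft, before calling the law.
u = mixed_control(depth_meas=0.18, depth_target=0.20,
                  draft_meas=9000.0, draft_target=8000.0, mix=0.6)
```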
Inverse design of bulk morphologies in block copolymers using particle swarm optimization
NASA Astrophysics Data System (ADS)
Khadilkar, Mihir; Delaney, Kris; Fredrickson, Glenn
Multiblock polymers are a versatile platform for creating a large range of nanostructured materials with novel morphologies and properties. However, achieving desired structures or property combinations is difficult due to a vast design space comprised of parameters including monomer species, block sequence, block molecular weights and dispersity, copolymer architecture, and binary interaction parameters. Navigating through such vast design spaces to achieve an optimal formulation for a target structure or property set requires an efficient global optimization tool wrapped around a forward simulation technique such as self-consistent field theory (SCFT). We report on such an inverse design strategy utilizing particle swarm optimization (PSO) as the global optimizer and SCFT as the forward prediction engine. To avoid metastable states in forward prediction, we utilize pseudo-spectral variable cell SCFT initiated from a library of defect free seeds of known block copolymer morphologies. We demonstrate that our approach allows for robust identification of block copolymers and copolymer alloys that self-assemble into a targeted structure, optimizing parameters such as block fractions, blend fractions, and Flory chi parameters.
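A minimal sketch of the optimization loop (with a toy objective standing in for the SCFT forward prediction; parameter names and bounds are illustrative):

```python
import numpy as np

def pso_minimize(objective, bounds, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Bare-bones particle swarm optimizer.  In the inverse-design setting the
    objective would wrap an SCFT calculation and score the mismatch between the
    predicted and the targeted morphology; here it is any callable on R^d."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, float(pbest_f.min())

# Toy objective standing in for "SCFT mismatch to target structure": block
# fraction f and chi*N that minimize a made-up penalty around (0.3, 20).
best, score = pso_minimize(lambda p: (p[0] - 0.3) ** 2 + ((p[1] - 20.0) / 10.0) ** 2,
                           bounds=[(0.1, 0.9), (10.0, 40.0)])
```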
Scientific Analysis of Data for the ISTP/SOLARMAX Programs
NASA Technical Reports Server (NTRS)
Lazarus, Alan J.
2001-01-01
This grant supplemented our work on data analysis from the Wind spacecraft, which was one of the ISTP fleet of spacecraft. It was targeted at observations related to the time of solar maximum in 2000. The work we proposed to do under this grant included comparison of solar wind parameters obtained from different spacecraft in order to establish correlation lengths appropriate to the solar wind, and also comparison of parameters to explore solar cycle effects.
Passive and iontophoretic transport through the skin polar pathway.
Li, S K; Peck, K D
2013-01-01
The purpose of the present article is to briefly recount the contributions of Prof. William I. Higuchi to the area of skin transport. These contributions include developing fundamental knowledge of the barrier properties of the stratum corneum, mechanisms of skin transport, concentration gradient across skin in topical drug applications that target the viable epidermal layer, and permeation enhancement by chemical and electrical means. The complex and changeable nature of the skin barrier makes it difficult to assess and characterize the critical parameters that influence skin permeation. The systematic and mechanistic approaches taken by Dr. Higuchi in studying these parameters provided fundamental knowledge in this area and had a measured and lasting influence upon this field of study. This article specifically reviews the validation and characterization of the polar permeation pathway, the mechanistic model of skin transport, the influence of the dermis on the target skin concentration concept, and iontophoretic transport across the polar pathway of skin including the effects of electroosmosis and electropermeabilization. © 2013 S. Karger AG, Basel.
Mileva, Viktoria R.; Little, Anthony C.; Roberts, S. Craig
2017-01-01
Non-verbal behaviours, including voice characteristics during speech, are an important way to communicate social status. Research suggests that individuals can obtain high social status through dominance (using force and intimidation) or through prestige (by being knowledgeable and skilful). However, little is known regarding differences in the vocal behaviour of men and women in response to dominant and prestigious individuals. Here, we tested within-subject differences in vocal parameters of interviewees during simulated job interviews with dominant, prestigious, and neutral employers (targets), while responding to questions which were classified as introductory, personal, and interpersonal. We found that vocal modulations were apparent between responses to the neutral and high-status targets, with participants, especially those who perceived themselves as low in dominance, increasing fundamental frequency (F0) in response to the dominant and prestigious targets relative to the neutral target. Self-perceived prestige, however, was less related to contextual vocal modulations than self-perceived dominance. Finally, we found that differences in the context of the interview questions participants were asked to respond to (introductory, personal, interpersonal), also affected their vocal parameters, being more prominent in responses to personal and interpersonal questions. Overall, our results suggest that people adjust their vocal parameters according to the perceived social status of the listener as well as their own self-perceived social status. PMID:28614413
Zhang, Jinshui; Yuan, Zhoumiqi; Shuai, Guanyuan; Pan, Yaozhong; Zhu, Xiufang
2017-04-26
This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model to map specific land cover by integrating training and window-based validation sets. Compared to the conventional approach, where the validation set included target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructed a tightened hypersphere because of the compact constraint imposed by the outlier pixels, which were located neighboring the target class in the spectral feature space. The overall accuracies achieved for wheat and bare land were as high as 89.25% and 83.65%, respectively. However, the target class was underestimated because the validation set covers only a small fraction of the heterogeneous spectra of the target class. Different window sizes were then tested to acquire more wheat pixels for the validation set. The results showed that classification accuracy increased with increasing window size, and the overall accuracies were higher than 88% at all window size scales. Moreover, WVS-SVDD showed much less sensitivity to the untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits in using the optimal parameters, the tradeoff coefficient (C) and the kernel width (s), to map homogeneous specific land cover.
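A hedged sketch of this kind of parameter selection, using scikit-learn's OneClassSVM as a stand-in for SVDD and synthetic spectra in place of the window-based validation pixels:

```python
import numpy as np
from sklearn.svm import OneClassSVM

def select_svdd_params(train_target, val_target, val_outlier, nus, gammas):
    """Grid-search the trade-off and kernel-width parameters of a one-class
    classifier.  `train_target` holds target-class training spectra; the
    validation set mixes target pixels and spectrally neighboring outlier
    pixels (e.g., taken from a window around the training fields)."""
    best, best_acc = None, -1.0
    for nu in nus:
        for gamma in gammas:
            clf = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(train_target)
            acc = 0.5 * ((clf.predict(val_target) == 1).mean()
                         + (clf.predict(val_outlier) == -1).mean())
            if acc > best_acc:
                best, best_acc = (nu, gamma), acc
    return best, best_acc

rng = np.random.default_rng(3)
train = rng.normal(0.0, 1.0, (200, 6))            # target-class spectra (6 bands)
val_t = rng.normal(0.0, 1.0, (50, 6))             # target pixels in the validation window
val_o = rng.normal(2.0, 1.0, (50, 6))             # neighboring non-target spectra
(nu, gamma), acc = select_svdd_params(train, val_t, val_o,
                                      nus=[0.01, 0.05, 0.1], gammas=[0.05, 0.1, 0.5])
```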
Slavinskaya, N. A.; Abbasi, M.; Starcke, J. H.; ...
2017-01-24
An automated data-centric infrastructure, Process Informatics Model (PrIMe), was applied to validation and optimization of a syngas combustion model. The Bound-to-Bound Data Collaboration (B2BDC) module of PrIMe was employed to discover the limits of parameter modifications based on uncertainty quantification (UQ) and consistency analysis of the model–data system and experimental data, including shock-tube ignition delay times and laminar flame speeds. Existing syngas reaction models are reviewed, and the selected kinetic data are described in detail. Empirical rules were developed and applied to evaluate the uncertainty bounds of the literature experimental data. Here, the initial H2/CO reaction model, assembled from 73 reactions and 17 species, was subjected to a B2BDC analysis. For this purpose, a dataset was constructed that included a total of 167 experimental targets and 55 active model parameters. Consistency analysis of the composed dataset revealed disagreement between models and data. Further analysis suggested that removing 45 experimental targets, 8 of which were self-inconsistent, would lead to a consistent dataset. This dataset was subjected to a correlation analysis, which highlights possible directions for parameter modification and model improvement. Additionally, several methods of parameter optimization were applied, some of them unique to the B2BDC framework. The optimized models demonstrated improved agreement with experiments compared to the initially assembled model, and their predictions for experiments not included in the initial dataset (i.e., a blind prediction) were investigated. The results demonstrate benefits of applying the B2BDC methodology for developing predictive kinetic models.
Microgravity Impact Experiments: The Prime Campaign on the NASA KC-135
NASA Astrophysics Data System (ADS)
Colwell, Joshua E.; Sture, Stein; Lemos, Andreas R.
2002-11-01
Low velocity collisions (v less than 100 m/s) occur in a number of astrophysical contexts, including planetary rings, protoplanetary disks, the Kuiper belt of comets, and in secondary cratering events on asteroids and planetary satellites. In most of these situations the surface gravity of the target is less than a few per cent of 1 g. Asteroids and planetary satellites are observed to have a regolith consisting of loose, unconsolidated material. Planetary ring particles likely are also coated with dust based on observations of dust within ring systems. The formation of planetesimals in protoplanetary disks begins with the accretion of dust particles. The response of the surface dust layer to collisions in the near absence of gravity is necessary for understanding the evolution of these systems. The Collisions Into Dust Experiment (COLLIDE) performs six impact experiments into simulated regolith in microgravity conditions on the space shuttle. The parameter space to be explored is quite large, including effects such as impactor mass and velocity, impact angle, target porosity, size distribution, and particle shape. We have developed an experiment, the Physics of Regolith Impacts in Microgravity Experiment (PRIME), that is analogous to COLLIDE that is optimized for flight on the NASA KC-135 reduced gravity aircraft. The KC-135 environment provides the advantage of more rapid turnover between experiments, allowing a broader range of parameters to be studied quickly, and more room for the experiment so that more impact experiments can be performed each flight. The acceleration environment of the KC-135 is not as stable and minimal as on the space shuttle, and this requires impact velocities to be higher than the minimum achievable with COLLIDE. The experiment consists of an evacuated PRIME Impact Chamber (PIC) with an aluminum base plate and acrylic sides and top. A target tray, launcher, and mirror mount to the base plate. The launcher may be positioned to allow for impacts at angles of 30, 45, 60, and 90 degrees with respect to the target surface. The target material is contained in a 10 cm by 10 cm by 2 cm tray with a rotating door that is opened via a mechanical feed-through on the base plate. A spring-loaded inner door provides uniform compression on the target material prior to operation of the experiment to keep the material from settling or locking up during vibrations prior to the experiment. Data is recorded with the NASA high speed video camera. Frame rates are selected according to the impact parameters. The direct camera view is orthogonal to the projectile line of motion, and the mirrors within the PIC provide a view normal to the target surface. The spring-loaded launchers allow for projectile speeds between 10 cm/s and 500 cm/s with a variety of impactor sizes and densities. On each flight 8 PICs will be used, each one with a different set of impact parameters. Additional information is included in the original extended abstract.
Implosion of multilayered cylindrical targets driven by intense heavy ion beams.
Piriz, A R; Portugues, R F; Tahir, N A; Hoffmann, D H H
2002-11-01
An analytical model for the implosion of a multilayered cylindrical target driven by an intense heavy ion beam has been developed. The target is composed of a cylinder of frozen hydrogen or deuterium, which is enclosed in a thick shell of solid lead. This target has been designed for future high-energy-density matter experiments to be carried out at the Gesellschaft für Schwerionenforschung, Darmstadt. The model describes the implosion dynamics including the motion of the incident shock and the first reflected shock and allows for calculation of the physical conditions of the hydrogen at stagnation. The model predicts that the conditions of the compressed hydrogen are not sensitive to significant variations in target and beam parameters. These predictions are confirmed by one-dimensional numerical simulations and thus allow for a robust target design.
2013-01-01
Background Achieving target levels of laboratory parameters of bone and mineral metabolism in chronic kidney disease (CKD) patients is important but also difficult in those living with end-stage kidney disease. This study aimed to determine if there are age-related differences in chronic kidney disease-mineral and bone disorder (CKD-MBD) characteristics, including treatment practice in Hungarian dialysis patients. Methods Data were collected retrospectively from a large cohort of dialysis patients in Hungary. Patients on hemodialysis and peritoneal dialysis were also included. The enrolled patients were allocated into two groups based on their age (<65 years and ≥65 years). Characteristics of the age groups and differences in disease-related (epidemiology, laboratory, and treatment practice) parameters between the groups were analyzed. Results A total of 5008 patients were included in the analysis and the mean age was 63.4±14.2 years. A total of 47.2% of patients were women, 32.8% had diabetes, and 11.4% were on peritoneal dialysis. Diabetes (37.9% vs 27.3%), bone disease (42.9% vs 34.1%), and soft tissue calcification (56.3% vs 44.7%) were more prevalent in the older group than the younger group (p<0.001 for all). We found an inverse relationship between age and parathyroid hormone (PTH) levels (p<0.001). Serum PTH levels were lower in patients with diabetes compared with those without diabetes below 80 years (p<0.001). Diabetes and age were independently associated with serum PTH levels (interaction: diabetes × age groups, p=0.138). Older patients were more likely than younger patients to achieve laboratory target ranges for each parameter (Ca: 66.9% vs 62.1%, p<0.001; PO4: 52.6% vs 49.2%, p<0.05; and PTH: 50.6% vs 46.6%, p<0.01), and for combined parameters (19.8% vs 15.8%, p<0.001). Older patients were less likely to receive related medication than younger patients (66.9% vs 79.7%, p<0.001). Conclusions The achievement of laboratory target ranges for bone and mineral metabolism and clinical practice in CKD depends on the age of the patients. A greater proportion of older patients met target criteria and received less medication compared with younger patients. PMID:23865464
External calibration of polarimetric radars using point and distributed targets
NASA Technical Reports Server (NTRS)
Yueh, S. H.; Kong, J. A.; Shin, R. T.
1991-01-01
Polarimetric calibration algorithms using combinations of point targets and reciprocal distributed targets are developed. From the reciprocity relations of distributed targets, an equivalent point target response is derived. Then the problem of polarimetric calibration using two point targets and one distributed target reduces to that using three point targets, which has been previously solved. For calibration using one point target and one reciprocal distributed target, two cases are analyzed with the point target being a trihedral reflector or a polarimetric active radar calibrator (PARC). For both cases, the general solutions of the system distortion matrices are written as a product of a particular solution and a matrix with one free parameter. For the trihedral-reflector case, this free parameter is determined by assuming azimuthal symmetry for the distributed target. For the PARC case, knowledge of one ratio of two covariance matrix elements of the distributed target is required to solve for the free parameter. Numerical results are simulated to demonstrate the usefulness of the developed algorithms.
External calibration of polarimetric radars using point and distributed targets
NASA Astrophysics Data System (ADS)
Yueh, S. H.; Kong, J. A.; Shin, R. T.
1991-08-01
Polarimetric calibration algorithms using combinations of point targets and reciprocal distributed targets are developed. From the reciprocity relations of distributed targets, an equivalent point target response is derived. Then the problem of polarimetric calibration using two point targets and one distributed target reduces to that using three point targets, which has been previously solved. For calibration using one point target and one reciprocal distributed target, two cases are analyzed with the point target being a trihedral reflector or a polarimetric active radar calibrator (PARC). For both cases, the general solutions of the system distortion matrices are written as a product of a particular solution and a matrix with one free parameter. For the trihedral-reflector case, this free parameter is determined by assuming azimuthal symmetry for the distributed target. For the PARC case, knowledge of one ratio of two covariance matrix elements of the distributed target is required to solve for the free parameter. Numerical results are simulated to demonstrate the usefulness of the developed algorithms.
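As a rough numerical illustration of distortion-matrix calibration with point targets (a simplified stand-in, not the two- or three-target algorithms of the paper), the sketch below assumes four point targets with known, linearly independent scattering matrices and estimates the combined 4 x 4 transmit/receive distortion operator by linear least squares; all matrices and noise levels are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def vec(m):
    # column-stack a 2x2 matrix into a length-4 vector (fixed convention throughout)
    return m.flatten(order="F")

def unvec(v):
    return v.reshape(2, 2, order="F")

# hypothetical receive/transmit distortion matrices (ground truth for the demo only)
R_true = np.eye(2) + 0.1 * (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2)))
T_true = np.eye(2) + 0.1 * (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2)))

def measure(S, noise=0.0):
    # distorted polarimetric measurement M = R S T (+ optional additive noise)
    M = R_true @ S @ T_true
    return M + noise * (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2)))

# four point targets with linearly independent (known) scattering matrices
cal_targets = [np.array([[1, 0], [0, 1]], complex),    # trihedral
               np.array([[1, 0], [0, -1]], complex),   # 0-deg dihedral
               np.array([[0, 1], [1, 0]], complex),    # 45-deg dihedral
               np.array([[0, 1], [-1, 0]], complex)]   # synthetic fourth target

S_cols = np.column_stack([vec(S) for S in cal_targets])
M_cols = np.column_stack([vec(measure(S, 1e-3)) for S in cal_targets])

# least-squares estimate of the combined 4x4 distortion operator A: vec(M) = A vec(S)
A_hat, *_ = np.linalg.lstsq(S_cols.T, M_cols.T, rcond=None)
A_hat = A_hat.T

# calibrate an unknown target by applying the inverse operator to its measurement
S_unknown = np.array([[0.7, 0.2j], [0.2j, 1.1]], complex)
S_recovered = unvec(np.linalg.solve(A_hat, vec(measure(S_unknown, 1e-3))))
print(np.round(S_recovered, 3))
```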
Automatic tool alignment in a backscatter X-ray scanning system
Garretson, Justin; Hobart, Clinton G.; Gladwell, Thomas S.; Monda, Mark J.
2015-11-17
Technologies pertaining to backscatter x-ray scanning systems are described herein. The backscatter x-ray scanning system includes an x-ray source, which directs collimated x-rays along a plurality of output vectors towards a target. A detector detects diffusely reflected x-rays subsequent to respective collimated x-rays impacting the target, and outputs signals indicative of parameters of the detected x-rays. An image processing system generates an x-ray image based upon parameters of the detected x-rays, wherein each pixel in the image corresponds to a respective output vector. A user selects a particular portion of the image, and a medical device is positioned such that its directional axis is coincident with the output vector corresponding to at least one pixel in the portion of the image.
Automatic tool alignment in a backscatter x-ray scanning system
Garretson, Justin; Hobart, Clinton G.; Gladwell, Thomas S.; Monda, Mark J.
2015-06-16
Technologies pertaining to backscatter x-ray scanning systems are described herein. The backscatter x-ray scanning system includes an x-ray source, which directs collimated x-rays along a plurality of output vectors towards a target. A detector detects diffusely reflected x-rays subsequent to respective collimated x-rays impacting the target, and outputs signals indicative of parameters of the detected x-rays. An image processing system generates an x-ray image based upon parameters of the detected x-rays, wherein each pixel in the image corresponds to a respective output vector. A user selects a particular portion of the image, and a tool is positioned such that its directional axis is coincident with the output vector corresponding to at least one pixel in the portion of the image.
A quantitative framework for the forward design of synthetic miRNA circuits.
Bloom, Ryan J; Winkler, Sally M; Smolke, Christina D
2014-11-01
Synthetic genetic circuits incorporating regulatory components based on RNA interference (RNAi) have been used in a variety of systems. A comprehensive understanding of the parameters that determine the relationship between microRNA (miRNA) and target expression levels is lacking. We describe a quantitative framework supporting the forward engineering of gene circuits that incorporate RNAi-based regulatory components in mammalian cells. We developed a model that captures the quantitative relationship between miRNA and target gene expression levels as a function of parameters, including mRNA half-life and miRNA target-site number. We extended the model to synthetic circuits that incorporate protein-responsive miRNA switches and designed an optimized miRNA-based protein concentration detector circuit that noninvasively measures small changes in the nuclear concentration of β-catenin owing to induction of the Wnt signaling pathway. Our results highlight the importance of methods for guiding the quantitative design of genetic circuits to achieve robust, reliable and predictable behaviors in mammalian cells.
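As context for the kind of relationship the framework quantifies, the sketch below uses a minimal steady-state model in which basal mRNA decay is set by the half-life and miRNA-mediated decay scales with target-site number; the rate constants and values are illustrative assumptions, not the authors' fitted parameters.

```python
import numpy as np

# illustrative parameters (hypothetical, not from the paper)
k_tx = 1.0                      # transcription rate (a.u. / h)
half_lives = [0.5, 2.0, 8.0]    # target mRNA half-lives (h)
site_numbers = [1, 2, 4]        # number of miRNA target sites
k_site = 0.2                    # per-site miRNA-mediated decay constant (1/h per unit miRNA)
mirna = np.linspace(0, 10, 6)   # relative miRNA expression level

def steady_state_mrna(mi, t_half, n_sites):
    # dm/dt = k_tx - (k_deg + n_sites * k_site * miRNA) * m, solved at steady state
    k_deg = np.log(2) / t_half
    return k_tx / (k_deg + n_sites * k_site * mi)

for t_half in half_lives:
    for n in site_numbers:
        fold_repression = steady_state_mrna(0.0, t_half, n) / steady_state_mrna(mirna, t_half, n)
        print(f"t1/2={t_half:>4} h, sites={n}: fold repression vs miRNA level "
              + np.array2string(np.round(fold_repression, 2)))
```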
A real-time optical tracking and measurement processing system for flying targets.
Guo, Pengyu; Ding, Shaowen; Zhang, Hongliang; Zhang, Xiaohu
2014-01-01
Optical tracking and measurement for flying targets is unlike close-range photography under a controllable observation environment: it must cope with extreme conditions such as diverse target changes resulting from high maneuverability and long cruising range. This paper first presents the design and realization of a distributed image interpretation and measurement processing system that achieves centralized resource management, multisite simultaneous interpretation, and adaptive selection of estimation algorithms; it then proposes a real-time interpretation method that combines automatic foreground detection, online target tracking, multiple-feature location, and human guidance. An experiment was carried out to evaluate the performance and efficiency of the method using semisynthetic video. The system can be used in the field of aerospace tests for target analysis, including dynamic parameters, transient states, and optical physics characteristics, with security control.
A Real-Time Optical Tracking and Measurement Processing System for Flying Targets
Guo, Pengyu; Ding, Shaowen; Zhang, Hongliang; Zhang, Xiaohu
2014-01-01
Optical tracking and measurement for flying targets is unlike close-range photography under a controllable observation environment: it must cope with extreme conditions such as diverse target changes resulting from high maneuverability and long cruising range. This paper first presents the design and realization of a distributed image interpretation and measurement processing system that achieves centralized resource management, multisite simultaneous interpretation, and adaptive selection of estimation algorithms; it then proposes a real-time interpretation method that combines automatic foreground detection, online target tracking, multiple-feature location, and human guidance. An experiment was carried out to evaluate the performance and efficiency of the method using semisynthetic video. The system can be used in the field of aerospace tests for target analysis, including dynamic parameters, transient states, and optical physics characteristics, with security control. PMID:24987748
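A toy sketch of the "automatic foreground detection + online target tracking" idea, reduced to frame differencing against a running background estimate and centroid localization on synthetic frames; it is not the authors' pipeline, and all values are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

def synthetic_frames(n_frames=30, size=64):
    # synthetic video: a bright 3x3 "target" drifting diagonally over a noisy background
    frames = np.full((n_frames, size, size), 100.0) + 5 * rng.standard_normal((n_frames, size, size))
    for t in range(n_frames):
        r = c = 5 + t
        frames[t, r:r + 3, c:c + 3] += 80.0
    return frames

def track(frames, k_background=10, thresh=30.0):
    # foreground detection by differencing against a running background estimate,
    # followed by centroid localization of the detected pixels
    background = frames[:k_background].mean(axis=0)
    centroids = []
    for frame in frames[k_background:]:
        foreground = np.abs(frame - background) > thresh
        if foreground.any():
            rows, cols = np.nonzero(foreground)
            centroids.append((rows.mean(), cols.mean()))
        else:
            centroids.append(None)
        # slowly update the background to follow illumination drift
        background = 0.95 * background + 0.05 * frame
    return centroids

print(track(synthetic_frames())[:5])
```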
The development of high-performance alkali-hybrid polarized ³He targets for electron scattering
Singh, Jaideep T.; Dolph, Peter A.M.; Tobias, William Al; ...
2015-05-01
We present the development of high-performance polarized ³He targets for use in electron scattering experiments that utilize the technique of alkali-hybrid spin-exchange optical pumping. We include data obtained during the characterization of 24 separate target cells, each of which was constructed while preparing for one of four experiments at Jefferson Laboratory in Newport News, Virginia. The results presented here document dramatic improvement in the performance of polarized ³He targets, as well as the target properties and operating parameters that made those improvements possible. Included in our measurements were determinations of the so-called X-factors that quantify a temperature-dependent and as-yet poorly understood spin-relaxation mechanism that limits the maximum achievable ³He polarization to well under 100%. The presence of this spin-relaxation mechanism was clearly evident in our data. We also present results from a simulation of the alkali-hybrid spin-exchange optical pumping process that was developed to provide guidance in the design of these targets. Good agreement with actual performance was obtained by including details such as off-resonant optical pumping. Now benchmarked against experimental data, the simulation is useful for the design of future targets. Included in our results is a measurement of the K-³He spin-exchange rate coefficient $$k^\mathrm{K}_\mathrm{se} = \left ( 7.46 \pm 0.62 \right )\!\times\!10^{-20}\ \mathrm{cm^3/s}$$ over the temperature range 503 K to 563 K.
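For context, the role of the X-factor is often summarized with a spin-exchange rate-balance expression of the form below (a common parameterization in the spin-exchange optical pumping literature, given here as an assumption rather than the authors' exact formulation): $$P_{\mathrm{He}} = \langle P_{\mathrm{A}} \rangle\,\frac{\gamma_{\mathrm{se}}}{\gamma_{\mathrm{se}}(1+X) + \Gamma},$$ so that even in the limit of very fast spin exchange the ³He polarization saturates near $$\langle P_{\mathrm{A}} \rangle/(1+X),$$ which is why a nonzero X caps the achievable polarization well below 100%.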
Kalman filter data assimilation: targeting observations and parameter estimation.
Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex
2014-06-01
This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
Kalman filter data assimilation: Targeting observations and parameter estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bellsky, Thomas, E-mail: bellskyt@asu.edu; Kostelich, Eric J.; Mahalov, Alex
2014-06-15
This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
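A minimal sketch of the targeted-observation idea with a stochastic ensemble Kalman filter on a toy linear model: each cycle, a single observation is placed either at the state component with the largest ensemble variance or at a random component, and the analysis RMSE is compared. The model, noise levels, and ensemble size are arbitrary choices, and the scheme is a plain EnKF rather than the LETKF used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

n, n_ens, n_cycles = 10, 20, 200
M = np.diag(np.full(n - 1, 0.15), k=-1) + np.diag(np.full(n, 0.9))  # simple stable linear dynamics
obs_err = 0.1

def enkf_run(targeted):
    truth = rng.standard_normal(n)
    ens = truth[:, None] + rng.standard_normal((n, n_ens))
    errors = []
    for _ in range(n_cycles):
        # forecast step (model plus small additive noise)
        truth = M @ truth + 0.05 * rng.standard_normal(n)
        ens = M @ ens + 0.05 * rng.standard_normal((n, n_ens))
        # choose the single observed component: largest ensemble variance, or random
        j = int(np.argmax(ens.var(axis=1))) if targeted else int(rng.integers(n))
        y = truth[j] + obs_err * rng.standard_normal()
        # stochastic EnKF update with a scalar observation of component j
        X = ens - ens.mean(axis=1, keepdims=True)
        P_jj = (X[j] @ X[j]) / (n_ens - 1)
        K = (X @ X[j]) / (n_ens - 1) / (P_jj + obs_err**2)          # Kalman gain, shape (n,)
        perturbed_obs = y + obs_err * rng.standard_normal(n_ens)
        ens = ens + K[:, None] * (perturbed_obs - ens[j])[None, :]
        errors.append(np.sqrt(np.mean((ens.mean(axis=1) - truth) ** 2)))
    return np.mean(errors)

print("targeted obs RMSE :", round(enkf_run(True), 3))
print("random   obs RMSE :", round(enkf_run(False), 3))
```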
Motion prediction of a non-cooperative space target
NASA Astrophysics Data System (ADS)
Zhou, Bang-Zhao; Cai, Guo-Ping; Liu, Yun-Meng; Liu, Pan
2018-01-01
Capturing a non-cooperative space target is a tremendously challenging research topic. Effective acquisition of motion information of the space target is the premise for realizing target capture. In this paper, motion prediction of a free-floating non-cooperative target in space is studied and a motion prediction algorithm is proposed. In order to predict the motion of the free-floating non-cooperative target, dynamic parameters of the target, such as its inertia, angular momentum, and kinetic energy, must first be identified (estimated); the predicted motion of the target can then be obtained by substituting these identified parameters into Euler's equations for the target. Accurate prediction needs precise identification. This paper presents an effective method to identify these dynamic parameters of a free-floating non-cooperative target. The method is based on two steps: (1) a rough estimate of the parameters is computed from motion observations of the target, and (2) the best estimate of the parameters is found by an optimization method. In the optimization problem, the objective function is based on the difference between the observed and the predicted motion, and the interior-point method (IPM) is chosen as the optimization algorithm; it starts at the rough estimate obtained in the first step and searches for a global minimum of the objective function under the guidance of the objective function's gradient. The search is therefore fast, and an accurate identification can be obtained in time. The numerical results show that the proposed motion prediction algorithm is able to predict the motion of the target.
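A sketch of the two-step identification under stated assumptions: torque-free Euler's equations with synthetic, noisy angular-velocity observations, only the inertia ratios being identifiable from such data; a generic SciPy optimizer stands in for the paper's interior-point method, and the rough estimate is simply assumed.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

rng = np.random.default_rng(3)

def euler_rhs(t, w, I):
    # torque-free Euler's equations for a rigid body with principal inertias I
    I1, I2, I3 = I
    return [(I2 - I3) / I1 * w[1] * w[2],
            (I3 - I1) / I2 * w[2] * w[0],
            (I1 - I2) / I3 * w[0] * w[1]]

def predict(I_ratios, w0, t_eval):
    I = np.array([1.0, *I_ratios])   # only inertia ratios are observable from w(t)
    sol = solve_ivp(euler_rhs, (t_eval[0], t_eval[-1]), w0, t_eval=t_eval, args=(I,), rtol=1e-8)
    return sol.y.T

# "observed" tumbling motion of the target (synthetic, with measurement noise)
true_ratios, w0 = (1.8, 2.5), [0.3, 1.0, 0.2]
t_eval = np.linspace(0.0, 20.0, 200)
w_obs = predict(true_ratios, w0, t_eval) + 0.01 * rng.standard_normal((200, 3))

def cost(I_ratios):
    # objective: misfit between observed and predicted angular velocity histories
    return np.sum((predict(I_ratios, w0, t_eval) - w_obs) ** 2)

rough = (1.5, 2.0)                                   # step 1: rough estimate (assumed here)
best = minimize(cost, rough, method="Nelder-Mead")   # step 2: refinement (stand-in for the IPM)
print("estimated inertia ratios:", np.round(best.x, 3), " true:", true_ratios)
```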
Yates, Kimberly K.; DuFore, Christopher M.; Robbins, Lisa L.
2013-01-01
Use of different approaches for manipulating seawater chemistry during ocean acidification experiments has confounded comparison of results from various experimental studies. Some of these discrepancies have been attributed to whether addition of acid (such as hydrochloric acid, HCl) or carbon dioxide (CO2) gas has been used to adjust carbonate system parameters. Experimental simulations of carbonate system parameter scenarios for the years 1766, 2007, and 2100 were performed using the carbonate speciation program CO2SYS to demonstrate the variation in seawater chemistry that can result from use of these approaches. Results showed that carbonate system parameters were 3 percent and 8 percent lower than target values in closed-system acid additions, and 1 percent and 5 percent higher in closed-system CO2 additions for the 2007 and 2100 simulations, respectively. Open-system simulations showed that carbonate system parameters can deviate by up to 52 percent to 70 percent from target values in both acid addition and CO2 addition experiments. Results from simulations for the year 2100 were applied to empirically derived equations that relate biogenic calcification to carbonate system parameters for calcifying marine organisms including coccolithophores, corals, and foraminifera. Calculated calcification rates for coccolithophores, corals, and foraminifera differed from rates at target conditions by 0.5 percent to 2.5 percent in closed-system CO2 gas additions, from 0.8 percent to 15 percent in the closed-system acid additions, from 4.8 percent to 94 percent in open-system acid additions, and from 7 percent to 142 percent in open-system CO2 additions.
Automatic parameter selection for feature-based multi-sensor image registration
NASA Astrophysics Data System (ADS)
DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan
2006-05-01
Accurate image registration is critical for applications such as precision targeting, geo-location, change-detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene and sensor dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.
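A compact sketch of the Yitzhaky-and-Peli-style selection loop described above, reduced to a single-threshold detector on a synthetic image: sweep the parameter, build an estimated ground truth by majority vote across the sweep, compute an ROC point per setting, and keep the setting closest to the ideal corner. All images and parameter ranges are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# toy image with a bright square "feature" on a noisy background
image = rng.normal(0.0, 1.0, (128, 128))
image[40:80, 40:80] += 2.0

# parameter sweep: here a single threshold parameter of a trivial detector
thresholds = np.linspace(0.5, 3.0, 11)
detections = np.stack([image > t for t in thresholds])        # (n_params, H, W) binary maps

# estimated ground truth: pixels flagged by at least half of the parameter settings
est_truth = detections.mean(axis=0) >= 0.5

def roc_point(det, truth):
    tp = np.logical_and(det, truth).sum()
    fp = np.logical_and(det, ~truth).sum()
    return fp / (~truth).sum(), tp / truth.sum()              # (FPR, TPR)

# pick the parameter whose ROC point is closest to the ideal corner (FPR=0, TPR=1)
points = np.array([roc_point(d, est_truth) for d in detections])
best = np.argmin(points[:, 0] ** 2 + (1.0 - points[:, 1]) ** 2)
print("selected threshold:", thresholds[best], " ROC point:", np.round(points[best], 3))
```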
"Second Chance": Some Theoretical and Empirical Remarks.
ERIC Educational Resources Information Center
Inbar, Dan E.; Sever, Rita
1986-01-01
Presents a conceptual framework of second-chance systems analyzable in terms of several basic parameters (targeted population, declared goals, processes, options for students, evaluation criteria, and implications for the regular system). Uses this framework to analyze an Israeli external high school, the subject of a large-scale study. Includes 3…
Progress on LMJ targets for ignition
NASA Astrophysics Data System (ADS)
Cherfils-Clérouin, C.; Boniface, C.; Bonnefille, M.; Fremerye, P.; Galmiche, D.; Gauthier, P.; Giorla, J.; Lambert, F.; Laffite, S.; Liberatore, S.; Loiseau, P.; Malinie, G.; Masse, L.; Masson-Laborde, P. E.; Monteil, M. C.; Poggi, F.; Seytor, P.; Wagon, F.; Willien, J. L.
2010-08-01
Targets designed to produce ignition on the Laser MegaJoule (LMJ) are presented. The LMJ experimental plans include attempting ignition and burn of an ICF capsule with 160 laser beams, delivering up to 1.4 MJ and 380 TW. New targets that require reduced laser energy, with only a small decrease in robustness, have therefore been designed for this purpose. Working specifically on the coupling efficiency parameter, i.e. the ratio of the energy absorbed by the capsule to the laser energy, has led to the design of a rugby-shaped cocktail hohlraum. 1D and 2D robustness evaluations of these different targets shed light on critical points for ignition, which can be traded off by tightening some specifications or by preliminary experimental and numerical tuning campaigns.
NASA Astrophysics Data System (ADS)
Guadagnini, A.; Riva, M.; Dell'Oca, A.
2017-12-01
We propose to ground sensitivity of uncertain parameters of environmental models on a set of indices based on the main (statistical) moments, i.e., mean, variance, skewness and kurtosis, of the probability density function (pdf) of a target model output. This enables us to perform Global Sensitivity Analysis (GSA) of a model in terms of multiple statistical moments and yields a quantification of the impact of model parameters on features driving the shape of the pdf of model output. Our GSA approach includes the possibility of being coupled with the construction of a reduced complexity model that allows approximating the full model response at a reduced computational cost. We demonstrate our approach through a variety of test cases. These include a commonly used analytical benchmark, a simplified model representing pumping in a coastal aquifer, a laboratory-scale tracer experiment, and the migration of fracturing fluid through a naturally fractured reservoir (source) to reach an overlying formation (target). Our strategy allows discriminating the relative importance of model parameters to the four statistical moments considered. We also provide an appraisal of the error associated with the evaluation of our sensitivity metrics by replacing the original system model through the selected surrogate model. Our results suggest that one might need to construct a surrogate model with increasing level of accuracy depending on the statistical moment considered in the GSA. The methodological framework we propose can assist the development of analysis techniques targeted to model calibration, design of experiment, uncertainty quantification and risk assessment.
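A minimal sketch of moment-based sensitivity indices, assuming a simple binning estimator: for each parameter, the conditional mean, variance, skewness, and kurtosis of the output are computed over parameter bins and compared with their unconditional values (the normalization used in the published indices is omitted here for simplicity). The test function and sample sizes are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def model(x):
    # analytical benchmark-style test function (Ishigami-like, for illustration only)
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

n_samples, n_bins = 20000, 20
X = rng.uniform(-np.pi, np.pi, size=(n_samples, 3))
y = model(X)

moments = {"mean": np.mean, "variance": np.var,
           "skewness": stats.skew, "kurtosis": stats.kurtosis}

# moment-based sensitivity: average absolute change of each statistical moment of the
# output when a single parameter is conditioned to lie in a bin of its range
for name, fn in moments.items():
    uncond = fn(y)
    indices = []
    for j in range(X.shape[1]):
        bins = np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1))
        which = np.clip(np.digitize(X[:, j], bins) - 1, 0, n_bins - 1)
        cond = np.array([fn(y[which == b]) for b in range(n_bins)])
        indices.append(np.mean(np.abs(cond - uncond)))
    print(f"{name:9s} sensitivities:", np.round(indices, 3))
```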
A Parametric Study on Using Active Debris Removal to Stabilize the Future LEO Debris Environment
NASA Technical Reports Server (NTRS)
Liou, J.C.
2010-01-01
Recent analyses of the instability of the orbital debris population in the low Earth orbit (LEO) region and the collision between Iridium 33 and Cosmos 2251 have reignited interest in using active debris removal (ADR) to remediate the environment. There are, however, monumental technical, resource, operational, legal, and political challenges in making economically viable ADR a reality. Before a consensus on the need for ADR can be reached, a careful analysis of the effectiveness of ADR must be conducted. The goal is to demonstrate the feasibility of using ADR to preserve the future environment and to guide its implementation to maximize the benefit-to-cost ratio. This paper describes a comprehensive sensitivity study on using ADR to stabilize the future LEO debris environment. The NASA long-term orbital debris evolutionary model, LEGEND, is used to quantify the effects of many key parameters. These parameters include (1) the starting epoch of ADR implementation, (2) various target selection criteria, (3) the benefits of collision avoidance maneuvers, (4) the consequence of targeting specific inclination or altitude regimes, (5) the consequence of targeting specific classes of vehicles, and (6) the timescale of removal. Additional analyses on the importance of postmission disposal and how future launches might affect the requirements to stabilize the environment are also included.
Optical model with multiple band couplings using soft rotator structure
NASA Astrophysics Data System (ADS)
Martyanov, Dmitry; Soukhovitskii, Efrem; Capote, Roberto; Quesada, Jose Manuel; Chiba, Satoshi
2017-09-01
A new dispersive coupled-channel optical model (DCCOM) is derived that describes nucleon scattering on 238U and 232Th targets using a soft-rotator-model (SRM) description of the collective levels of the target nucleus. SRM Hamiltonian parameters are adjusted to the observed collective levels of the target nucleus. SRM nuclear wave functions (mixed in the K quantum number) have been used to calculate coupling matrix elements of the generalized optical model. Five rotational bands are coupled: the ground-state band, the β, γ, and non-axial bands, and a negative-parity band. Such a coupling scheme includes almost all levels below 1.2 MeV of excitation energy of the targets. The "effective" deformations that define inter-band couplings are derived from SRM Hamiltonian parameters. Conservation of nuclear volume is enforced by introducing a monopolar deformed potential, leading to additional couplings between rotational bands. The present DCCOM describes the total cross section differences between 238U and 232Th targets within experimental uncertainty from 50 keV up to 200 MeV of neutron incident energy. SRM couplings and volume conservation allow a precise calculation of the compound-nucleus (CN) formation cross section, which is significantly different from the one calculated with rigid-rotor potentials with any number of coupled levels.
Visual detection following retinal damage: predictions of an inhomogeneous retino-cortical model
NASA Astrophysics Data System (ADS)
Arnow, Thomas L.; Geisler, Wilson S.
1996-04-01
A model of human visual detection performance has been developed, based on available anatomical and physiological data for the primate visual system. The inhomogeneous retino-cortical (IRC) model computes detection thresholds by comparing simulated neural responses to target patterns with responses to a uniform background of the same luminance. The model incorporates human ganglion cell sampling distributions; macaque monkey ganglion cell receptive field properties; macaque cortical cell contrast nonlinearities; and an optimal decision rule based on ideal observer theory. Spatial receptive field properties of cortical neurons were not included. Two parameters were allowed to vary while minimizing the squared error between predicted and observed thresholds. One parameter was decision efficiency; the other was the relative strength of the ganglion-cell center and surround. The latter was only allowed to vary within a small range consistent with known physiology. Contrast sensitivity was measured for sine-wave gratings as a function of spatial frequency, target size and eccentricity. Contrast sensitivity was also measured for an airplane target as a function of target size, with and without artificial scotomas. The results of these experiments, as well as contrast sensitivity data from the literature, were compared to predictions of the IRC model. Predictions were reasonably good for grating and airplane targets.
NASA Astrophysics Data System (ADS)
Guex, Guillaume
2016-05-01
In recent articles about graphs, different models have proposed a formalism to find a type of path between two nodes, the source and the target, at a crossroads between the shortest path and the random-walk path. These models include a freely adjustable parameter, allowing the behavior of the path to be tuned toward randomized movements or direct routes. This article presents a natural generalization of these models, namely a model with multiple sources and targets. In this context, source nodes can be viewed as locations with a supply of a certain good (e.g. people, money, information) and target nodes as locations with a demand for the same good. An algorithm is constructed to display the flow of goods in the network between sources and targets. Again with a freely adjustable parameter, this flow can be tuned to follow routes of minimum cost, thus displaying the flow in the context of the optimal transportation problem, or, by contrast, a random flow, known to be similar to the electrical current flow if the random walk is reversible. Moreover, a source-target coupling can be retrieved from this flow, offering an optimal assignment for the transportation problem. This algorithm is described in the first part of this article and then illustrated with case studies.
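For the minimum-cost limit of the flow described above (the optimal-transportation end of the parameter range; the randomized interpolation itself is not reproduced here), a standard linear-programming sketch of the transportation problem is shown below with made-up supplies, demands, and Euclidean costs.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)

# random locations for sources (supply) and targets (demand) in the unit square
sources, targets = rng.random((3, 2)), rng.random((4, 2))
supply = np.array([5.0, 3.0, 2.0])
demand = np.array([2.0, 2.0, 3.0, 3.0])          # total supply == total demand

cost = np.linalg.norm(sources[:, None, :] - targets[None, :, :], axis=2)  # Euclidean costs

n_s, n_t = cost.shape
# equality constraints: each source ships exactly its supply, each target receives its demand
A_eq = np.zeros((n_s + n_t, n_s * n_t))
for i in range(n_s):
    A_eq[i, i * n_t:(i + 1) * n_t] = 1.0          # row sums = supply
for j in range(n_t):
    A_eq[n_s + j, j::n_t] = 1.0                   # column sums = demand
b_eq = np.concatenate([supply, demand])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print("optimal coupling (flow from each source to each target):")
print(np.round(res.x.reshape(n_s, n_t), 2))
```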
NASA Astrophysics Data System (ADS)
Dasher, D. H.; Lomax, T. J.; Bethe, A.; Jewett, S.; Hoberg, M.
2016-02-01
A regional probabilistic survey of 20 randomly selected stations, where water and sediments were sampled, was conducted in 2014 over an area of Simpson Lagoon and Gwydyr Bay in the Beaufort Sea adjacent to Prudhoe Bay, Alaska. Sampling parameters included water-column temperature, salinity, dissolved oxygen, chlorophyll a, and nutrients, and sediment macroinvertebrates, chemistry (i.e., trace metals and hydrocarbons), and grain size. The 2014 probabilistic survey design allows inferences to be made about environmental status, for instance the spatial or areal distribution of sediment trace metals within the design area sampled. Historically, since the 1970s, a number of monitoring studies have been conducted in this estuary area using a targeted rather than a regional probabilistic design. Targeted non-random designs were utilized to assess specific points of interest and cannot be used to make inferences about distributions of environmental parameters. Due to differences in the environmental monitoring objectives between probabilistic and targeted designs, there has been limited assessment of whether benefits exist in combining the two approaches. This study evaluates whether a combined approach using the 2014 probabilistic survey sediment trace metal and macroinvertebrate results and historical targeted monitoring data can provide a new perspective on better understanding the environmental status of these estuaries.
Predictors of pneumothorax following endoscopic valve therapy in patients with severe emphysema.
Gompelmann, Daniela; Lim, Hyun-Ju; Eberhardt, Ralf; Gerovasili, Vasiliki; Herth, Felix Jf; Heussel, Claus Peter; Eichinger, Monika
2016-01-01
Endoscopic valve implantation is an effective treatment for patients with advanced emphysema. Despite the minimally invasive procedure, valve placement is associated with risks, the most common of which is pneumothorax. This study was designed to identify predictors of pneumothorax following endoscopic valve implantation. Preinterventional clinical measures (vital capacity, forced expiratory volume in 1 second, residual volume, total lung capacity, 6-minute walk test), qualitative computed tomography (CT) parameters (fissure integrity, blebs/bulla, subpleural nodules, pleural adhesions, partial atelectasis, fibrotic bands, emphysema type) and quantitative CT parameters (volume and low attenuation volume of the target lobe and the ipsilateral untreated lobe, target air trapping, ipsilateral lobe volume/hemithorax volume, collapsibility of the target lobe and the ipsilateral untreated lobe) were retrospectively evaluated in patients who underwent endoscopic valve placement (n=129). Regression analysis was performed to compare those who developed pneumothorax following valve therapy (n=46) with those who developed target lobe volume reduction without pneumothorax (n=83). Low attenuation volume% of ipsilateral untreated lobe (odds ratio [OR] =1.08, P=0.001), ipsilateral untreated lobe volume/hemithorax volume (OR =0.93, P=0.017), emphysema type (OR =0.26, P=0.018), pleural adhesions (OR =0.33, P=0.012) and residual volume (OR =1.58, P=0.012) were found to be significant predictors of pneumothorax. Fissure integrity (OR =1.16, P=0.075) and 6-minute walk test (OR =1.05, P=0.077) were also indicative of pneumothorax. The model including the aforementioned parameters predicted whether a patient would experience a pneumothorax 84% of the time (area under the curve =0.84). Clinical and CT parameters provide a promising tool to effectively identify patients at high risk of pneumothorax following endoscopic valve therapy.
Predictors of pneumothorax following endoscopic valve therapy in patients with severe emphysema
Gompelmann, Daniela; Lim, Hyun-ju; Eberhardt, Ralf; Gerovasili, Vasiliki; Herth, Felix JF; Heussel, Claus Peter; Eichinger, Monika
2016-01-01
Background Endoscopic valve implantation is an effective treatment for patients with advanced emphysema. Despite the minimally invasive procedure, valve placement is associated with risks, the most common of which is pneumothorax. This study was designed to identify predictors of pneumothorax following endoscopic valve implantation. Methods Preinterventional clinical measures (vital capacity, forced expiratory volume in 1 second, residual volume, total lung capacity, 6-minute walk test), qualitative computed tomography (CT) parameters (fissure integrity, blebs/bulla, subpleural nodules, pleural adhesions, partial atelectasis, fibrotic bands, emphysema type) and quantitative CT parameters (volume and low attenuation volume of the target lobe and the ipsilateral untreated lobe, target air trapping, ipsilateral lobe volume/hemithorax volume, collapsibility of the target lobe and the ipsilateral untreated lobe) were retrospectively evaluated in patients who underwent endoscopic valve placement (n=129). Regression analysis was performed to compare those who developed pneumothorax following valve therapy (n=46) with those who developed target lobe volume reduction without pneumothorax (n=83). Finding Low attenuation volume% of ipsilateral untreated lobe (odds ratio [OR] =1.08, P=0.001), ipsilateral untreated lobe volume/hemithorax volume (OR =0.93, P=0.017), emphysema type (OR =0.26, P=0.018), pleural adhesions (OR =0.33, P=0.012) and residual volume (OR =1.58, P=0.012) were found to be significant predictors of pneumothorax. Fissure integrity (OR =1.16, P=0.075) and 6-minute walk test (OR =1.05, P=0.077) were also indicative of pneumothorax. The model including the aforementioned parameters predicted whether a patient would experience a pneumothorax 84% of the time (area under the curve =0.84). Interpretation Clinical and CT parameters provide a promising tool to effectively identify patients at high risk of pneumothorax following endoscopic valve therapy. PMID:27536088
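A generic sketch of the kind of multivariable model reported above, assuming synthetic stand-in data: logistic regression on a handful of predictors followed by an in-sample AUC, with effect directions chosen only for illustration and no connection to the study's actual measurements.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)

# synthetic stand-ins for the reported predictors (values and effect sizes are illustrative):
# LAV% of the ipsilateral untreated lobe, ipsilateral lobe volume / hemithorax volume,
# heterogeneous emphysema (0/1), pleural adhesions (0/1), and residual volume (L)
n = 129
X = np.column_stack([
    rng.normal(40, 15, n),
    rng.normal(0.5, 0.1, n),
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
    rng.normal(5.0, 1.2, n),
])
logit = (0.06 * (X[:, 0] - 40) - 5 * (X[:, 1] - 0.5) - 1.0 * X[:, 2]
         - 1.0 * X[:, 3] + 0.4 * (X[:, 4] - 5) - 0.6)
y = rng.random(n) < 1 / (1 + np.exp(-logit))      # pneumothorax yes/no (synthetic outcome)

model = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print("odds ratios:", np.round(np.exp(model.coef_[0]), 2), " in-sample AUC:", round(auc, 2))
```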
Synthetic aperture radar operator tactical target acquisition research
NASA Technical Reports Server (NTRS)
Hershberger, M. L.; Craig, D. W.
1978-01-01
A radar target acquisition research study was conducted to assess the effects of two levels of 13 radar sensor, display, and mission parameters on operator tactical target acquisition. A saturated fractional-factorial screening design was employed to examine these parameters. Data analysis computed eta-squared (η²) values for main and second-order effects for the variables tested. Ranking of the research parameters in terms of importance to system design revealed that four variables (radar coverage, radar resolution/multiple looks, display resolution, and display size) accounted for 50 percent of the target acquisition probability variance.
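For reference, eta squared for a two-level factor is simply the between-level sum of squares divided by the total sum of squares; the sketch below computes it on a synthetic screening-style data set with hypothetical effect sizes.

```python
import numpy as np

rng = np.random.default_rng(8)

# synthetic two-level screening data (illustrative): acquisition probability per run,
# with factor columns coded -1/+1 as in a fractional-factorial design
n_runs, n_factors = 64, 6
design = rng.choice([-1.0, 1.0], size=(n_runs, n_factors))
effects_true = np.array([0.10, 0.08, 0.05, 0.04, 0.0, 0.0])     # hypothetical main effects
y = 0.6 + design @ effects_true + 0.05 * rng.standard_normal(n_runs)

def eta_squared(y, factor):
    # eta squared = SS_between / SS_total for a single two-level factor
    grand = y.mean()
    ss_total = np.sum((y - grand) ** 2)
    ss_between = sum(len(y[factor == lvl]) * (y[factor == lvl].mean() - grand) ** 2
                     for lvl in (-1.0, 1.0))
    return ss_between / ss_total

for j in range(n_factors):
    print(f"factor {j + 1}: eta^2 = {eta_squared(y, design[:, j]):.3f}")
```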
Stochastic inversion of cross-borehole radar data from metalliferous vein detection
NASA Astrophysics Data System (ADS)
Zeng, Zhaofa; Huai, Nan; Li, Jing; Zhao, Xueyu; Liu, Cai; Hu, Yingsa; Zhang, Ling; Hu, Zuzhi; Yang, Hui
2017-12-01
In the exploration and evaluation of metalliferous veins with a cross-borehole radar system, traditional linear inversion methods (least squares inversion, LSQR) obtain only indirect parameters (permittivity, resistivity, or velocity) with which to estimate the target structure; they cannot accurately reflect the geological parameters of the metalliferous veins' media properties. In order to obtain the intrinsic geological parameters and their internal distribution, in this paper we build a metalliferous vein model based on stochastic effective medium theory and carry out stochastic inversion and parameter estimation based on a Monte Carlo sampling algorithm. Compared with conventional LSQR, the stochastic inversion yields higher-resolution permittivity and velocity estimates of the target body, allowing more accurate estimation of the anomaly distribution and the target's internal parameters. It provides a new research approach for evaluating the properties of complex target media.
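A minimal sketch of the stochastic-inversion step, assuming a toy straight-ray forward model and a Metropolis random walk over two slowness parameters; the geometry, noise level, and parameter values are invented for illustration and are not the paper's effective-medium model.

```python
import numpy as np

rng = np.random.default_rng(9)

# hypothetical forward model: straight-ray crosshole travel times through a background
# medium containing a vein-like anomaly; m = (background slowness, anomaly slowness)
ray_lengths_bg = rng.uniform(4.0, 8.0, 30)       # path length in background (m)
ray_lengths_an = rng.uniform(0.0, 1.5, 30)       # path length crossing the anomaly (m)

def forward(m):
    s_bg, s_an = m
    return ray_lengths_bg * s_bg + ray_lengths_an * s_an      # travel times (ns)

m_true = np.array([10.0, 18.0])                  # slownesses in ns/m (illustrative values)
sigma = 1.0
data = forward(m_true) + sigma * rng.standard_normal(30)

def log_post(m):
    if np.any(m <= 0):                           # flat prior on positive slowness
        return -np.inf
    return -0.5 * np.sum((forward(m) - data) ** 2) / sigma**2

# Metropolis sampling of the posterior (the "stochastic inversion" step)
m, samples = np.array([8.0, 8.0]), []
lp = log_post(m)
step = np.array([0.05, 0.5])                     # proposal scales chosen by hand
for _ in range(20000):
    prop = m + step * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        m, lp = prop, lp_prop
    samples.append(m)
samples = np.array(samples[5000:])               # discard burn-in
print("posterior mean:", np.round(samples.mean(axis=0), 2), " true:", m_true)
```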
Data communications in a parallel active messaging interface of a parallel computer
Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E
2013-10-29
Data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the parallel computer including a plurality of compute nodes that execute a parallel application, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources, including receiving in an origin endpoint of the PAMI a data communications instruction, the instruction characterized by an instruction type, the instruction specifying a transmission of transfer data from the origin endpoint to a target endpoint and transmitting, in accordance with the instruction type, the transfer data from the origin endpoint to the target endpoint.
Target-classification approach applied to active UXO sites
NASA Astrophysics Data System (ADS)
Shubitidze, F.; Fernández, J. P.; Shamatava, Irma; Barrowes, B. E.; O'Neill, K.
2013-06-01
This study is designed to illustrate the discrimination performance at two UXO active sites (Oklahoma's Fort Sill and the Massachusetts Military Reservation) of a set of advanced electromagnetic induction (EMI) inversion/discrimination models which include the orthonormalized volume magnetic source (ONVMS), joint diagonalization (JD), and differential evolution (DE) approaches and whose power and flexibility greatly exceed those of the simple dipole model. The Fort Sill site is highly contaminated by a mix of the following types of munitions: 37-mm target practice tracers, 60-mm illumination mortars, 75-mm and 4.5'' projectiles, 3.5'', 2.36'', and LAAW rockets, antitank mine fuzes with and without hex nuts, practice MK2 and M67 grenades, 2.5'' ballistic windshields, M2A1-mines with/without bases, M19-14 time fuzes, and 40-mm practice grenades with/without cartridges. The MMR site contains targets of yet other sizes. In this work we apply our models to EMI data collected using the MetalMapper (MM) and 2 × 2 TEMTADS sensors. The data for each anomaly are inverted to extract estimates of the extrinsic and intrinsic parameters associated with each buried target. (The latter include the total volume magnetic source or NVMS, which relates to size, shape, and material properties; the former include location, depth, and orientation.) The estimated intrinsic parameters are then used for classification performed via library matching and the use of statistical classification algorithms; this process yielded prioritized dig-lists that were submitted to the Institute for Defense Analyses (IDA) for independent scoring. The models' classification performance is illustrated and assessed based on these independent evaluations.
Emergence and Utility of Nonspherical Particles in Biomedicine
Fish, Margaret B.; Thompson, Alex J.; Fromen, Catherine A.; Eniola-Adefeso, Omolola
2016-01-01
The importance of the size of targeted, spherical drug carriers has been previously explored and reviewed. Particle shape has emerged as an equally important parameter in determining the in vivo journey and efficiency of drug carrier systems. Researchers have invented techniques to better control the geometry of particles of many different materials, which have allowed for exploration of the role of particle geometry in the phases of drug delivery. The important biological processes include clearance by the immune system, trafficking to the target tissue, margination to the endothelial surface, interaction with the target cell, and controlled release of a payload. The review of current literature herein supports that particle shape can be altered to improve a system’s targeting efficiency. Non-spherical particles can harness the potential of targeted drug carriers by enhancing targeted site accumulation while simultaneously decreasing side effects and mitigating some limitations faced by spherical carriers. PMID:27182109
Cereal transformation through particle bombardment
NASA Technical Reports Server (NTRS)
Casas, A. M.; Kononowicz, A. K.; Bressan, R. A.; Hasegawa, P. M.; Mitchell, C. A. (Principal Investigator)
1995-01-01
The review focuses on experiments that lead to stable transformation in cereals using microprojectile bombardment. The discussion of biological factors that affect transformation examines target tissues and vector systems for gene transfer. The vector systems include reporter genes, selectable markers, genes of agronomic interest, and vector constructions. Other topics include physical parameters that affect DNA delivery, selection of stably transformed cells and plant regeneration, and analysis of gene expression and transmission to the progeny.
Investigation of Polarization Phase Difference Related to Forest Fields Characterizations
NASA Astrophysics Data System (ADS)
Majidi, M.; Maghsoudi, Y.
2013-09-01
The information content of Synthetic Aperture Radar (SAR) data is contained to a significant extent in the radiometric polarization channels; hence polarimetric SAR data should be analyzed in relation to target structure. The importance of the phase difference between the two co-polarized scattered signals has been recognized in geophysical remote sensing because of the possible association between biophysical parameters and the measured Polarization Phase Difference (PPD) statistics of the recorded backscattered signal components. This paper examines phase-difference statistics from two Radarsat-2 images to assess the feasibility of relating them to the physical properties of scattering targets and to understand the relevance of PPD statistics to various types of forest fields. The effect of incidence-angle variation on PPD statistics is also investigated. The experimental forest stands used in this research comprise white pine (Pinus strobus L.), red pine (Pinus resinosa Ait.), jack pine (Pinus banksiana Lamb.), white spruce (Picea glauca (Moench) Voss), black spruce (Picea mariana (Mill.) B.S.P.), poplar (Populus L.), red oak (Quercus rubra L.), aspen, and ground vegetation. The experimental results show that, although the biophysical parameters vary widely, the PPD statistics are almost the same. Forest fields, as distributed targets, have PPD distributions with means close to zero regardless of the incidence angle. The PPD distribution is a function of both target and sensor parameters, but for a more thorough examination of PPD statistics the observations should be made in the leaf-off season or in bands with lower frequencies.
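For reference, the co-polarized phase difference per pixel is the argument of S_HH times the complex conjugate of S_VV; the sketch below generates correlated synthetic speckle with an assumed coherence and mean PPD and then summarizes the PPD distribution, purely as an illustration of the quantity analyzed above.

```python
import numpy as np

rng = np.random.default_rng(10)

def copol_phase_difference(S_hh, S_vv):
    # co-polarized phase difference (PPD) per pixel, in degrees
    return np.degrees(np.angle(S_hh * np.conj(S_vv)))

# synthetic single-look complex HH/VV samples for a distributed forest-like target:
# correlated circular Gaussian speckle with a small mean phase offset (illustrative only)
n = 100000
z = (rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))) / np.sqrt(2)
rho, mean_ppd = 0.6, np.deg2rad(5.0)             # assumed coherence and mean PPD
S_hh = z[0]
S_vv = rho * np.exp(-1j * mean_ppd) * z[0] + np.sqrt(1 - rho**2) * z[1]

ppd = copol_phase_difference(S_hh, S_vv)
hist, edges = np.histogram(ppd, bins=36, range=(-180, 180), density=True)
circ_mean = np.degrees(np.angle(np.mean(S_hh * np.conj(S_vv))))   # circular mean PPD
print("circular mean PPD (deg): %.1f" % circ_mean)
```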
Vancomycin Dosing in Obese Patients: Special Considerations and Novel Dosing Strategies.
Durand, Cheryl; Bylo, Mary; Howard, Brian; Belliveau, Paul
2018-06-01
To review the literature regarding vancomycin pharmacokinetics in obese patients and strategies used to improve dosing in this population. PubMed, EMBASE (1974 to November 2017), and Google Scholar searches were conducted using the search terms vancomycin, obese, obesity, pharmacokinetics, strategy, and dosing. Additional articles were selected from reference lists of selected studies. Included articles were those published in English with a primary focus on vancomycin pharmacokinetic parameters in obese patients and practical vancomycin dosing strategies, clinical experiences, or challenges of dosing vancomycin in this population. Volume of distribution and clearance are the pharmacokinetic parameters that most often affect vancomycin dosing in obese patients; both are increased in this population. Challenges with dosing in obese patients include inconsistent and inadequate dosing, observations that the obese population may not be homogeneous, and reports of an increased likelihood of supratherapeutic trough concentrations. Investigators have revised and developed dosing and monitoring protocols to address these challenges. These approaches improved target trough attainment to varying degrees. Some of the vancomycin dosing approaches provided promising results in obese patients, but there were notable differences in methods used to develop these approaches, and sample sizes were small. Although some approaches can be considered for validation in individual institutions, further research is warranted. This may include validating approaches in larger populations with narrower obesity severity ranges, investigating target attainment in indication-specific target ranges, and evaluating the impact of different dosing weights and methods of creatinine clearance calculation.
A growing social network model in geographical space
NASA Astrophysics Data System (ADS)
Antonioni, Alberto; Tomassini, Marco
2017-09-01
In this work we propose a new model for the generation of social networks that includes their often ignored spatial aspects. The model is a growing one and links are created either taking space into account, or disregarding space and only considering the degree of target nodes. These two effects can be mixed linearly in arbitrary proportions through a parameter. We numerically show that for a given range of the combination parameter, and for given mean degree, the generated network class shares many important statistical features with those observed in actual social networks, including the spatial dependence of connections. Moreover, we show that the model provides a good qualitative fit to some measured social networks.
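A sketch of one possible mixing rule consistent with the description above (the exact attachment kernel is an assumption, not the authors'): each new node attaches to a few existing nodes with a probability that linearly combines a distance-based term and a degree-based term through a parameter alpha.

```python
import numpy as np

rng = np.random.default_rng(11)

def grow_network(n_nodes=500, m=2, alpha=0.5):
    # growing spatial network: each new node attaches to m existing nodes chosen with a
    # probability that linearly mixes a space-based term (closer is likelier) and a
    # degree-based (preferential attachment) term, weighted by alpha (illustrative kernel)
    pos = rng.random((n_nodes, 2))
    degree = np.zeros(n_nodes)
    edges = [(0, 1)]
    degree[0] = degree[1] = 1
    for new in range(2, n_nodes):
        existing = np.arange(new)
        dist = np.linalg.norm(pos[existing] - pos[new], axis=1)
        p_space = 1.0 / (dist + 1e-3)
        p_degree = degree[existing] + 1.0
        p = (1 - alpha) * p_space / p_space.sum() + alpha * p_degree / p_degree.sum()
        chosen = rng.choice(existing, size=min(m, new), replace=False, p=p / p.sum())
        for t in chosen:
            edges.append((new, int(t)))
            degree[new] += 1
            degree[t] += 1
    return pos, edges, degree

pos, edges, degree = grow_network(alpha=0.3)
lengths = [np.linalg.norm(pos[a] - pos[b]) for a, b in edges]
print("mean degree: %.2f   mean edge length: %.3f" % (degree.mean(), np.mean(lengths)))
```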
Intrinsic thermodynamics of ethoxzolamide inhibitor binding to human carbonic anhydrase XIII
2012-01-01
Background Human carbonic anhydrases (CAs) play a crucial role in various physiological processes, including carbon dioxide and bicarbonate transport, acid homeostasis, and biosynthetic reactions, and in various pathological processes, especially tumor progression. Therefore, CAs are interesting targets for pharmaceutical research. The structure-activity relationships (SAR) of designed inhibitors require detailed thermodynamic and structural characterization of the binding reaction. Unfortunately, most publications list only the observed thermodynamic parameters, which are significantly different from the intrinsic parameters. However, only intrinsic parameters can be used in the rational design and SAR of novel compounds. Results Intrinsic binding parameters for several inhibitors, including ethoxzolamide, trifluoromethanesulfonamide, and acetazolamide, binding to recombinant human CA XIII isozyme were determined. The parameters were the intrinsic Gibbs free energy, enthalpy, entropy, and heat capacity. They were determined by titration calorimetry and thermal shift assay over a wide pH and temperature range to dissect all linked protonation reaction contributions. Conclusions Precise determination of the inhibitor binding thermodynamics enabled correct intrinsic affinity and enthalpy ranking of the compounds and provided the means for SAR analysis of other rationally designed CA inhibitors. PMID:22676044
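For context, observed and intrinsic affinities in this setting are commonly related through the fractions of the binding-competent forms (deprotonated sulfonamide and the Zn-bound-water form of the enzyme); the expression below is a generic statement of that linked-protonation correction, given as an assumption rather than the authors' exact treatment: $$K_{d}^{\mathrm{intr}} = f_{\mathrm{RSO_2NH^-}}\; f_{\mathrm{Zn\text{-}H_2O}}\; K_{d}^{\mathrm{obs}},$$ where the fractions depend on pH and the relevant pKa values; the intrinsic enthalpy is then obtained by subtracting the linked protonation enthalpies (buffer and ionizable groups) from the observed enthalpy.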
NASA Astrophysics Data System (ADS)
Barbarossa, S.; Farina, A.
A novel scheme for detecting moving targets with synthetic aperture radar (SAR) is presented. The proposed approach is based on the use of the Wigner-Ville distribution (WVD) for simultaneously detecting moving targets and estimating their kinematic motion parameters. The estimation plays a key role in focusing the target and correctly locating it with respect to the stationary background. The method has a number of advantages: (i) the detection is performed efficiently on the samples in the time-frequency domain provided by the WVD, without resorting to a bank of filters, each matched to possible values of the unknown target motion parameters; (ii) the target motion parameters can be estimated in the same time-frequency domain by locating the line along which the maximum energy of the WVD is concentrated. A validation of the approach is given by both analytical and simulation means. In addition, the estimation of the target kinematic parameters and the corresponding image focusing are also demonstrated.
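A sketch of the time-frequency idea under stated assumptions: a noisy linear-FM echo is analyzed with a discrete pseudo Wigner-Ville distribution, and the line of maximum energy is located by a per-sample ridge search followed by a straight-line fit, recovering the chirp rate that stands in for the unknown kinematic parameter. The signal parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(12)

def pseudo_wvd(x):
    # discrete pseudo Wigner-Ville distribution: for each time sample n, FFT the
    # instantaneous autocorrelation r[m] = x[n+m] * conj(x[n-m]) over the lag m.
    # Note the factor-of-two frequency scaling inherent to this lag kernel.
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        m_max = min(n, N - 1 - n)
        r = np.zeros(N, dtype=complex)
        for m in range(-m_max, m_max + 1):
            r[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.real(np.fft.fft(r))
    return W

# moving-target-like echo: a noisy linear FM (chirp) whose chirp rate encodes the
# unknown radial acceleration (all values illustrative)
N, f0, rate = 256, 0.05, 0.0005            # start frequency and chirp rate (cycles/sample^2)
n = np.arange(N)
signal = np.exp(2j * np.pi * (f0 * n + 0.5 * rate * n**2)) + 0.5 * (
    rng.standard_normal(N) + 1j * rng.standard_normal(N))

W = pseudo_wvd(signal)
# the target shows up as a line of concentrated energy; estimate its slope by fitting a
# straight line to the per-sample ridge (bin of maximum energy), then undo the x2 scaling
valid = n[16:-16]                           # avoid edges where few lags are available
ridge_bins = W[valid].argmax(axis=1)
ridge_freq = ridge_bins / N / 2.0           # normalized instantaneous-frequency estimate
slope, intercept = np.polyfit(valid, ridge_freq, 1)
print("estimated chirp rate: %.5f (true %.5f)" % (slope, rate))
print("estimated start frequency: %.3f (true %.3f)" % (intercept, f0))
```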
Control of aperture closure during reach-to-grasp movements in Parkinson's disease
Rand, M. K.; Smiley-Oyen, A. L.; Shimansky, Y. P.; Bloedel, J. R.; Stelmach, G. E.
2007-01-01
This study examined whether the pattern of coordination between arm-reaching toward an object (hand transport) and the initiation of aperture closure for grasping is different between PD patients and healthy individuals, and whether that pattern is affected by the necessity to quickly adjust the reach-to-grasp movement in response to an unexpected shift of target location. Subjects reached for and grasped a vertical dowel, the location of which was indicated by illuminating one of the three dowels placed on a horizontal plane. In control conditions, target location was fixed during the trial. In perturbation conditions, target location was shifted instantaneously by switching the illumination to a different dowel during the reach. The hand distance from the target at which the subject initiated aperture closure (aperture closure distance) was similar for both the control and perturbation conditions within each group of subjects. However, that distance was significantly closer to the target in the PD group than in the control group. The timing of aperture closure initiation varied considerably across the trials in both groups of subjects. In contrast, aperture closure distance was relatively invariant, suggesting that aperture closure initiation was determined by spatial parameters of arm kinematics rather than temporal parameters. The linear regression analysis of aperture closure distance showed that the distance was highly predictable based on the following three parameters: the amplitude of maximum grip aperture, hand velocity, and hand acceleration. This result implies that a control law, the arguments of which include the above parameters, governs the initiation of aperture closure. Further analysis revealed that the control law was very similar between the subject groups under each condition as well as between the control and perturbation conditions for each group. Consequently, the shorter aperture closure distance observed in PD patients apparently is a result of the hypometria of their grip aperture and bradykinesia of hand transport movement, rather than a consequence of a deficit in transport-grasp coordination. It is also concluded that the perturbation of target location does not disrupt the transport-grasp coordination in either healthy individuals or PD patients. PMID:16307233
Control of aperture closure during reach-to-grasp movements in Parkinson's disease.
Rand, M K; Smiley-Oyen, A L; Shimansky, Y P; Bloedel, J R; Stelmach, G E
2006-01-01
This study examined whether the pattern of coordination between arm-reaching toward an object (hand transport) and the initiation of aperture closure for grasping is different between PD patients and healthy individuals, and whether that pattern is affected by the necessity to quickly adjust the reach-to-grasp movement in response to an unexpected shift of target location. Subjects reached for and grasped a vertical dowel, the location of which was indicated by illuminating one of the three dowels placed on a horizontal plane. In control conditions, target location was fixed during the trial. In perturbation conditions, target location was shifted instantaneously by switching the illumination to a different dowel during the reach. The hand distance from the target at which the subject initiated aperture closure (aperture closure distance) was similar for both the control and perturbation conditions within each group of subjects. However, that distance was significantly closer to the target in the PD group than in the control group. The timing of aperture closure initiation varied considerably across the trials in both groups of subjects. In contrast, aperture closure distance was relatively invariant, suggesting that aperture closure initiation was determined by spatial parameters of arm kinematics rather than temporal parameters. The linear regression analysis of aperture closure distance showed that the distance was highly predictable based on the following three parameters: the amplitude of maximum grip aperture, hand velocity, and hand acceleration. This result implies that a control law, the arguments of which include the above parameters, governs the initiation of aperture closure. Further analysis revealed that the control law was very similar between the subject groups under each condition as well as between the control and perturbation conditions for each group. Consequently, the shorter aperture closure distance observed in PD patients apparently is a result of the hypometria of their grip aperture and bradykinesia of hand transport movement, rather than a consequence of a deficit in transport-grasp coordination. It is also concluded that the perturbation of target location does not disrupt the transport-grasp coordination in either healthy individuals or PD patients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pal, Pinaki; Probst, Daniel; Pei, Yuanjiang
Fuels in the gasoline auto-ignition range (Research Octane Number (RON) > 60) have been demonstrated to be effective alternatives to diesel fuel in compression ignition engines. Such fuels allow more time for mixing with oxygen before combustion starts, owing to longer ignition delay. Moreover, by controlling fuel injection timing, it can be ensured that the in-cylinder mixture is “premixed enough” before combustion occurs to prevent soot formation while remaining “sufficiently inhomogeneous” in order to avoid excessive heat release rates. Gasoline compression ignition (GCI) has the potential to offer diesel-like efficiency at a lower cost and can be achieved with fuels such as low-octane straight run gasoline which require significantly less processing in the refinery compared to today’s fuels. To aid the design and optimization of a compression ignition (CI) combustion system using such fuels, a global sensitivity analysis (GSA) was conducted to understand the relative influence of various design parameters on efficiency, emissions and heat release rate. The design parameters included injection strategies, exhaust gas recirculation (EGR) fraction, temperature and pressure at intake valve closure and injector configuration. These were varied simultaneously to achieve various targets of ignition timing, combustion phasing, overall burn duration, emissions, fuel consumption, peak cylinder pressure and maximum pressure rise rate. The baseline case was a three-dimensional closed-cycle computational fluid dynamics (CFD) simulation with a sector mesh at medium load conditions. Eleven design parameters were considered and ranges of variation were prescribed to each of these. These input variables were perturbed in their respective ranges using the Monte Carlo (MC) method to generate a set of 256 CFD simulations and the targets were calculated from the simulation results. GSA was then applied as a screening tool to identify the input parameters having the most significant impact on each target. The results were further assessed by investigating the impact of individual parameter variations on the targets. Overall, it was demonstrated that GSA can be an effective tool in understanding parameters sensitive to a low temperature combustion concept with novel fuels.
Non-Cooperative Target Imaging and Parameter Estimation with Narrowband Radar Echoes.
Yeh, Chun-mao; Zhou, Wei; Lu, Yao-bing; Yang, Jian
2016-01-20
This study focuses on the rotating target imaging and parameter estimation with narrowband radar echoes, which is essential for radar target recognition. First, a two-dimensional (2D) imaging model with narrowband echoes is established in this paper, and two images of the target are formed on the velocity-acceleration plane at two neighboring coherent processing intervals (CPIs). Then, the rotating velocity (RV) is proposed to be estimated by utilizing the relationship between the positions of the scattering centers among two images. Finally, the target image is rescaled to the range-cross-range plane with the estimated rotational parameter. The validity of the proposed approach is confirmed using numerical simulations.
In vivo potency revisited - Keep the target in sight.
Gabrielsson, Johan; Peletier, Lambertus A; Hjorth, Stephan
2018-04-01
Potency is a central parameter in pharmacological and biochemical sciences, as well as in drug discovery and development endeavors. It is, however, typically defined only in terms of ligand-to-target binding affinity even in in vivo experimentation, in a manner analogous to in vitro studies. As in vivo potency is in fact a conglomerate of events involving ligand, target, and target-ligand complex processes, overlooking some of the fundamental differences between in vivo and in vitro may result in serious mispredictions of in vivo efficacious dose and exposure. The analysis presented in this paper compares potency measures derived from three model situations. Model A represents the closed in vitro system, defining target binding of a ligand when total target and ligand concentrations remain static and constant. Model B describes an open in vivo system with ligand input and clearance (Cl_L), added in parallel to the turnover (k_syn, k_deg) of the target. Model C further adds to the open in vivo system of Model B the elimination of the target-ligand complex (k_e(RL)) via a first-order process. We formulate the corresponding equations of the equilibrium (steady-state) relationships between target and ligand, and between complex and ligand, for each of the three model systems and graphically illustrate the resulting simulations. These equilibrium relationships demonstrate the relative impact of target and target-ligand complex turnover, and are easier to interpret than the more commonly used ligand-, target- and complex concentration-time courses. A new potency expression, labeled L_50, is then derived. L_50 is the ligand concentration at half-maximal target and complex concentrations and is an amalgamation of target turnover, target-ligand binding and complex elimination parameters estimated from concentration-time data. L_50 is then compared to the dissociation constant K_d (target-ligand binding affinity), the conventional Black & Leff potency estimate EC_50, and the derived Michaelis-Menten parameter K_m (target-ligand binding and complex removal) across a set of literature data. It is evident from a comparison between parameters derived from in vitro vs. in vivo experiments that L_50 can be either numerically greater or smaller than the K_d (or K_m) parameter, primarily depending on the ratio of k_deg to k_e(RL). Contrasting the limit values of target R and target-ligand complex RL for ligand concentrations approaching infinity demonstrates that the outcomes of the three models differ to a great extent. Based on the analysis we propose that a better understanding of in vivo pharmacological potency requires simultaneous assessment of the impact of its underlying determinants in the open system setting. We propose that L_50 will be a useful parameter guiding predictions of the effective concentration range, for translational purposes, and assessment of in vivo target occupancy/suppression by ligand, since it also encompasses target turnover - in turn also subject to influence by pathophysiology and drug treatment. Different compounds may have similar binding affinity for a target in vitro (same K_d), but vastly different potencies in vivo. L_50 points to which parameters need to be taken into account, and particularly that closed-system (in vitro) parameters should not be the first choice when ranking compounds in vivo (open system). Copyright © 2017 Elsevier Inc. All rights reserved.
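The abstract does not reproduce the model equations, but the kind of open-system binding/turnover scheme it describes can be written down schematically. In the LaTeX sketch below, the association/dissociation rate constants k_on and k_off and the ligand distribution volume V are assumed notation not given in the abstract; Model B corresponds to k_e(RL) = 0 and Model A to the closed system with fixed total ligand and target.

```latex
\begin{align*}
\frac{dL}{dt}    &= \mathrm{In}(t) - \frac{Cl_{L}}{V}\,L - k_{\mathrm{on}} L R + k_{\mathrm{off}} RL \\
\frac{dR}{dt}    &= k_{\mathrm{syn}} - k_{\mathrm{deg}} R - k_{\mathrm{on}} L R + k_{\mathrm{off}} RL \\
\frac{d(RL)}{dt} &= k_{\mathrm{on}} L R - k_{\mathrm{off}} RL - k_{e(RL)}\, RL
\end{align*}
```

Setting the time derivatives to zero gives the steady-state target-versus-ligand and complex-versus-ligand relationships the abstract refers to; in the closed case the familiar hyperbola RL = R_tot·L/(K_d + L) with K_d = k_off/k_on is recovered, while with complex elimination a Michaelis-type constant K_m = (k_off + k_e(RL))/k_on appears.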
Assay Development Process | Office of Cancer Clinical Proteomics Research
Typical steps involved in the development of a mass spectrometry-based targeted assay include: (1) selection of surrogate or signature peptides corresponding to the targeted protein or modification of interest; (2) iterative optimization of instrument and method parameters for optimal detection of the selected peptide; (3) method development for protein extraction from biological matrices such as tissue, whole cell lysates, or blood plasma/serum and proteolytic digestion of proteins (usually with trypsin); (4) evaluation of the assay in the intended biological matrix to determine if e
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, R. D.
2013-09-06
We have developed a set of modeled nuclear reaction cross sections for use in radiochemical diagnostics. Systematics for the input parameters required by the Hauser-Feshbach statistical model were developed and used to calculate neutron induced nuclear reaction cross sections for targets ranging from Terbium (Z = 65) to Rhenium (Z = 75). Of particular interest are the cross sections on Tm, Lu, and Ta including reactions on isomeric targets.
NASA Astrophysics Data System (ADS)
Barkley, Brett E.
A cooperative detection and tracking algorithm for multiple targets constrained to a road network is presented for fixed-wing Unmanned Air Vehicles (UAVs) with a finite field of view. Road networks of interest are formed into graphs with nodes that indicate the target likelihood ratio (before detection) and position probability (after detection). A Bayesian likelihood ratio tracker recursively assimilates target observations until the cumulative observations at a particular location pass a detection criterion. At this point, a target is considered detected and a position probability is generated for the target on the graph. Data association is subsequently used to route future measurements to update the likelihood ratio tracker (for undetected targets) or to update a position probability (for previously detected targets). Three strategies for motion planning of UAVs are proposed to balance searching for new targets with tracking known targets for a variety of scenarios. Performance was tested in Monte Carlo simulations for a variety of mission parameters, including tracking on road networks with varying complexity and using UAVs at various altitudes.
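The node-level detection logic (accumulate a likelihood ratio from repeated looks until it crosses a threshold) can be sketched compactly. The probabilities, threshold and measurement sequence below are assumed values for illustration, not parameters from the thesis, and the data-association and UAV-routing layers are omitted.

```python
# Minimal recursive log-likelihood-ratio update for one road-network node
# (a sketch of the detection logic described; Pd, Pfa and the detection
# threshold are assumed values, not from the thesis).
import math

P_D, P_FA = 0.8, 0.1          # sensor detection / false-alarm probabilities (assumed)
LLR_DETECT = math.log(100.0)  # declare a target when cumulative LLR passes this

def update_llr(llr, measurement_is_hit):
    """Accumulate the log-likelihood ratio of 'target present' at a node."""
    if measurement_is_hit:
        llr += math.log(P_D / P_FA)
    else:
        llr += math.log((1.0 - P_D) / (1.0 - P_FA))
    return llr

# Example: a UAV pass yields hits on 3 of 5 looks at this node.
llr = 0.0
for hit in [True, False, True, True, False]:
    llr = update_llr(llr, hit)
    if llr > LLR_DETECT:
        print("target declared at node; switch to position-probability tracking")
        break
print(f"cumulative LLR = {llr:.2f}")
```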
The bulk composition of Titan's atmosphere.
NASA Technical Reports Server (NTRS)
Trafton, L.
1972-01-01
Consideration of the physical constraints for Titan's atmosphere leads to a model which describes the bulk composition of the atmosphere in terms of observable parameters. Intermediate-resolution photometric scans of both Saturn and Titan, including scans of the Q branch of Titan's methane band, constrain these parameters in such a way that the model indicates the presence of another important atmospheric gas, namely, another bulk constituent or a significant thermal opacity. Further progress in determining the composition and state of Titan's atmosphere requires additional observations to eliminate present ambiguities. For this purpose, particular observational targets are suggested.
Drug delivery across length scales.
Delcassian, Derfogail; Patel, Asha K; Cortinas, Abel B; Langer, Robert
2018-02-20
Over the last century, there has been a dramatic change in the nature of therapeutic, biologically active molecules available to treat disease. Therapies have evolved from extracted natural products towards rationally designed biomolecules, including small molecules, engineered proteins and nucleic acids. The use of potent drugs which target specific organs, cells or biochemical pathways, necessitates new tools which can enable controlled delivery and dosing of these therapeutics to their biological targets. Here, we review the miniaturisation of drug delivery systems from the macro to nano-scale, focussing on controlled dosing and controlled targeting as two key parameters in drug delivery device design. We describe how the miniaturisation of these devices enables the move from repeated, systemic dosing, to on-demand, targeted delivery of therapeutic drugs and highlight areas of focus for the future.
The TESS Input Catalog and Selection of Targets for the TESS Transit Search
NASA Astrophysics Data System (ADS)
Pepper, Joshua; Stassun, Keivan G.; Paegert, Martin; Oelkers, Ryan; De Lee, Nathan Michael; Torres, Guillermo; TESS Target Selection Working Group
2018-01-01
The TESS mission will photometrically survey millions of the brightest stars over almost the entire sky to detect transiting exoplanets. A key step to enable that search is the creation of the TESS Input Catalog (TIC), a compiled catalog of 700 million stars and galaxies with observed and calculated parameters. From the TIC we derive the Candidate Target List (CTL) to identify target stars for the 2-minute TESS postage stamps. The CTL is designed to identify the best stars for the detection of small planets, which includes all bright cool dwarf stars in the sky. I will describe the target selection strategy, the distribution of stars in the current CTL, and how both the TIC and CTL will expand and improve going forward.
Adaptive individual-cylinder thermal state control using piston cooling for a GDCI engine
Roth, Gregory T; Husted, Harry L; Sellnau, Mark C
2015-04-07
A system for a multi-cylinder compression ignition engine includes a plurality of nozzles, at least one nozzle per cylinder, with each nozzle configured to spray oil onto the bottom side of a piston of the engine to cool that piston. Independent control of the oil spray from the nozzles is provided on a cylinder-by-cylinder basis. A combustion parameter is determined for combustion in each cylinder of the engine, and control of the oil spray onto the piston in that cylinder is based on the value of the combustion parameter for combustion in that cylinder. A method for influencing combustion in a multi-cylinder engine, including determining a combustion parameter for combustion taking place in a cylinder of the engine and controlling an oil spray targeted onto the bottom of a piston disposed in that cylinder, is also presented.
Monte Carlo simulation study of positron generation in ultra-intense laser-solid interactions
NASA Astrophysics Data System (ADS)
Yan, Yonghong; Wu, Yuchi; Zhao, Zongqing; Teng, Jian; Yu, Jinqing; Liu, Dongxiao; Dong, Kegong; Wei, Lai; Fan, Wei; Cao, Leifeng; Yao, Zeen; Gu, Yuqiu
2012-02-01
The Monte Carlo transport code Geant4 has been used to study positron production in the transport of laser-produced hot electrons in solid targets. The dependence of the positron yield on target parameters and the hot-electron temperature has been investigated in thick targets (mm-scale), where only the Bethe-Heitler process is considered. The results show that Au is the best target material, and an optimal target thickness exists for generating abundant positrons at a given hot-electron temperature. The positron angular distributions and energy spectra for different hot electron temperatures were studied without considering the sheath field on the back of the target. The effect of the target rear sheath field for positron acceleration was studied by numerical simulation while including an electrostatic field in the Monte Carlo model. It shows that the positron energy can be enhanced and quasi-monoenergetic positrons are observed owing to the effect of the sheath field.
Feng, Yan; Mitchison, Timothy J; Bender, Andreas; Young, Daniel W; Tallarico, John A
2009-07-01
Multi-parameter phenotypic profiling of small molecules provides important insights into their mechanisms of action, as well as a systems level understanding of biological pathways and their responses to small molecule treatments. It therefore deserves more attention at an early step in the drug discovery pipeline. Here, we summarize the technologies that are currently in use for phenotypic profiling--including mRNA-, protein- and imaging-based multi-parameter profiling--in the drug discovery context. We think that an earlier integration of phenotypic profiling technologies, combined with effective experimental and in silico target identification approaches, can improve success rates of lead selection and optimization in the drug discovery process.
Psychoacoustical evaluation of natural and urban sounds in soundscapes.
Yang, Ming; Kang, Jian
2013-07-01
Among various sounds in the environment, natural sounds, such as water sounds and birdsongs, have proven to be highly preferred by humans, but the reasons for these preferences have not been thoroughly researched. This paper explores differences between various natural and urban environmental sounds from the viewpoint of objective measures, especially psychoacoustical parameters. The sound samples used in this study include the recordings of single sound source categories of water, wind, birdsongs, and urban sounds including street music, mechanical sounds, and traffic noise. The samples are analyzed with a number of existing psychoacoustical parameter algorithmic models. Based on hierarchical cluster and principal components analyses of the calculated results, a series of differences has been shown among different sound types in terms of key psychoacoustical parameters. While different sound categories cannot be identified using any single acoustical and psychoacoustical parameter, identification can be made with a group of parameters, as analyzed with artificial neural networks and discriminant functions in this paper. For artificial neural networks, correlations between network predictions and targets using the average and standard deviation data of psychoacoustical parameters as inputs are above 0.95 for the three natural sound categories and above 0.90 for the urban sound category. For sound identification/classification, key parameters are fluctuation strength, loudness, and sharpness.
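As a rough illustration of the identification step, the sketch below trains a small neural network on feature vectors of psychoacoustical parameters. The feature values are synthetic stand-ins; the study used the calculated means and standard deviations of parameters such as loudness, sharpness and fluctuation strength for real recordings.

```python
# Sketch of sound-category identification from psychoacoustical features
# (synthetic stand-in data; real inputs would be per-recording means and
# standard deviations of loudness, sharpness, fluctuation strength, etc.).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
categories = ["water", "wind", "birdsong", "urban"]

# Fake feature vectors: [mean loudness, sd loudness, mean sharpness, mean fluctuation strength]
X, y = [], []
for label, centre in enumerate([[8, 1, 1.2, 0.4], [6, 2, 0.9, 0.6],
                                [5, 1, 2.0, 1.2], [12, 3, 1.5, 0.8]]):
    X.append(rng.normal(centre, 0.5, size=(50, 4)))
    y += [label] * 50
X, y = np.vstack(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 2))
```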
Interactions of galaxies outside clusters and massive groups
NASA Astrophysics Data System (ADS)
Yadav, Jaswant K.; Chen, Xuelei
2018-06-01
We investigate the dependence of physical properties of galaxies on small- and large-scale density environment. The galaxy population consists mainly of passively evolving galaxies in comparatively low-density regions of the Sloan Digital Sky Survey (SDSS). We adopt (i) the local density, ρ_{20}, derived using an adaptive smoothing kernel, (ii) the projected distance, r_p, to the nearest neighbor galaxy, and (iii) the morphology of the nearest neighbor galaxy as the environment parameters of every galaxy in our sample. In order to detect long-range interaction effects, we group galaxy interactions into four cases depending on the morphology of the target and neighbor galaxies. This study builds upon an earlier study by Park and Choi (2009) by including improved definitions of target and neighbor galaxies, thus enabling us to better understand the effect of "the nearest neighbor" interaction on the galaxy. We report that the impact of interaction on galaxy properties is detectable at least up to the pair separation corresponding to the virial radius of (the neighbor) galaxies. This turns out to be mostly between 210 and 360 h^{-1} kpc for galaxies included in our study. We report that the early-type fraction for isolated galaxies with r_p > r_{vir,nei} is almost independent of the background density and has only a very weak density dependence for close pairs. Star formation activity of a galaxy is found to be crucially dependent on neighbor galaxy morphology. We find the star formation activity parameters and structure parameters of galaxies to be independent of the large-scale background density. We also show that changing the absolute magnitude of the neighbor galaxies does not significantly affect the star formation activity of those target galaxies whose morphology and luminosities are fixed.
EDDIX--a database of ionisation double differential cross sections.
MacGibbon, J H; Emerson, S; Liamsuwan, T; Nikjoo, H
2011-02-01
The use of Monte Carlo track structure is a method of choice in biophysical modelling and calculations. To precisely model 3D and 4D tracks, the cross section for ionisation by an incoming ion, double differential in the outgoing electron energy and angle, is required. However, the double differential cross section cannot be theoretically modelled over the full range of parameters. To address this issue, a database of all available experimental data has been constructed. Currently, the database of Experimental Double Differential Ionisation Cross sections (EDDIX) contains over 1200 digitised experimentally measured datasets from the 1960s to the present, covering all available ion species (hydrogen to uranium) and all available target species. Double differential cross sections are also presented with the aid of an eight-parameter function fitted to the cross sections. The parameters include projectile species and charge, target nuclear charge and atomic mass, projectile atomic mass and energy, and electron energy and deflection angle. It is planned to freely distribute EDDIX and make it available to the radiation research community for use in the analytical and numerical modelling of track structure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mestrovic, Ante; Clark, Brenda G.; Department of Medical Physics, British Columbia Cancer Agency, Vancouver, British Columbia
2005-11-01
Purpose: To develop a method of predicting the values of dose distribution parameters of different radiosurgery techniques for treatment of arteriovenous malformation (AVM) based on internal geometric parameters. Methods and Materials: For each of 18 previously treated AVM patients, four treatment plans were created: circular collimator arcs, dynamic conformal arcs, fixed conformal fields, and intensity-modulated radiosurgery. An algorithm was developed to characterize the target and critical structure shape complexity and the position of the critical structures with respect to the target. Multiple regression was employed to establish the correlation between the internal geometric parameters and the dose distribution for different treatment techniques. The results from the model were applied to predict the dosimetric outcomes of different radiosurgery techniques and select the optimal radiosurgery technique for a number of AVM patients. Results: Several internal geometric parameters showing statistically significant correlation (p < 0.05) with the treatment planning results for each technique were identified. The target volume and the average minimum distance between the target and the critical structures were the most effective predictors for normal tissue dose distribution. The structure overlap volume with the target and the mean distance between the target and the critical structure were the most effective predictors for critical structure dose distribution. The predicted values of dose distribution parameters of different radiosurgery techniques were in close agreement with the original data. Conclusions: A statistical model has been described that successfully predicts the values of dose distribution parameters of different radiosurgery techniques and may be used to predetermine the optimal technique on a patient-to-patient basis.
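A stripped-down version of the predictive step might look like the following multiple regression from internal geometric parameters to a dose-distribution parameter. The three predictors echo those named in the abstract, but the data are synthetic and the fitted coefficients are not the study's.

```python
# Multiple-regression sketch in the spirit of the described predictive model:
# internal geometric parameters -> a dose-distribution parameter.
# Feature names and data are illustrative, not the study's patient data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 18
target_volume  = rng.uniform(1, 20, n)   # cm^3
min_distance   = rng.uniform(0, 15, n)   # mm, target to nearest critical structure
overlap_volume = rng.uniform(0, 2, n)    # cm^3, structure overlap with target

# Synthetic "critical-structure dose" that depends on geometry plus noise.
d_structure = (10 + 0.4 * target_volume - 0.5 * min_distance
               + 3.0 * overlap_volume + rng.normal(0, 0.8, n))

X = np.column_stack([target_volume, min_distance, overlap_volume])
model = LinearRegression().fit(X, d_structure)
print("coefficients:", model.coef_.round(2), "R^2:", round(model.score(X, d_structure), 2))

# Prediction for a new (hypothetical) patient geometry:
print("predicted dose:", model.predict([[8.0, 4.0, 0.5]]).round(1))
```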
Target discrimination strategies in optics detection
NASA Astrophysics Data System (ADS)
Sjöqvist, Lars; Allard, Lars; Henriksson, Markus; Jonsson, Per; Pettersson, Magnus
2013-10-01
Detection and localisation of optical assemblies used for weapon guidance or sniper rifle scopes has attracted interest for security and military applications. Typically a laser system is used to interrogate a scene of interest and the retro-reflected radiation is detected. Different system approaches for area coverage can be realised, ranging from flood illumination to step-and-stare or continuous scanning schemes. Independently of the chosen approach, target discrimination is a crucial issue, particularly when a complex scene, such as an urban environment, and autonomous operation are considered. In this work, target discrimination strategies in optics detection are discussed. Typical parameters affecting the reflected laser radiation from the target are the wavelength, polarisation properties, temporal effects and the range resolution. Knowledge about the target characteristics is important to predict the target discrimination capability. Two different systems were used to investigate polarisation properties and range resolution information from targets including, e.g., road signs, optical reflexes, rifle sights and optical references. The experimental results and their implications for target discrimination will be discussed. If autonomous operation is required, target discrimination becomes critical in order to reduce the number of false alarms.
Active colloids as mobile microelectrodes for unified label-free selective cargo transport.
Boymelgreen, Alicia M; Balli, Tov; Miloh, Touvia; Yossifon, Gilad
2018-02-22
Utilization of active colloids to transport both biological and inorganic cargo has been widely examined in the context of applications ranging from targeted drug delivery to sample analysis. In general, carriers are customized to load one specific target via a mechanism distinct from that driving the transport. Here we unify these tasks and extend loading capabilities to include on-demand selection of multiple nano/micro-sized targets without the need for pre-labelling or surface functionalization. An externally applied electric field is singularly used to drive the active cargo carrier and transform it into a mobile floating electrode that can attract (trap) or repel specific targets from its surface by dielectrophoresis, enabling dynamic control of target selection, loading and rate of transport via the electric field parameters. In the future, dynamic selectivity could be combined with directed motion to develop building blocks for bottom-up fabrication in applications such as additive manufacturing and soft robotics.
High contrast ion acceleration at intensities exceeding 10^21 W cm^−2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dollar, F.; Zulick, C.; Matsuoka, T.
2013-05-15
Ion acceleration from short pulse laser interactions at intensities of 2×10^21 W cm^−2 was studied experimentally under a wide variety of parameters, including laser contrast, incidence angle, and target thickness. Trends in maximum proton energy were observed, as well as evidence of improvement in the acceleration gradients by using dual plasma mirrors over traditional pulse cleaning techniques. Extremely high efficiency acceleration gradients were produced, accelerating both the contaminant layer and high charge state ions from the bulk of the target. Two dimensional particle-in-cell simulations enabled the study of the influence of scale length on submicron targets, where hydrodynamic expansion affects the rear surface as well as the front. Experimental evidence of larger electric fields for sharp density plasmas is observed in simulation results as well for such targets, where target ions are accelerated without the need for contaminant removal.
Testing Saliency Parameters for Automatic Target Recognition
NASA Technical Reports Server (NTRS)
Pandya, Sagar
2012-01-01
A bottom-up visual attention model (the saliency model) is tested to enhance the performance of Automated Target Recognition (ATR). JPL has developed an ATR system that identifies regions of interest (ROI) using a trained OT-MACH filter, and then classifies potential targets as true- or false-positives using machine-learning techniques. In this project, saliency is used as a pre-processing step to reduce the search space for OT-MACH filtering. Saliency parameters, such as output level and orientation weight, are tuned to detect known target features. Preliminary results are promising, and future work entails a rigorous, parameter-based search to gain maximum insight into this method.
The relative pose estimation of aircraft based on contour model
NASA Astrophysics Data System (ADS)
Fu, Tai; Sun, Xiangyi
2017-02-01
This paper proposes a relative pose estimation approach based on an object contour model. The first step is to obtain two-dimensional (2D) projections of the three-dimensional (3D) model of the target, which are divided into 40 forms by clustering and LDA analysis. The target contour is then extracted from each image and its Pseudo-Zernike Moments (PZM) are computed, so that a model library is constructed offline. Next, the projection contour in the model library that most resembles the target silhouette in the current image is found by comparing PZMs; similarity transformation parameters are then generated by applying shape context matching to the silhouette sampling locations, from which the identification parameters of the target can be derived. The identification parameters are converted to relative pose parameters, which serve as the initial values for an iterative refinement algorithm, since they lie in the neighborhood of the actual pose. Finally, Distance Image Iterative Least Squares (DI-ILS) is employed to obtain the final relative pose parameters.
Douglas, P; Tyrrel, S F; Kinnersley, R P; Whelan, M; Longhurst, P J; Walsh, K; Pollard, S J T; Drew, G H
2016-12-15
Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood, and require improved exposure classification. Dispersion modelling has great potential to improve exposure classification, but has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions. Copyright © 2016. Published by Elsevier Ltd.
Echigoya, Yusuke; Mouly, Vincent; Garcia, Luis; Yokota, Toshifumi; Duddy, William
2015-01-01
The use of antisense ‘splice-switching’ oligonucleotides to induce exon skipping represents a potential therapeutic approach to various human genetic diseases. It has achieved greatest maturity in exon skipping of the dystrophin transcript in Duchenne muscular dystrophy (DMD), for which several clinical trials are completed or ongoing, and a large body of data exists describing tested oligonucleotides and their efficacy. The rational design of an exon skipping oligonucleotide involves the choice of an antisense sequence, usually between 15 and 32 nucleotides, targeting the exon that is to be skipped. Although parameters describing the target site can be computationally estimated and several have been identified to correlate with efficacy, methods to predict efficacy are limited. Here, an in silico pre-screening approach is proposed, based on predictive statistical modelling. Previous DMD data were compiled together and, for each oligonucleotide, some 60 descriptors were considered. Statistical modelling approaches were applied to derive algorithms that predict exon skipping for a given target site. We confirmed (1) the binding energetics of the oligonucleotide to the RNA, and (2) the distance in bases of the target site from the splice acceptor site, as the two most predictive parameters, and we included these and several other parameters (while discounting many) into an in silico screening process, based on their capacity to predict high or low efficacy in either phosphorodiamidate morpholino oligomers (89% correctly predicted) and/or 2’O Methyl RNA oligonucleotides (76% correctly predicted). Predictions correlated strongly with in vitro testing for sixteen de novo PMO sequences targeting various positions on DMD exons 44 (R2 0.89) and 53 (R2 0.89), one of which represents a potential novel candidate for clinical trials. We provide these algorithms together with a computational tool that facilitates screening to predict exon skipping efficacy at each position of a target exon. PMID:25816009
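To make the pre-screening idea concrete, the sketch below fits a logistic classifier of high- versus low-efficacy target sites on the two descriptors the abstract highlights (oligonucleotide:RNA binding energetics and distance from the splice acceptor site). The data, the descriptor scales and the generating rule are invented for illustration; the published models combine many more descriptors and different statistical approaches.

```python
# Sketch of the kind of statistical pre-screen described: classify candidate
# antisense oligo target sites as high/low skipping efficacy from two
# descriptors (binding energy, distance from splice acceptor site).
# Data are synthetic; the published models use ~60 descriptors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 200
binding_energy = rng.uniform(-45, -15, n)   # kcal/mol, more negative = stronger (assumed scale)
dist_acceptor  = rng.uniform(0, 150, n)     # bases from the splice acceptor site

# Synthetic rule: strong binding near the acceptor tends to skip well.
logit = -0.15 * (binding_energy + 30.0) - 0.02 * (dist_acceptor - 50.0)
label = (1.0 / (1.0 + np.exp(-logit)) + rng.normal(0, 0.1, n) > 0.5).astype(int)

X = np.column_stack([binding_energy, dist_acceptor])
clf = LogisticRegression().fit(X, label)
print("training accuracy:", round(clf.score(X, label), 2))
print("predicted P(high efficacy) for (-40 kcal/mol, 20 b):",
      round(clf.predict_proba([[-40.0, 20.0]])[0, 1], 2))
```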
NASA Astrophysics Data System (ADS)
Dyckmanns, Malte; Vaughan, Owen
2017-06-01
We generalise the hyper-Kähler/quaternionic Kähler (HK/QK) correspondence to include para-geometries, and present a new concise proof that the target manifold of the HK/QK correspondence is quaternionic Kähler. As an application, we construct one-parameter deformations of the temporal and Euclidean supergravity c-map metrics and show that they are para-quaternionic Kähler.
Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin
2017-10-01
In word recognition semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection (DPSD). With an additional parameter, for the probability of false (lure) recollection the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.
Prompt atmospheric neutrino fluxes: perturbative QCD models and nuclear effects
Bhattacharya, Atri; Enberg, Rikard; Jeong, Yu Seon; ...
2016-11-28
We evaluate the prompt atmospheric neutrino flux at high energies using three different frameworks for calculating the heavy quark production cross section in QCD: NLO perturbative QCD, k_T factorization including low-x resummation, and the dipole model including parton saturation. We use QCD parameters, the value for the charm quark mass and the range for the factorization and renormalization scales that provide the best description of the total charm cross section measured at fixed target experiments, at RHIC and at LHC. Using these parameters we calculate differential cross sections for charm and bottom production and compare with the latest data on forward charm meson production from LHCb at 7 TeV and at 13 TeV, finding good agreement with the data. In addition, we investigate the role of nuclear shadowing by including nuclear parton distribution functions (PDF) for the target air nucleus using two different nuclear PDF schemes. Depending on the scheme used, we find the reduction of the flux due to nuclear effects varies from 10% to 50% at the highest energies. Finally, we compare our results with the IceCube limit on the prompt neutrino flux, which is already providing valuable information about some of the QCD models.
Multi-objective trajectory optimization for the space exploration vehicle
NASA Astrophysics Data System (ADS)
Qin, Xiaoli; Xiao, Zhen
2016-07-01
This research determines a temperature-constrained optimal trajectory for a space exploration vehicle by developing an optimal control formulation and solving it using a variable-order quadrature collocation method with a Non-linear Programming (NLP) solver. The vehicle is assumed to be a space reconnaissance aircraft that has specified takeoff/landing locations, specified no-fly zones, and specified targets for sensor data collection. A three-degree-of-freedom aircraft model is adapted from previous work and includes flight dynamics and thermal constraints. Vehicle control is accomplished by controlling angle of attack, roll angle, and propellant mass flow rate. This model is incorporated into an optimal control formulation that includes constraints on both the vehicle and mission parameters, such as avoidance of no-fly zones and exploration of space targets. In addition, the vehicle models include the environmental models (gravity and atmosphere). How these models are appropriately employed is key to gaining confidence in the results and conclusions of the research. Optimal trajectories are developed using several performance costs in the optimal control formulation: minimum time, minimum time with control penalties, and maximum distance. The resulting analysis demonstrates that optimal trajectories that meet specified mission parameters and constraints can be quickly determined and used for large-scale space exploration.
NASA Astrophysics Data System (ADS)
Norton, P. A., II
2015-12-01
The U. S. Geological Survey is developing a National Hydrologic Model (NHM) to support consistent hydrologic modeling across the conterminous United States (CONUS). The Precipitation-Runoff Modeling System (PRMS) simulates daily hydrologic and energy processes in watersheds, and is used for the NHM application. For PRMS each watershed is divided into hydrologic response units (HRUs); by default each HRU is assumed to have a uniform hydrologic response. The Geospatial Fabric (GF) is a database containing initial parameter values for input to PRMS and was created for the NHM. The parameter values in the GF were derived from datasets that characterize the physical features of the entire CONUS. The NHM application is composed of more than 100,000 HRUs from the GF. Selected parameter values commonly are adjusted by basin in PRMS using an automated calibration process based on calibration targets, such as streamflow. Providing each HRU with distinct values that captures variability within the CONUS may improve simulation performance of the NHM. During calibration of the NHM by HRU, selected parameter values are adjusted for PRMS based on calibration targets, such as streamflow, snow water equivalent (SWE) and actual evapotranspiration (AET). Simulated SWE, AET, and runoff were compared to value ranges derived from multiple sources (e.g. the Snow Data Assimilation System, the Moderate Resolution Imaging Spectroradiometer (i.e. MODIS) Global Evapotranspiration Project, the Simplified Surface Energy Balance model, and the Monthly Water Balance Model). This provides each HRU with a distinct set of parameter values that captures the variability within the CONUS, leading to improved model performance. We present simulation results from the NHM after preliminary calibration, including the results of basin-level calibration for the NHM using: 1) default initial GF parameter values, and 2) parameter values calibrated by HRU.
Swarming behavior of gradient-responsive Brownian particles in a porous medium.
Grančič, Peter; Štěpánek, František
2012-07-01
Active targeting by Brownian particles in a fluid-filled porous environment is investigated by computer simulation. The random motion of the particles is enhanced by diffusiophoresis with respect to concentration gradients of chemical signals released by the particles in the proximity of a target. The mathematical model, based on a combination of the Brownian dynamics method and a diffusion problem is formulated in terms of key parameters that include the particle diffusiophoretic mobility and the signaling threshold (the distance from the target at which the particles release their chemical signals). The results demonstrate that even a relatively simple chemical signaling scheme can lead to a complex collective behavior of the particles and can be a very efficient way of guiding a swarm of Brownian particles towards a target, similarly to the way colonies of living cells communicate via secondary messengers.
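A minimal two-dimensional sketch of the scheme described (Brownian motion plus a diffusiophoretic drift up the gradient of signals released inside a threshold distance from the target) is given below. The porous-medium obstacles are omitted, and the mobility, diffusion coefficient, signal decay length and threshold are assumed values, not those of the paper.

```python
# Minimal 2D sketch of gradient-responsive Brownian particles drifting toward
# a target once some of them are inside a signaling threshold (no porous-medium
# obstacles; all parameter values are assumed, not the paper's).
import numpy as np

rng = np.random.default_rng(4)
n_particles, n_steps, dt = 50, 2000, 0.01
D, mobility, threshold, lam = 0.05, 1.5, 1.0, 1.0   # diffusion, phoretic mobility, signal radius, decay length
target = np.array([0.0, 0.0])

pos = rng.uniform(-5, 5, size=(n_particles, 2))
for _ in range(n_steps):
    # Brownian kicks.
    pos += np.sqrt(2.0 * D * dt) * rng.normal(size=pos.shape)
    # Particles inside the threshold emit a signal c ~ sum_k exp(-r_k/lam);
    # every particle drifts up the gradient of that field.
    r_to_target = np.linalg.norm(pos - target, axis=1)
    emitters = pos[r_to_target < threshold]
    if len(emitters):
        for i, p in enumerate(pos):
            d = emitters - p
            r = np.linalg.norm(d, axis=1) + 1e-9
            grad = ((d / r[:, None]) * np.exp(-r / lam)[:, None]).sum(axis=0) / lam
            pos[i] += mobility * dt * grad

print("particles within the signaling radius:",
      int((np.linalg.norm(pos - target, axis=1) < threshold).sum()))
```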
Ji, Z; Jiang, Y L; Guo, F X; Peng, R; Sun, H T; Fan, J H; Wang, J J
2017-04-04
Objective: To compare the dose distributions of postoperative plans with preoperative plans for seed implantations of paravertebral/retroperitoneal tumors assisted by 3D-printed guide templates and CT guidance, to explore the dosimetric effects of the technology, and to provide data to support the optimization and standardization of seed implantation. Methods: Between December 2015 and July 2016, a total of 10 patients with paravertebral/retroperitoneal tumors (12 lesions) received 3D-printed template-assisted radioactive seed implantation in the department of radiation oncology of Peking University Third Hospital and were included in the study. The diseases included cervical cancer, kidney cancer, abdominal stromal tumor, leiomyosarcoma of the kidney, esophageal cancer and carcinoma of the ureter. The prescribed doses were 110-150 Gy. All patients received preoperative planning and individual template design and production, and the dose distribution of the postoperative plan was compared with the preoperative plan. Dose parameters included D(90), MPD, V(100), V(150), conformal index (CI) and EI of the target volume, and D(2cc) of organs at risk (spinal cord, aorta, kidney). The statistical software was SPSS 19.0 and the statistical method was the non-parametric Wilcoxon signed-rank test. Results: A total of 10 3D-printed templates were designed and produced, covering 12 treatment areas. The mean D(90) of the postoperative target area (GTV) was 131.1 Gy (range 97.8-167.4 Gy). The actual postoperative seed number increased by 3 to 12 in 5 cases (42.0%). The needles were well distributed. For the postoperative plans, the mean D(90), MPD, V(100) and V(150) were 131.1 Gy, 69.3 Gy, 90.2% and 65.2%, respectively, compared with 140.2 Gy, 65.6 Gy, 91.7% and 26.8%, respectively, in the preoperative plans. This means that the actual dose to the target volume was slightly lower than the preplanned dose and the high-dose area of the target volume was larger than the preplanned range, but there was no statistically significant difference between the two groups except for V(150) (P = 0.004). The actual dose conformity of the target volume was worse than preplanned (CI was 0.58 and 0.62, respectively) and the difference was statistically significant (P = 0.019). The actual dose outside the target volume was higher than preplanned (EI was 55% and 45.9%, respectively), and the difference was not significant. For organs at risk, the actual mean D(2cc) of the spinal cord, aorta and kidney was 24.7, 54.4 and 29.7 Gy, respectively, which was higher than preplanned (20.6, 51.6 and 28.6 Gy, respectively), and there was no significant difference between the two groups. Conclusions: Most parameters of the postoperative validation for 3D-printed template-assisted seed implantation in paravertebral/retroperitoneal tumors are close to the expectations of the preoperative plans, indicating improved accuracy of treatment.
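The paired, non-parametric comparison used for the pre- versus post-implant dose parameters can be reproduced in a few lines; the D(90) values below are placeholders, not the study's measurements.

```python
# Paired comparison of preplanned vs post-implant dose parameters with the
# Wilcoxon signed-rank test (illustrative D90 values for 12 lesions, made up).
from scipy.stats import wilcoxon

d90_pre  = [142, 138, 145, 139, 141, 136, 143, 140, 137, 144, 139, 138]  # Gy
d90_post = [130, 128, 137, 125, 135, 129, 132, 133, 126, 138, 127, 131]  # Gy

stat, p = wilcoxon(d90_pre, d90_post)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.3f}")
```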
Mechanistic and quantitative insight into cell surface targeted molecular imaging agent design.
Zhang, Liang; Bhatnagar, Sumit; Deschenes, Emily; Thurber, Greg M
2016-05-05
Molecular imaging agent design involves simultaneously optimizing multiple probe properties. While several desired characteristics are straightforward, including high affinity and low non-specific background signal, in practice there are quantitative trade-offs between these properties. These include plasma clearance, where fast clearance lowers background signal but can reduce target uptake, and binding, where high affinity compounds sometimes suffer from lower stability or increased non-specific interactions. Further complicating probe development, many of the optimal parameters vary depending on both target tissue and imaging agent properties, making empirical approaches or previous experience difficult to translate. Here, we focus on low molecular weight compounds targeting extracellular receptors, which have some of the highest contrast values for imaging agents. We use a mechanistic approach to provide a quantitative framework for weighing trade-offs between molecules. Our results show that specific target uptake is well-described by quantitative simulations for a variety of targeting agents, whereas non-specific background signal is more difficult to predict. Two in vitro experimental methods for estimating background signal in vivo are compared - non-specific cellular uptake and plasma protein binding. Together, these data provide a quantitative method to guide probe design and focus animal work for more cost-effective and time-efficient development of molecular imaging agents.
Acoustic characteristics of different target vowels during the laryngeal telescopy.
Shu, Min-Tsan; Lee, Kuo-Shen; Chang, Chin-Wen; Hsieh, Li-Chun; Yang, Cheng-Chien
2014-10-01
The aim of this study was to investigate the acoustic characteristics of target vowels phonated by persons with normal voice while performing laryngeal telescopy. The acoustic characteristics are compared to show the extent of possible differences and to speculate on their impact on phonation function. Thirty-four male subjects aged 20-39 years with normal voice were included in this study. The target vowels were /i/ and /ɛ/. Recording of voice samples was done under natural phonation and during laryngeal telescopy. The acoustic analysis included the parameters of fundamental frequency, jitter, shimmer and noise-to-harmonic ratio. The sound of the target vowel /ɛ/ was perceived as identical in more than 90% of the subjects by the examiner and speech language pathologist during the telescopy. Both /i/ and /ɛ/ sounds showed significant differences when compared with the results under natural phonation. There was no significant difference between /i/ and /ɛ/ during the telescopy. The present study showed that changing the target vowel during laryngeal telescopy makes no significant difference in the acoustic characteristics. The results may lead to the speculation that the phonation mechanism was not affected significantly by different vowels during the telescopy. This study suggests that, under the principle of comfortable phonation, introduction of either target vowel /i/ or /ɛ/ is practical. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Bayesian paradox in homeland security and homeland defense
NASA Astrophysics Data System (ADS)
Jannson, Tomasz; Forrester, Thomas; Wang, Wenjian
2011-06-01
In this paper we discuss a rather surprising result of Bayesian inference analysis: performance of a broad variety of sensors depends not only on a sensor system itself, but also on CONOPS parameters in such a way that even an excellent sensor system can perform poorly if absolute probabilities of a threat (target) are lower than a false alarm probability. This result, which we call Bayesian paradox, holds not only for binary sensors as discussed in the lead author's previous papers, but also for a more general class of multi-target sensors, discussed also in this paper. Examples include: ATR (automatic target recognition), luggage X-ray inspection for explosives, medical diagnostics, car engine diagnostics, judicial decisions, and many other issues.
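A worked example makes the paradox concrete: with a detection probability of 0.99 and a false-alarm probability of 0.01, a prior threat probability of 10^-4 still leaves the posterior probability of a threat given an alarm at roughly 1%. The numbers are illustrative, not taken from the paper.

```python
# Worked example of the Bayesian paradox: an excellent sensor still yields a
# low posterior threat probability when the prior is far below the false-alarm rate.
def posterior_threat(p_threat, p_detect, p_false_alarm):
    """P(threat | alarm) by Bayes' rule."""
    p_alarm = p_detect * p_threat + p_false_alarm * (1.0 - p_threat)
    return p_detect * p_threat / p_alarm

print(posterior_threat(p_threat=1e-4, p_detect=0.99, p_false_alarm=0.01))  # ~0.0098
```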
Modeling human target acquisition in ground-to-air weapon systems
NASA Technical Reports Server (NTRS)
Phatak, A. V.; Mohr, R. L.; Vikmanis, M.; Wei, K. C.
1982-01-01
The problems associated with formulating and validating mathematical models for describing and predicting human target acquisition response are considered. In particular, the extension of the human observer model to include the acquisition phase as well as the tracking segment is presented. Relationship of the Observer model structure to the more complex Standard Optimal Control model formulation and to the simpler Transfer Function/Noise representation is discussed. Problems pertinent to structural identifiability and the form of the parameterization are elucidated. A systematic approach toward the identification of the observer acquisition model parameters from ensemble tracking error data is presented.
Hyper- and viscoelastic modeling of needle and brain tissue interaction.
Lehocky, Craig A; Yixing Shi; Riviere, Cameron N
2014-01-01
Deep needle insertion into the brain is important for both diagnostic and therapeutic clinical interventions. We have developed an automated system for robotically steering flexible needles within the brain to improve targeting accuracy. In this work, we have developed a finite element needle-tissue interaction model that allows for the investigation of safe parameters for needle steering. The tissue model implemented contains both hyperelastic and viscoelastic properties to simulate the instantaneous and time-dependent responses of brain tissue. Several needle models were developed with varying parameters to study the effects of the parameters on tissue stress, strain and strain rate during needle insertion and rotation. The parameters varied include needle radius, bevel angle, bevel tip fillet radius, insertion speed, and rotation speed. The results will guide the design of safe needle tips and control systems for intracerebral needle steering.
Accuracy of parameter estimates for closely spaced optical targets using multiple detectors
NASA Astrophysics Data System (ADS)
Dunn, K. P.
1981-10-01
In order to obtain the cross-scan position of an optical target, more than one scanning detector is used. As expected, the cross-scan position estimation performance degrades when two nearby optical targets interfere with each other. Theoretical bounds on the two-dimensional parameter estimation performance for two closely spaced optical targets are found. Two particular classes of scanning detector arrays, namely, the crow's foot and the brickwall (or mosaic) patterns, are considered.
Shi, Chenguang; Salous, Sana; Wang, Fei; Zhou, Jianjiang
2016-12-06
This paper investigates the joint target parameter (delay and Doppler) estimation performance of linear frequency modulation (LFM)-based radar networks in a Rice fading environment. The active radar networks are composed of multiple radar transmitters and multichannel receivers placed on moving platforms. First, the log-likelihood function of the received signal for a Rician target is derived, where the received signal scattered off the target comprises a dominant scatterer (DS) component and weak isotropic scatterer (WIS) components. Then, analytically closed-form expressions of the Cramer-Rao lower bounds (CRLBs) on the Cartesian coordinates of target position and velocity are calculated, which can be adopted as a performance metric to assess the target parameter estimation accuracy of LFM-based radar network systems in a Rice fading environment. It is found that the cumulative Fisher information matrix (FIM) is a linear combination of the DS component and the WIS components, and it is also demonstrated that the joint CRLB is a function of the signal-to-noise ratio (SNR), the target's radar cross section (RCS) and the transmitted waveform parameters, as well as the relative geometry between the target and the radar network architecture. Finally, numerical results are provided to indicate that the joint target parameter estimation performance of active radar networks can be significantly improved by exploiting the DS component.
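The structure of the stated result can be summarized schematically as follows (a notational sketch, not the paper's derivation; the parameter vector shown assumes a 2D position/velocity parameterization):

```latex
\mathbf{J}(\boldsymbol{\theta}) \;=\; \mathbf{J}_{\mathrm{DS}}(\boldsymbol{\theta}) + \mathbf{J}_{\mathrm{WIS}}(\boldsymbol{\theta}),
\qquad
\mathrm{CRLB}(\theta_i) \;=\; \big[\mathbf{J}^{-1}(\boldsymbol{\theta})\big]_{ii},
\qquad
\boldsymbol{\theta} = (x,\; y,\; \dot{x},\; \dot{y})^{\mathsf{T}}
```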
Research on Radar Micro-Doppler Feature Parameter Estimation of Propeller Aircraft
NASA Astrophysics Data System (ADS)
He, Zhihua; Tao, Feixiang; Duan, Jia; Luo, Jingsheng
2018-01-01
The micro-motion modulation effect of rotating propellers on the radar echo can be a stable feature for aircraft target recognition. Thus, micro-Doppler feature parameter estimation is key to accurate target recognition. In this paper, the radar echo of rotating propellers is modelled and simulated. On this basis, the distribution characteristics of the micro-motion modulation energy in the time, frequency and time-frequency domains are analyzed. The micro-motion modulation energy produced by the scattering points of rotating propellers is accumulated using the Inverse Radon (I-Radon) transform, which can be used to estimate the micro-motion modulation parameters. Finally, the proposed parameter estimation method is shown to be effective on measured data. The micro-motion parameters of aircraft can be used as features for radar target recognition.
Sanchis, Yovana; Coscollà, Clara; Roca, Marta; Yusà, Vicent
2015-06-01
An analytical strategy including both the quantitative target analysis of 8 regulated primary aromatic amines (PAAs), as well as a comprehensive post-run target screening of 77 migrating substances, was developed for nylon utensils, using liquid chromatography-orbitrap-high resolution mass spectrometry (LC-HRMS) operating in full scan mode. The accurate mass data were acquired with a resolving power of 50,000 FWHM (scan speed, 2 Hz), and by alternating two acquisition events, ESI+ with and without fragmentation. The target method was validated after statistical optimization of the main ionization and fragmentation parameters. The quantitative method presented appropriate performance to be used in official monitoring with recoveries ranging from 78% to 112%, precision in terms of Relative Standard Deviation (RSD) was less than 15%, and the limits of quantification were between 2 and 2.5 µg kg(-1). For post-target screening, a customized theoretical database was built for food contact material migrants, including bisphenols, phthalates, and other amines. For identification purposes, accurate exact mass (<5 ppm) and some diagnostic ions including fragments were used. The strategy was applied to 10 real samples collected from different retailers in the Valencian Region (Spain) during 2014. Six out of eight target PAAs were detected in at least one sample in the target analysis. The most frequently detected compounds were 4,4'-methylenedianiline and aniline, with concentrations ranging from 2.4 to 19,715 µg kg(-1) and 2.5 to 283 µg kg(-1), respectively. Two phthalates were identified and confirmed in the post-run target screening analysis. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Qian, Kun; Zhou, Huixin; Rong, Shenghui; Wang, Bingjian; Cheng, Kuanhong
2017-05-01
Infrared small target tracking plays an important role in applications including military reconnaissance, early warning and terminal guidance. In this paper, an effective algorithm based on Singular Value Decomposition (SVD) and an improved Kernelized Correlation Filter (KCF) is presented for infrared small target tracking. First, the SVD-based step takes advantage of the global information of the image to obtain a background estimate of the infrared image. A dim target is enhanced by subtracting the continuously updated background estimate from the original image. Second, the KCF algorithm is combined with a Gaussian Curvature Filter (GCF) to eliminate the excursion (drift) problem. The GCF is adopted to preserve the edges and suppress the noise of the base sample in the KCF algorithm, helping to calculate the classifier parameters for a small target. Finally, the target position is estimated from a response map, which is obtained via the kernelized classifier. Experimental results demonstrate that the presented algorithm performs favorably in terms of efficiency and accuracy, compared with several state-of-the-art algorithms.
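The background-estimation step can be illustrated with a truncated SVD on a synthetic frame: keep the leading singular components as the background and subtract them so the dim target stands out. The image, noise level and retained rank k are assumptions for illustration, and the KCF/GCF tracking stage described in the abstract is not reproduced.

```python
# Sketch of SVD background estimation: keep the leading singular components
# as the background and subtract them to enhance a dim point target.
import numpy as np

rng = np.random.default_rng(5)
rows, cols = 128, 128
y, x = np.mgrid[0:rows, 0:cols]
background = 50 + 0.2 * x + 0.1 * y           # smooth, low-rank clutter
image = background + rng.normal(0, 1.0, (rows, cols))
image[64, 64] += 6.0                          # dim point target

U, s, Vt = np.linalg.svd(image, full_matrices=False)
k = 3                                         # background rank (assumed)
background_est = (U[:, :k] * s[:k]) @ Vt[:k]
residual = image - background_est             # target-enhanced image

peak = np.unravel_index(np.argmax(residual), residual.shape)
print("brightest residual pixel:", peak)      # expected near (64, 64)
```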
Modeling of a cyclotron target for the production of 11C with Geant4.
Chiappiniello, Andrea; Zagni, Federico; Infantino, Angelo; Vichi, Sara; Cicoria, Gianfranco; Morigi, Maria Pia; Marengo, Mario
2018-04-12
In medical cyclotron facilities, 11C is produced according to the 14N(p,α)11C reaction and widely employed in studies of prostate and brain cancers by Positron Emission Tomography. It is known from the literature [1] that the 11C-target assembly shows a reduction in efficiency over time, meaning a decrease in the activity produced at the end of bombardment. This effect might depend on aspects that are still not completely understood. Possible causes of the loss of performance of the 11C-target assembly were addressed by Monte Carlo simulations. Geant4 was used to model the 11C-target assembly of a GE PETtrace cyclotron. The physical and transport parameters to be used in the energy range of medical applications were extracted from literature data and routine 11C production runs. The Monte Carlo assessment of the 11C saturation yield was performed varying several parameters such as the proton energy and the angle of the target assembly with respect to the proton beam. The estimated 11C saturation yield is in agreement with IAEA data at the energy of interest, while it is about 35% greater than the experimental value. A more comprehensive modeling of the target system, including thermodynamic effects, is required. The energy absorbed in the inner layer of the target chamber was up to 46.5 J/mm2 under typical irradiation conditions. This study shows that Geant4 is potentially a useful tool to design and optimize targetry for PET radionuclide production. Tests to choose the Geant4 physics libraries should be performed before using this tool with different energies and materials. Copyright © Bentham Science Publishers.
VizieR Online Data Catalog: Effective collision strengths of Si VII (Sossah+, 2014)
NASA Astrophysics Data System (ADS)
Sossah, A. M.; Tayal, S. S.
2017-08-01
The purpose of the present work is to calculate more accurate data for Si VII by using highly accurate target descriptions and by including a sufficient number of target states in the close-coupling expansion. We also included fine-structure effects in the close-coupling expansions to account for relativistic effects. We used the B-spline Breit-Pauli R-matrix (BSR) codes (Zatsarinny 2006, Comput. Phys. Commun., 174, 273) in our scattering calculations. The present method utilizes term-dependent non-orthogonal orbital sets for the description of the target wave functions and scattering functions. The collisional and radiative parameters have been calculated for all forbidden and allowed transitions between the lowest 92 LSJ levels of the 2s22p4, 2s2p5, 2p6, 2s22p33s, 2s22p33p, 2s22p33d, and 2s2p43s configurations of Si VII. (3 data files).
Development of MRM-based assays for the absolute quantitation of plasma proteins.
Kuzyk, Michael A; Parker, Carol E; Domanski, Dominik; Borchers, Christoph H
2013-01-01
Multiple reaction monitoring (MRM), sometimes called selected reaction monitoring (SRM), is a directed tandem mass spectrometric technique performed on triple quadrupole mass spectrometers. MRM assays can be used to sensitively and specifically quantify proteins based on peptides that are specific to the target protein. Stable-isotope-labeled standard peptide analogues (SIS peptides) of target peptides are added to enzymatic digests of samples, and quantified along with the native peptides during MRM analysis. Monitoring of the intact peptide and a collision-induced fragment of this peptide (an ion pair) can be used to provide information on the absolute concentration of the peptide in the sample and, by inference, the concentration of the intact protein. This technique provides high specificity by selecting for biophysical parameters that are unique to the target peptides: (1) the molecular weight of the peptide, (2) the generation of a specific fragment from the peptide, and (3) the HPLC retention time during LC/MRM-MS analysis. MRM is a highly sensitive technique that has been shown to be capable of detecting attomole levels of target peptides in complex samples such as tryptic digests of human plasma. This chapter provides a detailed description of how to develop and use an MRM protein assay. It includes sections on the critical "first step" of selecting the target peptides, as well as optimization of MRM acquisition parameters for maximum sensitivity of the ion pairs that will be used in the final method, and characterization of the final MRM assay.
Polarimetric subspace target detector for SAR data based on the Huynen dihedral model
NASA Astrophysics Data System (ADS)
Larson, Victor J.; Novak, Leslie M.
1995-06-01
Two new polarimetric subspace target detectors are developed based on a dihedral signal model for bright peaks within a spatially extended target signature. The first is a coherent dihedral target detector based on the exact Huynen model for a dihedral. The second is a noncoherent dihedral target detector based on the Huynen model with an extra unknown phase term. Expressions for these polarimetric subspace target detectors are developed for both additive Gaussian clutter and more general additive spherically invariant random vector clutter including the K-distribution. For the case of Gaussian clutter with unknown clutter parameters, constant false alarm rate implementations of these polarimetric subspace target detectors are developed. The performance of these dihedral detectors is demonstrated with real millimeter-wave fully polarimetric SAR data. The coherent dihedral detector which is developed with a more accurate description of a dihedral offers no performance advantage over the noncoherent dihedral detector which is computationally more attractive. The dihedral detectors do a better job of separating a set of tactical military targets from natural clutter compared to a detector that assumes no knowledge about the polarimetric structure of the target signal.
Vasilijević, Zorana; Dimković, Nada; Lazarević, Katarina; Burmazović, Snežana; Krstić, Nebojša; Milanović, Sladjan; Zorić, Svetlana; Micić, Dragan
2013-01-01
Losartan, an angiotensin type 1 receptor blocker (ARB), exerts its main antihypertensive effect by vasodilatation of peripheral arteries. The aim of this study was to evaluate the antihypertensive effect and safety of losartan in patients with mild and moderate arterial hypertension (AH). This was an open post-marketing study with losartan as monotherapy in previously treated or untreated patients with AH. The primary efficacy parameter was the percentage of patients that achieved target blood pressure after 8-week treatment with a single daily dose of losartan of 50-100 mg. Safety parameters were assessed according to the percentage of adverse events and metabolic effects of therapy. The study included 550 patients with AH (59% female and 41% male), mean age 56.8 +/- 11.4 years, BMI = 27 +/- 4 kg/m2. Losartan was applied in 31% of untreated and 69% of previously treatment-resistant patients. After 8 weeks, target blood pressure was achieved in 67.8% (SBP) and in 81.1% (DBP) of patients, respectively. The mean decrease was 21.8% for SBP and 21.1% for DBP (p < 0.001). Out of all, 65% of patients achieved both target SBP and DBP values. Hydrochlorothiazide was added to the therapy in 11.6% of patients. There were no significant differences in drug efficacy between the entire group and subgroups of patients with diabetes mellitus and impaired renal function (p = ns). Adverse events were rare and the metabolic effect was favorable. Monotherapy with losartan in a dosage of 50-100 mg applied for 8 weeks resulted in achieving target blood pressure values in 65% of patients with mild and moderate hypertension, including patients with diabetes mellitus and impaired renal function. Losartan is a safe and metabolically neutral medication.
Kumar, Bhowmik Salil; Lee, Young-Joo; Yi, Hong Jae; Chung, Bong Chul; Jung, Byung Hwa
2010-02-19
In order to develop a safety biomarker for atorvastatin, the drug was orally administered to hyperlipidemic rats, and a metabolomic study was performed. Atorvastatin was given in doses of either 70 mg kg(-1) day(-1) or 250 mg kg(-1) day(-1) for a period of 7 days (n=4 for each group). To evaluate any abnormal effects of the drug, physiological and plasma biochemical parameters were measured and histopathological tests were carried out. Safety biomarkers were derived by comparing these parameters and using both global and targeted metabolic profiling. Global metabolic profiling was performed using liquid chromatography/time of flight/mass spectrometry (LC/TOF/MS) with multivariate data analysis. Several safety biomarker candidates that included various steroids and amino acids were discovered as a result of global metabolic profiling, and they were also confirmed by targeted metabolic profiling using gas chromatography/mass spectrometry (GC/MS) and capillary electrophoresis/mass spectrometry (CE/MS). Serum biochemical and histopathological tests were used to detect abnormal drug reactions in the liver after repeated oral administration of atorvastatin. The metabolic differences between the control and drug-treated groups were compared using PLS-DA score plots. These results were compared with the physiological and plasma biochemical parameters and the results of a histopathological test. Estrone, cortisone, proline, cystine, 3-ureidopropionic acid and histidine were proposed as potential safety biomarkers related to the liver toxicity of atorvastatin. These results indicate that the combined application of global and targeted metabolic profiling could be a useful tool for the discovery of drug safety biomarkers. Copyright 2009 Elsevier B.V. All rights reserved.
Recognizing visual focus of attention from head pose in natural meetings.
Ba, Sileye O; Odobez, Jean-Marc
2009-02-01
We address the problem of recognizing the visual focus of attention (VFOA) of meeting participants based on their head pose. To this end, the head pose observations are modeled using a Gaussian mixture model (GMM) or a hidden Markov model (HMM) whose hidden states correspond to the VFOA. The novelties of this paper are threefold. First, contrary to previous studies on the topic, in our setup, the potential VFOA of a person is not restricted to other participants only. It includes environmental targets as well (a table and a projection screen), which increases the complexity of the task, with more VFOA targets spread in the pan as well as tilt gaze space. Second, we propose a geometric model to set the GMM or HMM parameters by exploiting results from cognitive science on saccadic eye motion, which allows the prediction of the head pose given a gaze target. Third, an unsupervised parameter adaptation step not using any labeled data is proposed, which accounts for the specific gazing behavior of each participant. Using a publicly available corpus of eight meetings featuring four persons, we analyze the above methods by evaluating, through objective performance measures, the recognition of the VFOA from head pose information obtained either using a magnetic sensor device or a vision-based tracking system. The results clearly show that in such complex but realistic situations, the VFOA recognition performance is highly dependent on how well the visual targets are separated for a given meeting participant. In addition, the results show that the use of a geometric model with unsupervised adaptation achieves better results than the use of training data to set the HMM parameters.
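As a simplified, per-frame illustration of the idea (no HMM temporal smoothing, no priors, and purely invented numbers), the sketch below assigns a VFOA target by maximum Gaussian likelihood of an observed (pan, tilt) head pose, with the per-target means standing in for the geometric gaze-to-head-pose prediction:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical predicted head-pose means (pan, tilt, in degrees) for the VFOA
# targets of one participant; values and covariance are illustrative only.
vfoa_means = {
    "person_left":  np.array([-35.0,  -5.0]),
    "person_right": np.array([ 30.0,  -5.0]),
    "table":        np.array([  0.0, -30.0]),
    "screen":       np.array([ 15.0,  10.0]),
}
pose_cov = np.diag([10.0**2, 8.0**2])   # pan/tilt spread around each target

def recognize_vfoa(head_pose):
    """Assign the visual focus of attention by maximum Gaussian likelihood of
    the observed (pan, tilt) head pose; a GMM/HMM adds priors and temporal
    smoothing on top of such per-frame likelihoods."""
    scores = {t: multivariate_normal.pdf(head_pose, mean=m, cov=pose_cov)
              for t, m in vfoa_means.items()}
    return max(scores, key=scores.get)

print(recognize_vfoa(np.array([28.0, -8.0])))   # -> 'person_right'
```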
NASA Technical Reports Server (NTRS)
Gerberich, Matthew W.; Oleson, Steven R.
2013-01-01
The Collaborative Modeling for Parametric Assessment of Space Systems (COMPASS) team at Glenn Research Center has performed integrated system analysis of conceptual spacecraft mission designs since 2006 using a multidisciplinary concurrent engineering process. The set of completed designs was archived in a database to allow for the study of relationships between design parameters. Although COMPASS uses a parametric spacecraft costing model, this research investigated the possibility of using a top-down approach to rapidly estimate overall vehicle costs. This paper presents the relationships between significant design variables, including breakdowns of dry mass, wet mass, and cost. It also develops a model for a broad estimate of these parameters from basic mission characteristics, including the target location distance, the payload mass, the duration, the delta-v requirement, and the type of mission, propulsion, and electrical power. Finally, this paper examines the accuracy of this model against past COMPASS designs, with an assessment of outlying spacecraft, and compares the results to historical data of completed NASA missions.
NASA Astrophysics Data System (ADS)
TayyebTaher, M.; Esmaeilzadeh, S. Majid
2017-07-01
This article presents an application of a Model Predictive Controller (MPC) to the attitude control of a geostationary flexible satellite. A SIMO model of the geostationary satellite has been derived using the Lagrange equations, with flexibility included in the modelling equations. The state space equations are expressed in order to simplify the controller. There is no general tuning rule for finding the MPC parameters that give the desired controller behavior. A Genetic Algorithm, as an intelligent optimization method, has therefore been used to optimize the performance of the MPC controller by tuning the controller parameters to minimize the rise time, settling time, and overshoot of the target point of the flexible structure and its mode-shape amplitudes, so that large attitude maneuvers become possible. The model includes the geosynchronous orbit environment and geostationary satellite parameters. The simulation results of the flexible satellite with attitude maneuver show the efficiency of the proposed optimization method in comparison with an LQR optimal controller.
Trajectory Dispersed Vehicle Process for Space Launch System
NASA Technical Reports Server (NTRS)
Statham, Tamara; Thompson, Seth
2017-01-01
The Space Launch System (SLS) vehicle is part of NASA's deep space exploration plans, which include manned missions to Mars. Manufacturing uncertainties in design parameters are key considerations throughout SLS development, as they have significant effects on focus parameters such as lift-off thrust-to-weight, vehicle payload, maximum dynamic pressure, and compression loads. This presentation discusses how the SLS program captures these uncertainties by utilizing a 3-degree-of-freedom (DOF) process called Trajectory Dispersed (TD) analysis. This analysis biases nominal trajectories to identify extremes in the design parameters for various potential SLS configurations and missions. The process utilizes a Design of Experiments (DOE) and response surface methodologies (RSM) to statistically sample uncertainties, and develops the resulting vehicles using a Maximum Likelihood Estimate (MLE) process for targeting uncertainty biases. These vehicles represent various missions and configurations which are used as key inputs into a variety of analyses in the SLS design process, including 6 DOF dispersions, separation clearances, and engine-out failure studies.
Measuring Rates of Herbicide Metabolism in Dicot Weeds with an Excised Leaf Assay.
Ma, Rong; Skelton, Joshua J; Riechers, Dean E
2015-09-07
In order to isolate and accurately determine rates of herbicide metabolism in an obligate-outcrossing dicot weed, waterhemp (Amaranthus tuberculatus), we developed an excised leaf assay combined with a vegetative cloning strategy to normalize herbicide uptake and remove translocation as contributing factors in herbicide-resistant (R) and -sensitive (S) waterhemp populations. Biokinetic analyses of organic pesticides in plants typically include the determination of uptake, translocation (delivery to the target site), metabolic fate, and interactions with the target site. Herbicide metabolism is an important parameter to measure in herbicide-resistant weeds and herbicide-tolerant crops, and is typically accomplished with whole-plant tests using radiolabeled herbicides. However, one difficulty with interpreting biokinetic parameters derived from whole-plant methods is that translocation is often affected by rates of herbicide metabolism, since polar metabolites are usually not mobile within the plant following herbicide detoxification reactions. Advantages of the protocol described in this manuscript include reproducible, accurate, and rapid determination of herbicide degradation rates in R and S populations, a substantial decrease in the amount of radiolabeled herbicide consumed, a large reduction in radiolabeled plant materials requiring further handling and disposal, and the ability to perform radiolabeled herbicide experiments in the lab or growth chamber instead of a greenhouse. As herbicide resistance continues to develop and spread in dicot weed populations worldwide, the excised leaf assay method developed and described herein will provide an invaluable technique for investigating non-target site-based resistance due to enhanced rates of herbicide metabolism and detoxification.
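A minimal sketch of how a metabolic rate might be extracted from such an assay, assuming simple first-order loss of the radiolabelled parent herbicide; the time points and remaining fractions are invented, not data from the protocol:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical time course of the fraction of parent herbicide remaining in
# excised leaves (e.g. from HPLC or TLC analysis of leaf extracts).
t_hours  = np.array([0.0, 2.0, 4.0, 8.0, 12.0, 24.0])
fraction = np.array([1.00, 0.82, 0.66, 0.45, 0.31, 0.10])

def first_order(t, k):
    """First-order decay of the parent compound."""
    return np.exp(-k * t)

(k,), _ = curve_fit(first_order, t_hours, fraction, p0=[0.1])
half_life = np.log(2.0) / k
print(f"metabolic rate k = {k:.3f} per h, parent half-life = {half_life:.1f} h")
```

Comparing the fitted half-lives of R and S populations quantifies enhanced metabolism with translocation removed as a confounder.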
Revised Stellar Properties of Kepler Targets for the Q1-17 (DR25) Transit Detection Run
NASA Astrophysics Data System (ADS)
Mathur, Savita; Huber, Daniel; Batalha, Natalie M.; Ciardi, David R.; Bastien, Fabienne A.; Bieryla, Allyson; Buchhave, Lars A.; Cochran, William D.; Endl, Michael; Esquerdo, Gilbert A.; Furlan, Elise; Howard, Andrew; Howell, Steve B.; Isaacson, Howard; Latham, David W.; MacQueen, Phillip J.; Silva, David R.
2017-04-01
The determination of exoplanet properties and occurrence rates using Kepler data critically depends on our knowledge of the fundamental properties (such as temperature, radius, and mass) of the observed stars. We present revised stellar properties for 197,096 Kepler targets observed between Quarters 1–17 (Q1-17), which were used for the final transiting planet search run by the Kepler Mission (Data Release 25, DR25). Similar to the Q1–16 catalog by Huber et al., the classifications are based on conditioning published atmospheric parameters on a grid of Dartmouth isochrones, with significant improvements in the adopted method and over 29,000 new sources for temperatures, surface gravities, or metallicities. In addition to fundamental stellar properties, the new catalog also includes distances and extinctions, and we provide posterior samples for each stellar parameter of each star. Typical uncertainties are ∼27% in radius, ∼17% in mass, and ∼51% in density, which is somewhat smaller than previous catalogs because of the larger number of improved {log}g constraints and the inclusion of isochrone weighting when deriving stellar posterior distributions. On average, the catalog includes a significantly larger number of evolved solar-type stars, with an increase of 43.5% in the number of subgiants. We discuss the overall changes of radii and masses of Kepler targets as a function of spectral type, with a particular focus on exoplanet host stars.
Wang, Fei; Dong, Hang; Chen, Yanan; Zheng, Nanning
2016-12-09
Strong demands for accurate non-cooperative target measurement have been arising recently for the tasks of assembling and capturing. Spherical objects are one of the most common targets in these applications. However, the performance of the traditional vision-based reconstruction method was limited for practical use when handling poorly-textured targets. In this paper, we propose a novel multi-sensor fusion system for measuring and reconstructing textureless non-cooperative spherical targets. Our system consists of four simple lasers and a visual camera. This paper presents a complete framework of estimating the geometric parameters of textureless spherical targets: (1) an approach to calibrate the extrinsic parameters between a camera and simple lasers; and (2) a method to reconstruct the 3D position of the laser spots on the target surface and achieve the refined results via an optimized scheme. The experiment results show that our proposed calibration method can obtain a fine calibration result, which is comparable to the state-of-the-art LRF-based methods, and our calibrated system can estimate the geometric parameters with high accuracy in real time.
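One core step of such geometric estimation is fitting a sphere to the reconstructed 3D laser-spot positions. A minimal linear least-squares sketch is given below; the spot coordinates are hypothetical, and the paper's refinement/optimization scheme is not reproduced:

```python
import numpy as np

def fit_sphere(points):
    """Estimate sphere center and radius from N >= 4 3-D points by solving the
    linearized equation |p|^2 = 2 c.p + (r^2 - |c|^2) in the least-squares sense."""
    P = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * P, np.ones((P.shape[0], 1))])
    b = np.sum(P**2, axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, d = x[:3], x[3]
    radius = np.sqrt(d + center @ center)
    return center, radius

# Hypothetical 3-D positions (metres) of four laser spots on the target surface,
# e.g. triangulated from the calibrated camera-laser geometry.
spots = [(0.10, 0.00, 1.00), (0.00, 0.10, 1.02), (-0.10, 0.00, 1.00), (0.00, -0.08, 0.97)]
c, r = fit_sphere(spots)
print("center:", np.round(c, 3), "radius:", np.round(r, 3))
```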
NASA Astrophysics Data System (ADS)
Chakrabarti, Brato; Hanna, James
2014-11-01
Dynamical equilibria of towed cables and sedimenting filaments have been the targets of much numerical work; here, we provide analytical expressions for the configurations of a translating and axially moving string subjected to a uniform body force and local, linear, anisotropic drag forces. Generically, these configurations comprise a five-parameter family of planar shapes determined by the ratio of tangential (axial) and normal drag coefficients, the angle between the translational velocity and the body force, the relative magnitudes of translational and axial drag forces with respect to the body force, and a scaling parameter. This five-parameter family of shapes is, in fact, a degenerate six-parameter family of equilibria in which inertial forces rescale the tension in the string without affecting its shape. Each configuration is represented by a first order dynamical system for the tangential angle of the body. Limiting cases include the dynamic catenaries with or without drag, and purely sedimenting or towed strings.
Shi, Chenguang; Salous, Sana; Wang, Fei; Zhou, Jianjiang
2016-01-01
This paper investigates the joint target parameter (delay and Doppler) estimation performance of linear frequency modulation (LFM)-based radar networks in a Rice fading environment. The active radar networks are composed of multiple radar transmitters and multichannel receivers placed on moving platforms. First, the log-likelihood function of the received signal for a Rician target is derived, where the received signal scattered off the target comprises a dominant scatterer (DS) component and weak isotropic scatterers (WIS) components. Then, the analytically closed-form expressions of the Cramer-Rao lower bounds (CRLBs) on the Cartesian coordinates of target position and velocity are calculated, which can be adopted as a performance metric to assess the target parameter estimation accuracy for LFM-based radar network systems in a Rice fading environment. It is found that the cumulative Fisher information matrix (FIM) is a linear combination of both the DS component and the WIS components, and it also demonstrates that the joint CRLB is a function of the signal-to-noise ratio (SNR), the target's radar cross section (RCS) and the transmitted waveform parameters, as well as the relative geometry between the target and the radar network architectures. Finally, numerical results are provided to indicate that the joint target parameter estimation performance of active radar networks can be significantly improved with the exploitation of the DS component. PMID:27929433
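The paper's closed-form FIM expressions are not reproduced here; the sketch below only illustrates the accumulation step, i.e. summing hypothetical per-path Fisher information matrices and inverting the cumulative FIM to obtain CRLBs on the target's position and velocity:

```python
import numpy as np

def joint_crlb(per_path_fims):
    """Sum per transmit-receive-path Fisher information matrices (4x4 here,
    for target position x, y and velocity vx, vy) and return the Cramer-Rao
    lower bounds as the diagonal of the inverse cumulative FIM."""
    J = np.sum(per_path_fims, axis=0)
    return np.diag(np.linalg.inv(J))

# Hypothetical FIM contributions from two transmit-receive paths; in the paper
# these follow from the LFM waveform parameters, SNR, the Rician (DS + WIS)
# scattering statistics and the transmitter/receiver/target geometry.
rng = np.random.default_rng(1)
fims = []
for _ in range(2):
    B = rng.normal(size=(4, 4))
    fims.append(B @ B.T + 1e-2 * np.eye(4))   # symmetric positive definite stand-in
print("CRLB on (x, y, vx, vy):", np.round(joint_crlb(np.array(fims)), 4))
```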
Varying stopping and self-focusing of intense proton beams as they heat solid density matter
NASA Astrophysics Data System (ADS)
Kim, J.; McGuffey, C.; Qiao, B.; Wei, M. S.; Grabowski, P. E.; Beg, F. N.
2016-04-01
Transport of intense proton beams in solid-density matter is numerically investigated using an implicit hybrid particle-in-cell code. Both collective effects and stopping for individual beam particles are included through the electromagnetic fields solver and stopping power calculations utilizing the varying local target conditions, allowing self-consistent transport studies. Two target heating mechanisms, the beam energy deposition and Ohmic heating driven by the return current, are compared. The dependences of proton beam transport in solid targets on the beam parameters are systematically analyzed, i.e., simulations with various beam intensities, pulse durations, kinetic energies, and energy distributions are compared. The proton beam deposition profile and ultimate target temperature show strong dependence on intensity and pulse duration. A strong magnetic field is generated from a proton beam with high density and tight beam radius, resulting in focusing of the beam and localized heating of the target up to hundreds of eV.
Flash trajectory imaging of target 3D motion
NASA Astrophysics Data System (ADS)
Wang, Xinwei; Zhou, Yan; Fan, Songtao; He, Jun; Liu, Yuliang
2011-03-01
We present a flash trajectory imaging technique which can directly obtain target trajectories and realize non-contact measurement of motion parameters by range-gated imaging and time delay integration. Range-gated imaging gives the range of targets and realizes silhouette detection, which can directly extract targets from a complex background and decrease the complexity of moving-target image processing. Time delay integration increases the information in a single image frame so that the moving trajectory can be obtained directly. In this paper, we have studied the algorithm behind flash trajectory imaging and performed initial experiments which successfully obtained the trajectory of a falling badminton shuttlecock. Our research demonstrates that flash trajectory imaging is an effective approach to imaging target trajectories and can give the motion parameters of moving targets.
Characterization of 12 GnRH peptide agonists - a kinetic perspective.
Nederpelt, Indira; Georgi, Victoria; Schiele, Felix; Nowak-Reppel, Katrin; Fernández-Montalván, Amaury E; IJzerman, Adriaan P; Heitman, Laura H
2016-01-01
Drug-target residence time is an important, yet often overlooked, parameter in drug discovery. Multiple studies have proposed an increased residence time to be beneficial for improved drug efficacy and/or longer duration of action. Currently, there are many drugs on the market targeting the gonadotropin-releasing hormone (GnRH) receptor for the treatment of hormone-dependent diseases. Surprisingly, the kinetic receptor-binding parameters of these analogues have not yet been reported. Therefore, this project focused on determining the receptor-binding kinetics of 12 GnRH peptide agonists, including many marketed drugs. A novel radioligand-binding competition association assay was developed and optimized for the human GnRH receptor with the use of a radiolabelled peptide agonist, [125I]-triptorelin. In addition to radioligand-binding studies, a homogeneous time-resolved FRET Tag-lite™ method was developed as an alternative assay for the same purpose. Two novel competition association assays were successfully developed and applied to determine the kinetic receptor-binding characteristics of 12 high-affinity GnRH peptide agonists. Results obtained from both methods were highly correlated. Interestingly, the binding kinetics of the peptide agonists were more divergent than their affinities, with residence times ranging from 5.6 min (goserelin) to 125 min (deslorelin). Our research provides new insights by incorporating kinetic, next to equilibrium, binding parameters in current research and development that can potentially improve future drug discovery targeting the GnRH receptor. © 2015 The British Pharmacological Society.
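Competition association data of this kind are commonly analysed with the Motulsky-Mahan kinetics-of-competitive-binding model, from which the unlabelled agonist's dissociation rate k4, and hence its residence time 1/k4, is obtained. The sketch below evaluates that model for hypothetical rate constants and concentrations, not the paper's fitted values:

```python
import numpy as np

def motulsky_mahan(t, Bmax, k1, k2, k3, k4, L, I):
    """Specific binding of a labelled ligand (association k1, dissociation k2,
    concentration L) in the presence of an unlabelled competitor (k3, k4,
    concentration I).  Units: k1, k3 in 1/(M*min); k2, k4 in 1/min; L, I in M.
    Fitting k3 and k4 to the observed time course gives the competitor's
    residence time as 1/k4."""
    KA = k1 * L + k2
    KB = k3 * I + k4
    S = np.sqrt((KA - KB) ** 2 + 4.0 * k1 * k3 * L * I)
    KF, KS = 0.5 * (KA + KB + S), 0.5 * (KA + KB - S)
    Q = Bmax * k1 * L / (KF - KS)
    return Q * (k4 * (KF - KS) / (KF * KS)
                + (k4 - KF) / KF * np.exp(-KF * t)
                - (k4 - KS) / KS * np.exp(-KS * t))

# Hypothetical case: 0.1 nM radiolabelled agonist competing with 1 nM of an
# unlabelled agonist whose residence time is 125 min (k4 = 1/125 per min).
t = np.linspace(0.0, 240.0, 9)   # minutes
y = motulsky_mahan(t, Bmax=1.0, k1=2e8, k2=0.02, k3=1e8, k4=1.0 / 125.0,
                   L=0.1e-9, I=1.0e-9)
print(np.round(y, 3))
```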
Beta-Band Functional Connectivity Influences Audiovisual Integration in Older Age: An EEG Study
Wang, Luyao; Wang, Wenhui; Yan, Tianyi; Song, Jiayong; Yang, Weiping; Wang, Bin; Go, Ritsu; Huang, Qiang; Wu, Jinglong
2017-01-01
Audiovisual integration occurs frequently and has been shown to exhibit age-related differences via behavior experiments or time-frequency analyses. In the present study, we examined whether functional connectivity influences audiovisual integration during normal aging. Visual, auditory, and audiovisual stimuli were randomly presented peripherally; during this time, participants were asked to respond immediately to the target stimulus. Electroencephalography recordings captured visual, auditory, and audiovisual processing in 12 old (60–78 years) and 12 young (22–28 years) male adults. For non-target stimuli, we focused on the alpha (8–13 Hz), beta (13–30 Hz), and gamma (30–50 Hz) bands. We applied the Phase Lag Index to study the dynamics of functional connectivity. Then, the network topology parameters, which included the clustering coefficient, path length, small-worldness, global efficiency, local efficiency, and degree, were calculated for each condition. For the target stimulus, a race model was used to analyze the response time. Then, a Pearson correlation was used to test the relationship between each network topology parameter and response time. The results showed that old adults activated stronger connections during audiovisual processing in the beta band. The relationship between network topology parameters and the performance of audiovisual integration was detected only in old adults. Thus, we concluded that old adults who have a higher load during audiovisual integration need more cognitive resources. Furthermore, increased beta-band functional connectivity influences the performance of audiovisual integration during normal aging. PMID:28824411
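A minimal sketch of the Phase Lag Index used above, computed from the instantaneous phases of two band-limited signals; the synthetic 20 Hz "beta-band" channels are invented for illustration, and real EEG would be band-pass filtered first:

```python
import numpy as np
from scipy.signal import hilbert

def phase_lag_index(x, y):
    """Phase Lag Index: absolute mean sign of the instantaneous phase
    difference obtained from the analytic signals.  Values near 0 indicate no
    consistent phase lead/lag; values near 1 indicate a consistent lag."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.sign(np.sin(dphi))))

# Two hypothetical channels: a 20 Hz rhythm with a fixed phase lag plus noise.
fs, f = 250, 20.0
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * f * t) + 0.3 * rng.normal(size=t.size)
y = np.sin(2 * np.pi * f * t - 0.4) + 0.3 * rng.normal(size=t.size)
print(round(phase_lag_index(x, y), 2))   # a consistent lag gives a value well above 0
```

The channel-by-channel PLI matrix is then the input from which the graph metrics (clustering coefficient, path length, efficiency, degree) are computed.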
NASA Astrophysics Data System (ADS)
Schweitzer, Susanne; Nemitz, Wolfgang; Sommer, Christian; Hartmann, Paul; Fulmek, Paul; Nicolics, Johann; Pachler, Peter; Hoschopf, Hans; Schrank, Franz; Langer, Gregor; Wenzl, Franz P.
2014-09-01
For a systematic approach to improving the white light quality of phosphor converted light-emitting diodes (LEDs) for general lighting applications, it is imperative to bring the individual sources of error for color temperature reproducibility under control. In this regard, it is necessary to understand how the compositional, optical and materials properties of the color conversion element (CCE), which typically consists of phosphor particles embedded in a transparent matrix material, affect the constancy of a desired color temperature of a white LED source. In this contribution we use an LED assembly consisting of an LED die mounted on a printed circuit board (PCB) by chip-on-board technology and a CCE with a glob-top configuration as a model system and discuss the impact of potential sources of color temperature deviation among individual devices. The parameters investigated include imprecisions in the amount of material deposited, deviations from the target value for the phosphor concentration in the matrix material, deviations from the target value for the particle sizes of the phosphor material, deviations from the target values for the refractive indexes of the phosphor and matrix material, as well as deviations from the reflectivity of the substrate surface. From these studies, some general conclusions can be drawn as to which of these parameters have the largest impact on color deviation and have to be controlled most precisely in a fabrication process with regard to color temperature reproducibility among individual white LED sources.
Systematic parameter inference in stochastic mesoscopic modeling
NASA Astrophysics Data System (ADS)
Lei, Huan; Yang, Xiu; Li, Zhen; Karniadakis, George Em
2017-02-01
We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost of evaluating the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are "sparse". The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while it imposes a much weaker restriction on the number of the simulation samples, especially for systems with a high-dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desirable values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived from the microscopic level in a straightforward way.
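A much-reduced sketch of the idea for two parameters, with scikit-learn's Lasso standing in for the compressive-sensing (l1) solver and an analytic stand-in for the DPD simulations; the parameter names, sample counts, and "viscosity" response are all hypothetical:

```python
import numpy as np
from numpy.polynomial import legendre
from sklearn.linear_model import Lasso

def legendre_design(params, degree):
    """Design matrix of tensor-product Legendre polynomials (total degree
    <= degree) evaluated at parameter samples scaled to [-1, 1]."""
    cols = []
    for i in range(degree + 1):
        for j in range(degree + 1 - i):
            Pi = legendre.Legendre.basis(i)(params[:, 0])
            Pj = legendre.Legendre.basis(j)(params[:, 1])
            cols.append(Pi * Pj)
    return np.column_stack(cols)

# Hypothetical sampling points: 30 simulations at scaled force-field parameters,
# each returning one target property (here a made-up 'viscosity').
rng = np.random.default_rng(2)
samples = rng.uniform(-1.0, 1.0, size=(30, 2))
viscosity = 1.0 + 0.8 * samples[:, 0] + 0.3 * samples[:, 0] * samples[:, 1]

Phi = legendre_design(samples, degree=4)
model = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(Phi, viscosity)

def surrogate(p):
    """Cheap gPC response-surface evaluation at a new parameter point."""
    return (legendre_design(np.atleast_2d(p), 4) @ model.coef_)[0]

print(round(surrogate([0.5, -0.2]), 3))
```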
Yang, C; Paulson, E; Li, X
2012-06-01
To develop and evaluate a tool that can improve the accuracy of contour transfer between different image modalities under the challenging conditions of low image contrast and large image deformation, compared to a few commonly used methods, for radiation treatment planning. The software tool includes the following steps and functionalities: (1) accepting input of images of different modalities; (2) converting existing contours on reference images (e.g., MRI) into delineated volumes and adjusting the intensity within the volumes to match the target image (e.g., CT) intensity distribution for an enhanced similarity metric; (3) registering reference and target images using appropriate deformable registration algorithms (e.g., B-spline, demons) and generating deformed contours; (4) mapping the deformed volumes onto the target images, and calculating the mean, variance, and center of mass as the initialization parameters for the consecutive fuzzy connectedness (FC) image segmentation on the target images; (5) generating an affinity map from the FC segmentation; (6) achieving the final contours by modifying the deformed contours using the affinity map with a gradient distance weighting algorithm. The tool was tested with the CT and MR images of four pancreatic cancer patients acquired at the same respiration phase to minimize motion distortion. Dice's Coefficient was calculated against direct delineation on the target image. Contours generated by various methods, including rigid transfer, auto-segmentation, deformable-only transfer and the proposed method, were compared. Fuzzy connected image segmentation needs careful parameter initialization and user involvement. Automatic contour transfer by multi-modality deformable registration yields up to a 10% accuracy improvement over the rigid transfer. The two additional proposed steps of adjusting the intensity distribution and modifying the deformed contour with the affinity map improve the transfer accuracy by a further 14% on average. Deformable image registration aided by contrast adjustment and fuzzy connectedness segmentation improves the contour transfer accuracy between multi-modality images, particularly with large deformation and low image contrast. © 2012 American Association of Physicists in Medicine.
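A minimal sketch of the Dice's Coefficient used here to score the transferred contours against direct delineation, applied to hypothetical binary masks rather than the patient data:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice's Coefficient between two binary masks (e.g. a transferred contour
    rasterized on the target image versus a direct delineation):
    2|A n B| / (|A| + |B|), from 0 (no overlap) to 1 (identical)."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Hypothetical toy masks standing in for rasterized contours.
A = np.zeros((64, 64), bool); A[20:40, 20:40] = True
B = np.zeros((64, 64), bool); B[24:44, 22:42] = True
print(round(dice_coefficient(A, B), 3))
```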
NASA Technical Reports Server (NTRS)
Wooden, Diane H.; Lederer, Susan M.; Jehin, Emmanuel; Howell, Ellen S.; Fernandez, Yan; Harker, David E.; Ryan, Erin; Lovell, Amy; Woodward, Charles E.; Benner, Lance A.
2015-01-01
Parameters important for NEO risk assessment and mitigation include Near-Earth Object diameter and taxonomic classification, which translates to surface composition. Diameters of NEOs are derived from the thermal fluxes measured by WISE, NEOWISE, Spitzer Warm Mission and ground-based telescopes including the IRTF and UKIRT. Diameter and its coupled parameters Albedo and IR beaming parameter (a proxy for thermal inertia and/or surface roughness) are dependent upon the phase angle, which is the Sun-target-observer angle. Orbit geometries of NEOs, however, typically provide for observations at phase angles greater than 20 degrees. At higher phase angles, the observed thermal emission is sampling both the day and night sides of the NEO. We compare thermal models for NEOs that exclude (NEATM) and include (NESTM) night-side emission. We present a case study of NEO 3691 Bede, which is a higher albedo object, X (Ec) or Cgh taxonomy, to highlight the range of H magnitudes for this object (depending on the albedo and phase function slope parameter G), and to examine at different phase angles the taxonomy and thermal model fits for this NEO. Observations of 3691 Bede include our observations with IRTF+SpeX and with the 10 micrometer UKIRT+Michelle instrument, as well as WISE and Spitzer Warm mission data. By examining 3691 Bede as a case study, we highlight the interplay between the derivation of basic physical parameters and observing geometry, and we discuss the uncertainties in H magnitude, taxonomy assignment amongst the X-class (P, M, E), and diameter determinations. Systematic dependencies in the derivation of basic characterization parameters of H-magnitude, diameter, albedo and taxonomy with observing geometry are important to understand. These basic characterization parameters affect the statistical assessments of the NEO population, which in turn, affects the assignment of statistically-assessed basic parameters to discovered but yet-to-be-fully-characterized NEOs.
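For context on the thermal models mentioned above, the sketch below evaluates the NEATM subsolar temperature, from which the model's day-side temperature distribution T(theta) = T_ss cos^(1/4)(theta) and hence the thermal flux follow; NESTM additionally assigns a non-zero night-side temperature. The albedo, beaming parameter and heliocentric distance are hypothetical, not fitted values for 3691 Bede:

```python
SOLAR_CONST = 1361.0      # W m^-2 at 1 au
SIGMA_SB = 5.670374e-8    # Stefan-Boltzmann constant, W m^-2 K^-4

def neatm_subsolar_temperature(bond_albedo, eta, r_au, emissivity=0.9):
    """Subsolar temperature of the Near-Earth Asteroid Thermal Model (NEATM):
    T_ss = [(1 - A) * S / (r^2 * eta * eps * sigma)]^(1/4), with eta the IR
    beaming parameter and r the heliocentric distance in au."""
    absorbed = (1.0 - bond_albedo) * SOLAR_CONST / r_au**2
    return (absorbed / (eta * emissivity * SIGMA_SB)) ** 0.25

# Hypothetical higher-albedo NEO at 1.2 au with beaming parameter 1.0.
print(round(neatm_subsolar_temperature(bond_albedo=0.15, eta=1.0, r_au=1.2), 1), "K")
```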
NASA Astrophysics Data System (ADS)
Rahmani, Faezeh; Shahriari, Majid; Minoochehr, Abdolhamid; Nedaie, Hasan
2011-06-01
A hybrid photoneutron target including natural uranium has been studied for a 20 MeV linear electron accelerator (Linac) based Boron Neutron Capture Therapy (BNCT) facility. In this study the possibility of using uranium to increase the neutron intensity has been investigated by focusing on the time dependence behavior of the build-up and decay of the delayed gamma rays from fission fragments and activation products through photo-fission reactions in the BSA (Beam Shaping Assembly) configuration design. Delayed components of neutrons and photons were calculated. The obtained BSA parameters are in agreement with the IAEA recommendation and compared to the hybrid photoneutron target without U. The epithermal flux in the suggested design is 2.67×10⁹ n/(cm²·s·mA).
Atmospheric effects in multispectral remote sensor data
NASA Technical Reports Server (NTRS)
Turner, R. E.
1975-01-01
The problem of radiometric variations in multispectral remote sensing data which occur as a result of a change in geometric and environmental factors is studied. The case of spatially varying atmospheres is considered and the effect of atmospheric scattering is analyzed for realistic conditions. Emphasis is placed upon a simulation of LANDSAT spectral data for agricultural investigations over the United States. The effect of the target-background interaction is thoroughly analyzed in terms of various atmospheric states, geometric parameters, and target-background materials. Results clearly demonstrate that variable atmospheres can alter the classification accuracy and that the presence of various backgrounds can change the effective target radiance by a significant amount. A failure to include these effects in multispectral data analysis will result in a decrease in the classification accuracy.
NASA Astrophysics Data System (ADS)
Vogt, William C.; Jia, Congxian; Wear, Keith A.; Garra, Brian S.; Pfefer, T. Joshua
2017-03-01
As Photoacoustic Tomography (PAT) matures and undergoes clinical translation, objective performance test methods are needed to facilitate device development, regulatory clearance and clinical quality assurance. For mature medical imaging modalities such as CT, MRI, and ultrasound, tissue-mimicking phantoms are frequently incorporated into consensus standards for performance testing. A well-validated set of phantom-based test methods is needed for evaluating performance characteristics of PAT systems. To this end, we have constructed phantoms using a custom tissue-mimicking material based on PVC plastisol with tunable, biologically-relevant optical and acoustic properties. Each phantom is designed to enable quantitative assessment of one or more image quality characteristics including 3D spatial resolution, spatial measurement accuracy, ultrasound/PAT co-registration, uniformity, penetration depth, geometric distortion, sensitivity, and linearity. Phantoms contained targets including high-intensity point source targets and dye-filled tubes. This suite of phantoms was used to measure the dependence of performance of a custom PAT system (equipped with four interchangeable linear array transducers of varying design) on design parameters (e.g., center frequency, bandwidth, element geometry). Phantoms also allowed comparison of image artifacts, including surface-generated clutter and bandlimited sensing artifacts. Results showed that transducer design parameters create strong variations in performance including a trade-off between resolution and penetration depth, which could be quantified with our method. This study demonstrates the utility of phantom-based image quality testing in device performance assessment, which may guide development of consensus standards for PAT systems.
[Development of an analyzing system for soil parameters based on NIR spectroscopy].
Zheng, Li-Hua; Li, Min-Zan; Sun, Hong
2009-10-01
A rapid estimation system for soil parameters based on spectral analysis was developed by using object-oriented (OO) technology. A SOIL class was designed; an instance of the SOIL class is an object for soil samples of a particular type with specific physical properties and spectral characteristics. By extracting the effective information from the modeling spectral data of soil objects, a mapping model was established between the soil parameters and their spectral data, and the mapping model parameters can be saved in the model database. When forecasting the content of any soil parameter, the corresponding prediction model for this parameter can be selected according to the same soil type and similar soil physical properties. After the object for the target soil samples is passed into the prediction model and processed by the system, an accurate forecast of the target soil sample content can be obtained. The system includes modules such as file operations, spectra pretreatment, sample analysis, calibration and validation, and sample content forecasting. The system was designed to run independently of the measurement equipment. The parameter and spectral data files (*.xls) of known soil samples can be input into the system. Various data pretreatments can be selected according to the specific conditions, the predicted contents are displayed in the terminal, and the forecasting model can be stored in the model database. The system reads the prediction models and their parameters saved in the model database from the module interface, and then the data of the tested samples are passed into the selected model. Finally, the content of the soil parameters can be predicted by the developed system. The system was programmed with Visual C++ 6.0 and Matlab 7.0, and Access XP was used to create and manage the model database.
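The abstract does not name the regression behind the mapping model; as one common choice for NIR soil calibration, the sketch below uses partial least squares with entirely synthetic spectra and contents to illustrate the calibrate-then-predict workflow:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical calibration set: NIR spectra (rows) of soil samples with a known
# parameter content (e.g. total nitrogen).  A real workflow would first apply
# the pretreatment selected in the system (smoothing, SNV, derivatives, ...).
rng = np.random.default_rng(3)
spectra = rng.normal(size=(40, 200))                 # 40 samples x 200 wavelengths
content = 0.6 * spectra[:, 50] - 0.3 * spectra[:, 120] + rng.normal(0.0, 0.05, 40)

model = PLSRegression(n_components=5).fit(spectra, content)

# Predict the parameter content of new target samples with the stored model.
new_spectra = rng.normal(size=(3, 200))
print(model.predict(new_spectra).ravel())
```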
Stroet, Martin; Koziara, Katarzyna B; Malde, Alpeshkumar K; Mark, Alan E
2017-12-12
A general method for parametrizing atomic interaction functions is presented. The method is based on an analysis of surfaces corresponding to the difference between calculated and target data as a function of alternative combinations of parameters (parameter space mapping). The consideration of surfaces in parameter space, as opposed to local values or gradients, leads to a better understanding of the relationships between the parameters being optimized and a given set of target data. This in turn enables target data from multiple molecules to be combined in a robust manner and the optimal region of parameter space to be trivially identified. The effectiveness of the approach is illustrated by using the method to refine the chlorine 6-12 Lennard-Jones parameters against experimental solvation free enthalpies in water and hexane as well as the density and heat of vaporization of the liquid at atmospheric pressure for a set of 10 aromatic-chloro compounds simultaneously. Single-step perturbation is used to efficiently calculate solvation free enthalpies for a wide range of parameter combinations. The capacity of this approach to parametrize accurate and transferable force fields is discussed.
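A toy illustration of parameter space mapping for a single 6-12 Lennard-Jones pair, in which an error surface over (epsilon, sigma) combinations is built from weighted deviations to several target observables; the property-prediction function is a hypothetical analytic stand-in for the free-energy and liquid-property simulations:

```python
import numpy as np

# Hypothetical target data for one compound: solvation free enthalpies in water
# and hexane (kJ/mol), liquid density (kg/m^3) and heat of vaporization (kJ/mol).
targets = np.array([-3.5, -12.0, 1100.0, 41.0])
weights = 1.0 / np.array([0.5, 0.5, 10.0, 1.0]) ** 2     # per-property tolerances

def predicted_properties(eps, sigma):
    """Placeholder for the simulations (e.g. single-step perturbation and liquid
    runs) that would return the four properties for one (eps, sigma) pair."""
    return np.array([-3.0 - 2.0 * eps, -10.0 - 8.0 * eps,
                     900.0 + 60.0 / sigma, 30.0 + 40.0 * eps / sigma])

eps_grid = np.linspace(0.2, 1.2, 41)
sig_grid = np.linspace(0.30, 0.40, 41)
error = np.array([[weights @ (predicted_properties(e, s) - targets) ** 2
                   for s in sig_grid] for e in eps_grid])

i, j = np.unravel_index(np.argmin(error), error.shape)
print(f"best grid point: eps = {eps_grid[i]:.2f}, sigma = {sig_grid[j]:.3f}")
print("grid points inside the low-error region:", int((error < 2.0 * error.min()).sum()))
```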
Optimized ion acceleration using high repetition rate, variable thickness liquid crystal targets
NASA Astrophysics Data System (ADS)
Poole, Patrick; Willis, Christopher; Cochran, Ginevra; Andereck, C. David; Schumacher, Douglass
2015-11-01
Laser-based ion acceleration is a widely studied plasma physics topic for its applications to secondary radiation sources, advanced imaging, and cancer therapy. Recent work has centered on investigating new acceleration mechanisms that promise improved ion energy and spectra. While the physics of these mechanisms is not yet fully understood, they have been observed to dominate for certain ranges of target thickness, where the optimum thickness depends on laser conditions including energy, pulse width, and contrast. The study of these phenomena is uniquely facilitated by the use of variable-thickness liquid crystal films, first introduced in P. L. Poole et al., Phys. Plasmas 21, 063109 (2014). Control of the formation parameters of these freely suspended films, such as volume, temperature, and draw speed, allows on-demand thickness variability between 10 nanometers and several tens of microns, fully encompassing the currently studied thickness regimes with a single target material. The low vapor pressure of liquid crystal enables in-situ film formation and unlimited vacuum use of these targets. Details on the selection and optimization of the ion acceleration mechanism with target thickness will be presented, including recent experiments on the Scarlet laser facility and others. This work was performed with support from the DARPA PULSE program through a grant from AMRDEC and by the NNSA under contract DE-NA0001976.
Effects of clouds on the Earth radiation budget; Seasonal and inter-annual patterns
NASA Technical Reports Server (NTRS)
Dhuria, Harbans L.
1992-01-01
Seasonal and regional variations of clouds and their effects on climatological parameters were studied. The climatological parameters considered were surface temperature, solar insolation, absorbed shortwave radiation, emitted longwave radiation, and net radiation. The data set consisted of about 20 Earth radiation budget and cloud parameters for 2070 target areas covering the globe, comprising daily and monthly averages of each parameter for each target area for the period June 1979 - May 1980. Cloud forcing and the black body temperature at the top of the atmosphere were calculated. Interactions of clouds, cloud forcing, black body temperature, and the climatological parameters were investigated and analyzed.
Research on filter’s parameter selection based on PROMETHEE method
NASA Astrophysics Data System (ADS)
Zhu, Hui-min; Wang, Hang-yu; Sun, Shi-yan
2018-03-01
The selection of filter parameters for target recognition was studied in this paper. The PROMETHEE method was applied to the optimization problem of Gabor filter parameter decisions, and a correspondence model for the elemental relations between the two methods was established. Taking the identification of a military target as an example, the filter parameter decision problem was simulated and calculated with PROMETHEE. The results showed that using the PROMETHEE method for the selection of filter parameters is more systematic, since the subjective disturbance introduced by expert-judgment and empirical methods can be avoided. The method can serve as a reference for decisions on filter parameter configuration schemes.
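A minimal sketch of PROMETHEE II ranking with the "usual" preference function, applied to invented scores for candidate Gabor filter parameter sets; the criteria, weights and values are illustrative only:

```python
import numpy as np

def promethee_ii(scores, weights):
    """PROMETHEE II net outranking flows with the 'usual' preference function
    (preference 1 whenever one alternative strictly beats another on a
    criterion).  scores: alternatives x criteria, higher is better."""
    n = scores.shape[0]
    pi = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            if a != b:
                pi[a, b] = weights @ (scores[a] > scores[b]).astype(float)
    phi_plus = pi.sum(axis=1) / (n - 1)    # how strongly an alternative outranks the others
    phi_minus = pi.sum(axis=0) / (n - 1)   # how strongly it is outranked
    return phi_plus - phi_minus

# Hypothetical Gabor parameter sets scored on recognition rate, robustness to
# noise, and computational cost (converted so that higher is better).
scores = np.array([[0.91, 0.70, 0.60],
                   [0.88, 0.80, 0.75],
                   [0.93, 0.65, 0.40]])
weights = np.array([0.5, 0.3, 0.2])
net_flow = promethee_ii(scores, weights)
print("ranking (best first):", np.argsort(-net_flow))
```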
Quadrant III RFI draft report: Appendix B-I, Volume 3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1992-12-01
In order to determine the nature and extent of contamination at a RCRA site it is often necessary to investigate and characterize the chemical composition of the medium in question that represents background conditions. Background is defined as current conditions present at a site which are unaffected by past treatment, storage, or disposal of hazardous waste (OEPA, 1991). The background composition of soils at the Portsmouth Gaseous Diffusion Plant (PORTS) site was characterized for the purpose of comparing investigative soil data to a background standard for each metal on the Target Compound List/Target Analyte List and each radiological parameter of concern in this RFI. Characterization of background compositions with respect to organic parameters was not performed because the organic parameters in the TCL/TAL are not naturally occurring at the site and because the site is not located in a highly industrialized area nor downgradient from another unrelated hazardous waste site. Characterization of the background soil composition with respect to metals and radiological parameters was performed by collecting and analyzing soil boring and hand-auger samples in areas deemed unaffected by past treatment, storage, or disposal of hazardous waste. Criteria used in determining whether a soil sample location would be representative of the true background condition included: environmental history of the location, relation to Solid Waste Management Units (SWMUs), prevailing wind direction, surface runoff direction, and ground-water flow direction.
NASA Astrophysics Data System (ADS)
Gáti, Tamás; Tefner, Ildikó Katalin; Kovács, Lajos; Hodosi, Katalin; Bender, Tamás
2018-05-01
The aim of this study was to investigate the effects of balneotherapy on chronic low back pain. This is a minimized, follow-up study evaluated according to the analysis of intention to treat. The subjects included in the study were 105 patients suffering from chronic low back pain. The control group (n = 53) received the traditional musculoskeletal pain killer treatment, while the target group (n = 52) attended thermal mineral water treatment for 3 weeks for 15 occasions on top of the usual musculoskeletal pain killer treatment. The following parameters were measured before, right after, and 9 weeks after the 3-week therapy: the level of low back pain in rest and the level during activity are tested using the Visual Analog Scale (VAS); specific questionnaire on the back pain (Oswestry); and a questionnaire on quality of life (EuroQual-5D). All of the investigated parameters improved significantly (p < 0.001) in the target group by the end of the treatment compared to the base period, and this improvement was persistent during the follow-up period. There were no significant changes in the measured parameters in the control group. Based on our results, balneotherapy might have favorable impact on the clinical parameters and quality of life of patients suffering from chronic low back pain.
Chen, Yaqi; Chen, Zhui; Wang, Yi
2015-01-01
Screening and identifying active compounds from traditional Chinese medicine (TCM) and other natural products plays an important role in drug discovery. Here, we describe a magnetic beads-based multi-target affinity selection-mass spectrometry approach for screening bioactive compounds from natural products. Key steps and parameters including activation of magnetic beads, enzyme/protein immobilization, characterization of functional magnetic beads, screening and identifying active compounds from a complex mixture by LC/MS, are illustrated. The proposed approach is rapid and efficient in screening and identification of bioactive compounds from complex natural products.
Bayesian performance metrics of binary sensors in homeland security applications
NASA Astrophysics Data System (ADS)
Jannson, Tomasz P.; Forrester, Thomas C.
2008-04-01
Bayesian performance metrics, based on such parameters as prior probability, probability of detection (or accuracy), false alarm rate, and positive predictive value, characterize the performance of binary sensors; i.e., sensors that have only a binary response: true target/false target. Such binary sensors, very common in Homeland Security, produce an alarm that can be true or false. They include X-ray airport inspection, IED inspections, product quality control, cancer medical diagnosis, parts of ATR, and many others. In this paper, we analyze direct and inverse conditional probabilities in the context of Bayesian inference and binary sensors, using X-ray luggage inspection statistical results as a guideline.
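A minimal worked example of the inverse (Bayesian) probability in question: the chance that an alarm corresponds to a true target, given an assumed prior, probability of detection and false-alarm rate (the numbers are illustrative, not the paper's X-ray inspection statistics):

```python
def positive_predictive_value(prior, p_detection, false_alarm_rate):
    """Bayes' theorem for a binary sensor: probability that an alarm is a true
    target, given the prior target probability, the probability of detection
    (accuracy) and the false-alarm rate."""
    true_alarms = p_detection * prior
    false_alarms = false_alarm_rate * (1.0 - prior)
    return true_alarms / (true_alarms + false_alarms)

# Hypothetical screening scenario: rare targets (prior 0.1%), 95% detection,
# 5% false-alarm rate -- most alarms are still false; the PPV is only ~1.9%.
print(round(positive_predictive_value(0.001, 0.95, 0.05), 4))
```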
Roll tracking effects of G-vector tilt and various types of motion washout
NASA Technical Reports Server (NTRS)
Jex, H. R.; Magdaleno, R. E.; Junker, A. M.
1978-01-01
In a dogfight scenario, the task was to follow the target's roll angle while suppressing gust disturbances. All subjects adopted the same behavioral strategies in following the target while suppressing the gusts, and the MFP-fitted math model response was generally within one data symbol width. The results include the following: (1) comparisons of full roll motion (both with and without the spurious gravity tilt cue) with the static case. These motion cues help suppress disturbances with little net effect on the visual performance. Tilt cues were clearly used by the pilots but gave only small improvement in tracking errors. (2) The optimum washout (in terms of performance close to real world, similar behavioral parameters, significant motion attenuation (60 percent), and acceptable motion fidelity) was the combined attenuation and first-order washout. (3) Various trends in parameters across the motion conditions were apparent, and are discussed with respect to a comprehensive model for predicting adaptation to various roll motion cues.
Inverse modeling of geochemical and mechanical compaction in sedimentary basins
NASA Astrophysics Data System (ADS)
Colombo, Ivo; Porta, Giovanni Michele; Guadagnini, Alberto
2015-04-01
We study key phenomena driving the feedback between sediment compaction processes and fluid flow in stratified sedimentary basins formed through lithification of sand and clay sediments after deposition. The processes we consider are mechanical compaction of the host rock and geochemical compaction due to quartz cementation in sandstones. Key objectives of our study include (i) the quantification of the influence of the uncertainty of the model input parameters on the model output and (ii) the application of an inverse modeling technique to field-scale data. Proper accounting of the feedback between sediment compaction processes and fluid flow in the subsurface is key to quantifying a wide set of environmentally and industrially relevant phenomena. These include, e.g., compaction-driven brine and/or saltwater flow at deep locations and its influence on (a) tracer concentrations observed in shallow sediments, (b) build-up of fluid overpressure, (c) hydrocarbon generation and migration, (d) subsidence due to groundwater and/or hydrocarbon withdrawal, and (e) formation of ore deposits. The main processes driving the diagenesis of sediments after deposition are mechanical compaction due to overburden and precipitation/dissolution associated with reactive transport. The natural evolution of sedimentary basins is characterized by geological time scales, thus preventing direct and exhaustive measurement of the system's dynamical changes. The outputs of compaction models are plagued by uncertainty because of the incomplete knowledge of the models and parameters governing diagenesis. Development of robust methodologies for inverse modeling and parameter estimation under uncertainty is therefore crucial to the quantification of natural compaction phenomena. We employ a numerical methodology based on three building blocks: (i) space-time discretization of the compaction process; (ii) representation of target output variables through a Polynomial Chaos Expansion (PCE); and (iii) model inversion (parameter estimation) within a maximum likelihood framework. In this context, the PCE-based surrogate model enables one to (i) minimize the computational cost associated with the (forward and inverse) modeling procedures leading to uncertainty quantification and parameter estimation, and (ii) compute the full set of Sobol indices quantifying the contribution of each uncertain parameter to the variability of target state variables. Results are illustrated through the simulation of one-dimensional test cases. The analysis focuses on the calibration of model parameters through literature field cases. The quality of the parameter estimates is then analyzed as a function of the number, type and location of the data.
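The PCE-surrogate-plus-Sobol-indices idea can be sketched in a few lines. The toy forward model, the sampling scheme and the degree-2 Legendre basis below are illustrative assumptions of ours, not the authors' implementation.

```python
# Illustrative sketch only (not the authors' code): a degree-2 Legendre
# polynomial chaos surrogate for a scalar model output y(x1, x2) with x1, x2
# uniform on [-1, 1], fitted by least squares on sampled "simulation runs".
# First-order Sobol indices follow from the squared PCE coefficients.
import numpy as np
from numpy.polynomial.legendre import legval
from itertools import product

rng = np.random.default_rng(0)

def model(x1, x2):                       # placeholder forward model
    return 1.0 + 2.0 * x1 + 0.5 * x2**2 + 0.3 * x1 * x2

# Multi-indices up to total degree 2: (i, j) with i + j <= 2; (0, 0) is the mean
idx = [(i, j) for i, j in product(range(3), range(3)) if i + j <= 2]

def legendre_1d(order, x):
    c = np.zeros(order + 1); c[order] = 1.0
    return legval(x, c)

def design_matrix(x1, x2):
    return np.column_stack([legendre_1d(i, x1) * legendre_1d(j, x2) for i, j in idx])

x = rng.uniform(-1, 1, size=(200, 2))
y = model(x[:, 0], x[:, 1])
coeff, *_ = np.linalg.lstsq(design_matrix(x[:, 0], x[:, 1]), y, rcond=None)

# Variance contributions: E[P_n^2] on U(-1, 1) equals 1/(2n+1)
norms = np.array([1.0 / ((2 * i + 1) * (2 * j + 1)) for i, j in idx])
var_terms = coeff**2 * norms
total_var = var_terms[1:].sum()          # exclude the mean term (0, 0)
S1 = sum(v for (i, j), v in zip(idx, var_terms) if i > 0 and j == 0) / total_var
S2 = sum(v for (i, j), v in zip(idx, var_terms) if j > 0 and i == 0) / total_var
print(f"first-order Sobol indices: S1 = {S1:.2f}, S2 = {S2:.2f}")
```

Once such a surrogate is fitted, the inverse (maximum likelihood) step can evaluate it in place of the expensive compaction model, which is the cost saving described above.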
Drug-Target Kinetics in Drug Discovery.
Tonge, Peter J
2018-01-17
The development of therapies for the treatment of neurological cancer faces a number of major challenges including the synthesis of small molecule agents that can penetrate the blood-brain barrier (BBB). Given the likelihood that in many cases drug exposure will be lower in the CNS than in systemic circulation, it follows that strategies should be employed that can sustain target engagement at low drug concentration. Time dependent target occupancy is a function of both the drug and target concentration as well as the thermodynamic and kinetic parameters that describe the binding reaction coordinate, and sustained target occupancy can be achieved through structural modifications that increase target (re)binding and/or that decrease the rate of drug dissociation. The discovery and deployment of compounds with optimized kinetic effects requires information on the structure-kinetic relationships that modulate the kinetics of binding, and the molecular factors that control the translation of drug-target kinetics to time-dependent drug activity in the disease state. This Review first introduces the potential benefits of drug-target kinetics, such as the ability to delineate both thermodynamic and kinetic selectivity, and then describes factors, such as target vulnerability, that impact the utility of kinetic selectivity. The Review concludes with a description of a mechanistic PK/PD model that integrates drug-target kinetics into predictions of drug activity.
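To make the occupancy argument concrete, here is a hedged sketch of a one-step reversible binding model with a declining free drug concentration; the rate constants and the exponential PK profile are hypothetical choices of ours, not values from the Review.

```python
# Illustrative sketch (hypothetical parameters, not from the Review):
# time-dependent fractional target occupancy for one-step reversible binding,
#   d(occ)/dt = kon * D(t) * (1 - occ) - koff * occ,
# with an exponentially declining free drug concentration D(t). A slower koff
# (longer residence time) sustains occupancy after drug levels fall.
import numpy as np
from scipy.integrate import solve_ivp

kon = 1e5            # association rate constant, 1/(M*s)
D0, kel = 1e-7, 1e-4 # initial free drug (M) and elimination rate (1/s)

def occupancy_rhs(t, y, koff):
    D = D0 * np.exp(-kel * t)
    return kon * D * (1.0 - y[0]) - koff * y[0]

t_end = 48 * 3600.0  # 48 h
for koff, label in [(1e-2, "fast off-rate"), (1e-5, "slow off-rate")]:
    sol = solve_ivp(occupancy_rhs, (0.0, t_end), [0.0], args=(koff,),
                    t_eval=[t_end], max_step=600.0)
    print(f"{label}: occupancy at 48 h = {sol.y[0, -1]:.2f}")
```

The slow off-rate case retains substantial occupancy long after the free drug has been cleared, which is the kinetic-selectivity behaviour described in the Review.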
Drug–Target Kinetics in Drug Discovery
2017-01-01
The development of therapies for the treatment of neurological cancer faces a number of major challenges including the synthesis of small molecule agents that can penetrate the blood-brain barrier (BBB). Given the likelihood that in many cases drug exposure will be lower in the CNS than in systemic circulation, it follows that strategies should be employed that can sustain target engagement at low drug concentration. Time dependent target occupancy is a function of both the drug and target concentration as well as the thermodynamic and kinetic parameters that describe the binding reaction coordinate, and sustained target occupancy can be achieved through structural modifications that increase target (re)binding and/or that decrease the rate of drug dissociation. The discovery and deployment of compounds with optimized kinetic effects requires information on the structure–kinetic relationships that modulate the kinetics of binding, and the molecular factors that control the translation of drug–target kinetics to time-dependent drug activity in the disease state. This Review first introduces the potential benefits of drug-target kinetics, such as the ability to delineate both thermodynamic and kinetic selectivity, and then describes factors, such as target vulnerability, that impact the utility of kinetic selectivity. The Review concludes with a description of a mechanistic PK/PD model that integrates drug–target kinetics into predictions of drug activity. PMID:28640596
Initiated chemical vapor deposited nanoadhesive for bonding National Ignition Facility's targets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Tom
Currently, target fabrication scientists in the National Ignition Facility Directorate at Lawrence Livermore National Laboratory (LLNL) are studying the propagation force resulting from laser impulses impacting a target. To best study this, they would like the adhesive used to glue the target substrates to be as thin as possible. The main objective of this research project is to create adhesive glue bonds for NIF's targets that are ≤ 1 μm thick. Polyglycidylmethacrylate (PGMA) thin films were coated on various substrates using initiated chemical vapor deposition (iCVD). Film quality studies using white light interferometry reveal that the iCVD PGMA films were smooth. The coated substrates were bonded at 150 °C under vacuum with a low inflow of nitrogen. Success in bonding most of NIF's mock targets at thicknesses ≤ 1 μm indicates that our process is feasible for bonding the real targets. Key parameters required for successful bonding were concluded from the bonding results; they include an inert bonding atmosphere, sufficient contact between the PGMA films, and smooth substrates. An average bond strength of 0.60 MPa was obtained from mechanical shearing tests. The bonding failure mode of the sheared interfaces was observed to be cohesive. Future work on this project will include renewed attempts to bond silica aerogel to iCVD PGMA-coated substrates, stabilization of carbon nanotube forests with an iCVD PGMA coating, and a kinetics study of PGMA thermal crosslinking.
EUV laser produced and induced plasmas for nanolithography
NASA Astrophysics Data System (ADS)
Sizyuk, Tatyana; Hassanein, Ahmed
2017-10-01
EUV laser-produced plasma sources are being extensively studied for the development of new technology for computer chip production. Challenging tasks include optimizing EUV source efficiency, producing a powerful source within the 2% bandwidth around 13.5 nm for high volume manufacturing (HVM), and increasing the lifetime of the collecting optics. Mass-limited targets, such as small droplets, make it possible to reduce contamination of the chamber environment and damage to the mirror surface. However, reducing droplet size limits EUV power output. Our analysis showed the requirements on the target parameters and chamber conditions to achieve 500 W EUV output for HVM. The HEIGHTS package was used for simulations of laser-produced plasma evolution starting from laser interaction with a solid target and the development and expansion of the vapor/plasma plume, with accurate optical data calculation, especially in the narrow EUV region. Detailed 3D modeling of the mixed environment, including the evolution and interplay of plasma produced by lasers from the Sn target and plasma produced by in-band and out-of-band EUV radiation in the ambient gas used for collecting optics protection and cleaning, allowed predicting conditions in the entire LPP system. The effect of these conditions on EUV photon absorption and collection was analyzed. This work is supported by the National Science Foundation, PIRE project.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jaskowiak, J; Ahmad, S; Ali, I
Purpose: To investigate the correlation of displacement vector fields (DVFs) calculated by deformable image registration algorithms with motion parameters in helical, axial and cone-beam CT images with motion artifacts. Methods: A mobile thorax phantom was used, with well-known targets of different sizes made from water-equivalent material and inserted in foam to simulate lung lesions. The thorax phantom was imaged with helical, axial and cone-beam CT. The phantom was moved with a cyclic motion with different motion amplitudes and frequencies along the superior-inferior direction. Different deformable image registration algorithms, including demons, fast demons, Horn-Schunck and iterative optical flow from the DIRART software, were used to deform the CT images of the phantom with different motion patterns. The CT images of the mobile phantom were deformed to the CT images of the stationary phantom. Results: The values of the displacement vectors calculated by the deformable image registration algorithms correlated strongly with motion amplitude, where large displacement vectors were calculated for CT images with large motion amplitudes. For example, the maximal displacement vectors were nearly equal to the motion amplitudes (5 mm, 10 mm or 20 mm) at interfaces between the mobile targets and lung tissue, while the minimal displacement vectors were nearly equal to the negative of the motion amplitudes. The maximal and minimal displacement vectors matched the edges of the blurred targets along the Z-axis (motion direction), while the DVFs were small in the other directions. This indicates that the edges blurred by phantom motion were shifted largely to match the actual target edge; these shifts were nearly equal to the motion amplitude. Conclusions: The DVFs from deformable image registration algorithms correlated well with the motion amplitude of well-defined mobile targets. This can be used to extract motion parameters such as amplitude. However, as motion amplitudes increased, image artifacts increased significantly, which limited image quality and resulted in poor correlation between motion amplitude and the DVF.
Leszczuk, Mikołaj; Dudek, Łukasz; Witkowski, Marcin
The VQiPS (Video Quality in Public Safety) Working Group, supported by the U.S. Department of Homeland Security, has been developing a user guide for public safety video applications. According to VQiPS, five parameters are of particular importance in influencing the ability to achieve a recognition task. They are: usage time-frame, discrimination level, target size, lighting level, and level of motion. These parameters form what are referred to as Generalized Use Classes (GUCs). The aim of our research was to develop algorithms that would automatically assist classification of input sequences into one of the GUCs. The target size and lighting level parameters were addressed here. The experiment described reveals the experts' ambiguity and hesitation during the manual target size determination process. However, the automatic methods developed for target size classification make it possible to determine GUC parameters with 70% compliance with the end-users' opinion. Lighting levels of the entire sequence can be classified with an efficiency reaching 93%. To make the algorithms available for use, a test application has been developed. It is able to process video files and display classification results, with a user interface that is very simple and requires only minimal user interaction.
NASA Astrophysics Data System (ADS)
Oberberg, Moritz; Styrnoll, Tim; Ries, Stefan; Bienholz, Stefan; Awakowicz, Peter
2015-09-01
Reactive sputter processes are used for the deposition of hard, wear-resistant and non-corrosive ceramic layers such as aluminum oxide (Al2O3). A well-known problem is target poisoning at high reactive gas flows, which results from the reaction of the reactive gas with the metal target. Consequently, the sputter rate decreases and secondary electron emission increases. Both parameters show a non-linear hysteresis behavior as a function of the reactive gas flow, and this leads to process instabilities. This work presents a new control method for Al2O3 deposition in a multiple frequency CCP (MFCCP) based on plasma parameters. To date, process controls have used parameters such as the spectral line intensities of the sputtered metal as an indicator for the sputter rate; a coupling between plasma and substrate is not considered. The control system in this work uses a new plasma diagnostic method: the multipole resonance probe (MRP) measures plasma parameters such as electron density by analyzing a typical resonance frequency of the system response. This concept combines target processes and plasma effects and directly controls the sputter source instead of the resulting target parameters.
Measurement of Muon Neutrino Quasielastic Scattering on Carbon
NASA Astrophysics Data System (ADS)
Aguilar-Arevalo, A. A.; Bazarko, A. O.; Brice, S. J.; Brown, B. C.; Bugel, L.; Cao, J.; Coney, L.; Conrad, J. M.; Cox, D. C.; Curioni, A.; Djurcic, Z.; Finley, D. A.; Fleming, B. T.; Ford, R.; Garcia, F. G.; Garvey, G. T.; Green, C.; Green, J. A.; Hart, T. L.; Hawker, E.; Imlay, R.; Johnson, R. A.; Kasper, P.; Katori, T.; Kobilarcik, T.; Kourbanis, I.; Koutsoliotas, S.; Laird, E. M.; Link, J. M.; Liu, Y.; Liu, Y.; Louis, W. C.; Mahn, K. B. M.; Marsh, W.; Martin, P. S.; McGregor, G.; Metcalf, W.; Meyers, P. D.; Mills, F.; Mills, G. B.; Monroe, J.; Moore, C. D.; Nelson, R. H.; Nienaber, P.; Ouedraogo, S.; Patterson, R. B.; Perevalov, D.; Polly, C. C.; Prebys, E.; Raaf, J. L.; Ray, H.; Roe, B. P.; Russell, A. D.; Sandberg, V.; Schirato, R.; Schmitz, D.; Shaevitz, M. H.; Shoemaker, F. C.; Smith, D.; Sorel, M.; Spentzouris, P.; Stancu, I.; Stefanski, R. J.; Sung, M.; Tanaka, H. A.; Tayloe, R.; Tzanov, M.; van de Water, R.; Wascko, M. O.; White, D. H.; Wilking, M. J.; Yang, H. J.; Zeller, G. P.; Zimmerman, E. D.
2008-01-01
The observation of neutrino oscillations is clear evidence for physics beyond the standard model. To make precise measurements of this phenomenon, neutrino oscillation experiments, including MiniBooNE, require an accurate description of neutrino charged current quasielastic (CCQE) cross sections to predict signal samples. Using a high-statistics sample of νμ CCQE events, MiniBooNE finds that a simple Fermi gas model, with appropriate adjustments, accurately characterizes the CCQE events observed in a carbon-based detector. The extracted parameters include an effective axial mass, MAeff=1.23±0.20GeV, that describes the four-momentum dependence of the axial-vector form factor of the nucleon, and a Pauli-suppression parameter, κ=1.019±0.011. Such a modified Fermi gas model may also be used by future accelerator-based experiments measuring neutrino oscillations on nuclear targets.
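For context, the effective axial mass quoted above enters the CCQE cross section through the conventional dipole parameterization of the nucleon axial-vector form factor; the expression below is the standard textbook form in our notation, not a formula quoted from the abstract.

```latex
F_A(Q^2) \;=\; \frac{g_A}{\left(1 + Q^2/M_A^2\right)^{2}},
\qquad F_A(0) = g_A \approx -1.27,
\qquad M_A \rightarrow M_A^{\mathrm{eff}} = 1.23 \pm 0.20~\mathrm{GeV}.
```

Raising the effective axial mass slows the fall-off of the form factor with four-momentum transfer, which is how the modified Fermi gas model accommodates the observed Q-squared dependence.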
Coupled parametric design of flow control and duct shape
NASA Technical Reports Server (NTRS)
Florea, Razvan (Inventor); Bertuccioli, Luca (Inventor)
2009-01-01
A method for designing gas turbine engine components using a coupled parametric analysis of part geometry and flow control is disclosed. Included are the steps of parametrically defining the geometry of the duct wall shape, parametrically defining one or more flow control actuators in the duct wall, measuring a plurality of performance parameters or metrics (e.g., flow characteristics) of the duct and comparing the results of the measurement with desired or target parameters, and selecting the optimal duct geometry and flow control for at least a portion of the duct, the selection process including evaluating the plurality of performance metrics in a pareto analysis. The use of this method in the design of inter-turbine transition ducts, serpentine ducts, inlets, diffusers, and similar components provides a design which reduces pressure losses and flow profile distortions.
Validating models of target acquisition performance in the dismounted soldier context
NASA Astrophysics Data System (ADS)
Glaholt, Mackenzie G.; Wong, Rachel K.; Hollands, Justin G.
2018-04-01
The problem of predicting real-world operator performance with digital imaging devices is of great interest within the military and commercial domains. There are several approaches to this problem, including: field trials with imaging devices, laboratory experiments using imagery captured from these devices, and models that predict human performance based on imaging device parameters. The modeling approach is desirable, as both field trials and laboratory experiments are costly and time-consuming. However, the data from these experiments is required for model validation. Here we considered this problem in the context of dismounted soldiering, for which detection and identification of human targets are essential tasks. Human performance data were obtained for two-alternative detection and identification decisions in a laboratory experiment in which photographs of human targets were presented on a computer monitor and the images were digitally magnified to simulate range-to-target. We then compared the predictions of different performance models within the NV-IPM software package: Targeting Task Performance (TTP) metric model and the Johnson model. We also introduced a modification to the TTP metric computation that incorporates an additional correction for target angular size. We examined model predictions using NV-IPM default values for a critical model constant, V50, and we also considered predictions when this value was optimized to fit the behavioral data. When using default values, certain model versions produced a reasonably close fit to the human performance data in the detection task, while for the identification task all models substantially overestimated performance. When using fitted V50 values the models produced improved predictions, though the slopes of the performance functions were still shallow compared to the behavioral data. These findings are discussed in relation to the models' designs and parameters, and the characteristics of the behavioral paradigm.
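For reference, both model families map resolvable detail on target to a task probability through a logistic-like target transfer probability function; the sketch below uses the commonly published form of that function (a textbook form, not NV-IPM source code), with V50 playing the role of the fitted constant mentioned above.

```python
# Hedged sketch of the commonly published target transfer probability function
# used with the TTP metric (textbook form, not NV-IPM source code):
#   P(V) = (V / V50)^E / (1 + (V / V50)^E),  E = 1.51 + 0.24 * (V / V50),
# where V is the TTP value (or resolvable cycles) on target and V50 is the
# value giving 50% task performance -- the constant fitted in the study above.
def task_probability(V, V50):
    E = 1.51 + 0.24 * (V / V50)
    return (V / V50) ** E / (1.0 + (V / V50) ** E)

for v_over_v50 in (0.5, 1.0, 2.0):
    print(v_over_v50, round(task_probability(v_over_v50, 1.0), 2))
# 0.5 -> ~0.24, 1.0 -> 0.50, 2.0 -> ~0.80
```

Because V50 sets the midpoint of this curve, fitting it to behavioral data (as in the study) shifts the whole predicted performance function without changing its shape much, consistent with the shallow-slope mismatch reported above.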
Using task dynamics to quantify the affordances of throwing for long distance and accuracy.
Wilson, Andrew D; Weightman, Andrew; Bingham, Geoffrey P; Zhu, Qin
2016-07-01
In 2 experiments, the current study explored how affordances structure throwing for long distance and accuracy. In Experiment 1, 10 expert throwers (from baseball, softball, and cricket) threw regulation tennis balls to hit a vertically oriented 4 ft × 4 ft target placed at each of 9 locations (3 distances × 3 heights). We measured their release parameters (angle, speed, and height) and showed that they scaled their throws in response to changes in the target's location. We then simulated the projectile motion of the ball and identified a continuous subspace of release parameters that produce hits to each target location. Each subspace describes the affordance of our target to be hit by a tennis ball moving in a projectile motion to the relevant location. The simulated affordance spaces showed how the release parameter combinations required for hits changed with changes in the target location. The experts tracked these changes in their performance and were successful in hitting the targets. We next tested unusual (horizontal) targets that generated correspondingly different affordance subspaces to determine whether the experts would track the affordance to generate successful hits. Do the experts perceive the affordance? They do. In Experiment 2, 5 cricketers threw to hit either vertically or horizontally oriented targets and successfully hit both, exhibiting release parameters located within the requisite affordance subspaces. We advocate a task dynamical approach to the study of affordances as properties of objects and events in the context of tasks as the future of research in this area. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
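A minimal sketch of the simulation idea, under simplifying assumptions of our own (drag-free flight, an assumed release height and target placement), is given below; it scans release angle and speed and flags the combinations whose trajectory passes through a vertical 4 ft x 4 ft target.

```python
# Minimal sketch (not the authors' simulation): scan release angle and speed
# for a fixed release height and flag combinations whose drag-free projectile
# path passes through a vertical 4 ft x 4 ft target at a given distance and
# centre height. Geometry, tolerances and the no-drag assumption are ours.
import numpy as np

g = 9.81
release_h = 1.8                       # assumed release height (m)
target_x, target_yc = 9.0, 1.5        # assumed target distance and centre height (m)
half = 0.61                           # half of 4 ft (~1.22 m), in metres

def height_at_target(angle_deg, speed):
    th = np.radians(angle_deg)
    t = target_x / (speed * np.cos(th))         # time to reach the target plane
    return release_h + speed * np.sin(th) * t - 0.5 * g * t**2

angles = np.arange(-10.0, 46.0, 1.0)
speeds = np.arange(8.0, 30.0, 0.5)
hits = [(a, v) for a in angles for v in speeds
        if abs(height_at_target(a, v) - target_yc) <= half]
print(f"{len(hits)} (angle, speed) combinations hit the target")
```

The set of flagged (angle, speed) pairs is a crude analogue of the continuous "affordance subspace" of release parameters described above; adding drag, release height as a third parameter, and a horizontal target orientation would reshape that subspace in the way the experiments probe.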
Büttner, Kathrin; Krieter, Joachim
2018-08-01
The analysis of trade networks as well as the spread of diseases within these systems focuses mainly on pure animal movements between farms. However, additional data included as edge weights can complement the informational content of the network analysis. At the same time, the inclusion of edge weights can also alter the outcome of the network analysis. Thus, the aim of the study was to compare unweighted and weighted network analyses of a pork supply chain in Northern Germany and to evaluate the impact on the centrality parameters. Five different weighted network versions were constructed by adding the following edge weights: number of trade contacts, number of delivered livestock, average number of delivered livestock per trade contact, geographical distance and reciprocal geographical distance. Additionally, two different edge weight standardizations were used. The network observed from 2013 to 2014 contained 678 farms, which were connected by 1,018 edges. General network characteristics including shortest path structure (e.g. identical shortest paths, shortest path lengths) as well as centrality parameters for each network version were calculated. Furthermore, the targeted and the random removal of farms were performed in order to evaluate the structural changes in the networks. All network versions and edge weight standardizations revealed the same number of shortest paths (1,935). Between 94.4 and 98.9% of the shortest paths were identical between the unweighted network and the weighted network versions. Furthermore, depending on the calculated centrality parameters and the edge weight standardization used, it could be shown that the weighted network versions differed from the unweighted network (e.g. for the centrality parameters based on ingoing trade contacts) or did not differ (e.g. for the centrality parameters based on the outgoing trade contacts) with regard to the Spearman Rank Correlation and the targeted removal of farms. The choice of standardization method as well as the inclusion or exclusion of specific farm types (e.g. abattoirs) can alter the results significantly. These facts have to be considered when centrality parameters are to be used for the implementation of prevention and control strategies in the case of an epidemic. Copyright © 2018 Elsevier B.V. All rights reserved.
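The effect of adding edge weights can be illustrated with a toy example; the network, the weights and the inverse-volume transformation below are illustrative choices of ours (note that NetworkX treats betweenness weights as distances, so trade volumes typically need to be inverted first), not the study's data or code.

```python
# Toy sketch (invented network, not the study data) contrasting unweighted and
# weighted centrality. NetworkX interprets the betweenness 'weight' attribute
# as a distance/cost, so a trade volume must typically be inverted before use.
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([          # (farm_from, farm_to, delivered animals)
    ("A", "B", 500), ("B", "C", 20), ("A", "D", 50),
    ("D", "C", 400), ("C", "E", 300), ("B", "E", 10),
])
# Inverse volume as a distance: heavily used contacts become "short" edges.
for u, v, d in G.edges(data=True):
    d["dist"] = 1.0 / d["weight"]

unweighted = nx.betweenness_centrality(G)
weighted = nx.betweenness_centrality(G, weight="dist")
for node in G:
    print(node, round(unweighted[node], 2), round(weighted[node], 2))
```

Even in this tiny graph the node ranking can change once weights are respected, which is the kind of discrepancy the study quantifies with Spearman rank correlations and targeted removal.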
Systematic parameter inference in stochastic mesoscopic modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lei, Huan; Yang, Xiu; Li, Zhen
2017-02-01
We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost of evaluating the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are "sparse". The proposed method shows accuracy comparable with the standard probabilistic collocation method (PCM) while imposing a much weaker restriction on the number of simulation samples, especially for systems with a high-dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desirable values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulations cannot be derived from the microscopic level in a straightforward way.
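A hedged sketch of the sparse-recovery step: fit polynomial-surrogate coefficients with an L1 (Lasso) penalty from fewer simulation samples than basis terms. The synthetic response, the basis choice and the use of scikit-learn are our assumptions, not the authors' gPC/compressive-sensing code.

```python
# Illustrative sketch (not the authors' code): recover a sparse set of
# polynomial-surrogate coefficients from fewer samples than basis terms using
# an L1 (Lasso) penalty, the compressive-sensing idea described above.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
n_params, n_samples = 6, 20                    # 20 samples < 28 degree-2 terms

X = rng.uniform(-1, 1, size=(n_samples, n_params))
basis = PolynomialFeatures(degree=2, include_bias=True)
Phi = basis.fit_transform(X)                   # 28 monomial basis terms

# Synthetic "target property" depending sparsely on the parameters
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 3] ** 2 + 0.05 * rng.normal(size=n_samples)

fit = Lasso(alpha=0.01, fit_intercept=False, max_iter=50000).fit(Phi, y)
active = np.flatnonzero(np.abs(fit.coef_) > 0.1)
print("dominant basis terms:", [basis.get_feature_names_out()[i] for i in active])
```

The recovered dominant terms play the same role as the dominant gPC coefficients in the paper: they identify which parameters actually drive the target property with far fewer simulation runs than a full tensor sampling would need.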
NASA Technical Reports Server (NTRS)
Houston, W. R.; Stephenson, D. G.; Measures, R. M.
1975-01-01
A laboratory investigation has been conducted to evaluate the detection and identification capabilities of laser induced fluorescence as a remote sensing technique for the marine environment. The relative merits of fluorescence parameters including emission and excitation profiles, intensity and lifetime measurements are discussed in relation to the identification of specific targets of the marine environment including crude oils, refined petroleum products, fish oils and algae. Temporal profiles displaying the variation of lifetime with emission wavelength have proven to add a new dimension of specificity and simplicity to the technique.
Multifunctional biocompatible coatings on magnetic nanoparticles
NASA Astrophysics Data System (ADS)
Bychkova, A. V.; Sorokina, O. N.; Rosenfeld, M. A.; Kovarski, A. L.
2012-11-01
Methods for coating formation on magnetic nanoparticles used in biology and medicine are considered. Key requirements to the coatings are formulated, namely, biocompatibility, stability, the possibility of attachment of pharmaceutical agents, and the absence of toxicity. The behaviour of nanoparticle/coating nanosystems in the body including penetration through cellular membranes and the excretion rates and routes is analyzed. Parameters characterizing the magnetic properties of these systems and their magnetic controllability are described. Factors limiting the applications of magnetically controlled nanosystems for targeted drug delivery are discussed. The bibliography includes 405 references.
Doble, Brett; Tan, Marcus; Harris, Anthony; Lorgelly, Paula
2015-02-01
The successful use of a targeted therapy is intrinsically linked to the ability of a companion diagnostic to correctly identify patients most likely to benefit from treatment. The aim of this study was to review the characteristics of companion diagnostics that are of importance for inclusion in an economic evaluation. Approaches for including these characteristics in model-based economic evaluations are compared with the intent to describe best practice methods. Five databases and government agency websites were searched to identify model-based economic evaluations comparing a companion diagnostic and subsequent treatment strategy to another alternative treatment strategy with model parameters for the sensitivity and specificity of the companion diagnostic (primary synthesis). Economic evaluations that limited model parameters for the companion diagnostic to only its cost were also identified (secondary synthesis). Quality was assessed using the Quality of Health Economic Studies instrument. 30 studies were included in the review (primary synthesis n = 12; secondary synthesis n = 18). Incremental cost-effectiveness ratios may be lower when the only parameter for the companion diagnostic included in a model is the cost of testing. Incorporating the test's accuracy in addition to its cost may be a more appropriate methodological approach. Altering the prevalence of the genetic biomarker, specific population tested, type of test, test accuracy and timing/sequence of multiple tests can all impact overall model results. The impact of altering a test's threshold for positivity is unknown as it was not addressed in any of the included studies. Additional quality criteria as outlined in our methodological checklist should be considered due to the shortcomings of standard quality assessment tools in differentiating studies that incorporate important test-related characteristics and those that do not. There is a need to refine methods for incorporating the characteristics of companion diagnostics into model-based economic evaluations to ensure consistent and transparent reimbursement decisions are made.
Hetzel, Juergen; Boeckeler, Michael; Horger, Marius; Ehab, Ahmed; Kloth, Christopher; Wagner, Robert; Freitag, Lutz; Slebos, Dirk-Jan; Lewis, Richard Alexander; Haentschel, Maik
2017-01-01
Lung volume reduction (LVR) improves breathing mechanics by reducing hyperinflation. Lobar selection usually focuses on choosing the most destroyed emphysematous lobes as seen on an inspiratory CT scan. However, it has never been shown to what extent these densitometric CT parameters predict the least deflation of an individual lobe during expiration. The addition of expiratory CT analysis allows measurement of the extent of lobar air trapping and could therefore provide additional functional information for the choice of potential treatment targets. To determine lobar vital capacity/lobar total capacity (LVC/LTC) as a functional parameter for lobar air trapping using an inspiratory and expiratory CT scan. To compare lobar selection by LVC/LTC with the established morphological CT density parameters. 36 patients referred for endoscopic LVR were studied. LVC/LTC, defined as delta volume over maximum volume of a lobe, was calculated using inspiratory and expiratory CT scans. The CT morphological parameters of mean lung density (MLD), low attenuation volume (LAV), and 15th percentile of Hounsfield units (15%P) were determined on an inspiratory CT scan for each lobe. We compared and correlated LVC/LTC with MLD, LAV, and 15%P. There was a weak correlation between the functional parameter LVC/LTC and all inspiratory densitometric parameters. Target lobe selection using lowest lobar deflation (lowest LVC/LTC) correlated with target lobe selection based on the lowest MLD in 18 patients (50.0%), with the highest LAV in 13 patients (36.1%), and with the lowest 15%P in 12 patients (33.3%). CT-based measurement of deflation (LVC/LTC) as a functional parameter correlates weakly with all densitometric CT parameters on a lobar level. Therefore, morphological criteria based on inspiratory CT densitometry partially reflect the deflation of particular lung lobes, and may be of limited value as a sole predictor for target lobe selection in LVR.
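In our notation (inferred from the definition "delta volume over maximum volume of a lobe"), the functional parameter can be written as

```latex
\mathrm{LVC/LTC} \;=\; \frac{V_{\mathrm{insp}} - V_{\mathrm{exp}}}{V_{\mathrm{insp}}},
```

where V_insp and V_exp are the segmented lobar volumes on the inspiratory and expiratory CT scans. Values near zero indicate a lobe that barely deflates, i.e. pronounced air trapping, which is why the lobe with the lowest LVC/LTC is the functional candidate for treatment.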
NASA Astrophysics Data System (ADS)
Khudik, Vladimir; Yi, S. Austin; Shvets, Gennady
2012-10-01
Acceleration of ions in a two-species composite target irradiated by a circularly polarized laser pulse is studied analytically and via particle-in-cell (PIC) simulations. A self-consistent analytical model of the composite target is developed. In this model, target parameters are stationary in the center of mass of the system: heavy and light ions are completely separated from each other and form two layers, while electrons are bouncing in the potential well formed by the laser ponderomotive and electrostatic potentials. They are distributed in the direction of acceleration by the Boltzmann law and over velocities by the Maxwell-Juttner law. The laser pulse interacts directly only with electrons in a thin sheath layer, and these electrons transfer the laser pressure to the target ions. In the fluid approximation, it is shown that the composite target is still susceptible to the Rayleigh-Taylor instability [1]. Using PIC simulations, we found the growth rate of initially seeded perturbations as a function of their wavenumber for different composite target parameters and compared it with analytical results. Useful scaling laws between this rate, the laser pulse pressure and the target parameters are discussed. [1] T.P. Yu, A. Pukhov, G. Shvets, M. Chen, T. H. Ratliff, S. A. Yi, and V. Khudik, Phys. Plasmas, 18, 043110 (2011).
Tanguay, J; Hou, X; Buckley, K; Schaffer, P; Bénard, F; Ruth, T J; Celler, A
2015-05-21
Cyclotron production of (99m)Tc through the (100)Mo(p,2n) (99m)Tc reaction channel is actively being investigated as an alternative to reactor-based (99)Mo generation by nuclear fission of (235)U. An exciting aspect of this approach is that it can be implemented using currently-existing cyclotron infrastructure to supplement, or potentially replace, conventional (99m)Tc production methods that are based on aging and increasingly unreliable nuclear reactors. Successful implementation will require consistent production of large quantities of high-radionuclidic-purity (99m)Tc. However, variations in proton beam currents and the thickness and isotopic composition of enriched (100)Mo targets, in addition to other irradiation parameters, may degrade reproducibility of both radionuclidic purity and absolute (99m)Tc yields. The purpose of this article is to present a method for quantifying relationships between random variations in production parameters, including (100)Mo target thicknesses and proton beam currents, and reproducibility of absolute (99m)Tc yields (defined as the end of bombardment (EOB) (99m)Tc activity). Using the concepts of linear error propagation and the theory of stochastic point processes, we derive a mathematical expression that quantifies the influence of variations in various irradiation parameters on yield reproducibility, quantified in terms of the coefficient of variation of the EOB (99m)Tc activity. The utility of the developed formalism is demonstrated with an example. We show that achieving less than 20% variability in (99m)Tc yields will require highly-reproducible target thicknesses and proton currents. These results are related to the service rate which is defined as the percentage of (99m)Tc production runs that meet the minimum daily requirement of one (or many) nuclear medicine departments. For example, we show that achieving service rates of 84.0%, 97.5% and 99.9% with 20% variations in target thicknesses requires producing on average 1.2, 1.5 and 1.9 times the minimum daily activity requirement. The irradiation parameters that would be required to achieve these service rates are described. We believe the developed formalism will aid in the development of quality-control criteria required to ensure consistent supply of large quantities of high-radionuclidic-purity cyclotron-produced (99m)Tc.
NASA Astrophysics Data System (ADS)
Tanguay, J.; Hou, X.; Buckley, K.; Schaffer, P.; Bénard, F.; Ruth, T. J.; Celler, A.
2015-05-01
Cyclotron production of 99mTc through the 100Mo(p,2n) 99mTc reaction channel is actively being investigated as an alternative to reactor-based 99Mo generation by nuclear fission of 235U. An exciting aspect of this approach is that it can be implemented using currently-existing cyclotron infrastructure to supplement, or potentially replace, conventional 99mTc production methods that are based on aging and increasingly unreliable nuclear reactors. Successful implementation will require consistent production of large quantities of high-radionuclidic-purity 99mTc. However, variations in proton beam currents and the thickness and isotopic composition of enriched 100Mo targets, in addition to other irradiation parameters, may degrade reproducibility of both radionuclidic purity and absolute 99mTc yields. The purpose of this article is to present a method for quantifying relationships between random variations in production parameters, including 100Mo target thicknesses and proton beam currents, and reproducibility of absolute 99mTc yields (defined as the end of bombardment (EOB) 99mTc activity). Using the concepts of linear error propagation and the theory of stochastic point processes, we derive a mathematical expression that quantifies the influence of variations in various irradiation parameters on yield reproducibility, quantified in terms of the coefficient of variation of the EOB 99mTc activity. The utility of the developed formalism is demonstrated with an example. We show that achieving less than 20% variability in 99mTc yields will require highly-reproducible target thicknesses and proton currents. These results are related to the service rate which is defined as the percentage of 99mTc production runs that meet the minimum daily requirement of one (or many) nuclear medicine departments. For example, we show that achieving service rates of 84.0%, 97.5% and 99.9% with 20% variations in target thicknesses requires producing on average 1.2, 1.5 and 1.9 times the minimum daily activity requirement. The irradiation parameters that would be required to achieve these service rates are described. We believe the developed formalism will aid in the development of quality-control criteria required to ensure consistent supply of large quantities of high-radionuclidic-purity cyclotron-produced 99mTc.
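The reproducibility metric described in the two records above follows from first-order (linear) error propagation; written here in generic notation of ours (not the paper's), with independent irradiation parameters p_i (target thickness, proton current, irradiation time, ...) having standard deviations sigma_{p_i}:

```latex
\mathrm{CV}^2\!\left(A_{\mathrm{EOB}}\right) \;\approx\;
\sum_i \left( \frac{\partial A_{\mathrm{EOB}}}{\partial p_i}\,
\frac{\sigma_{p_i}}{A_{\mathrm{EOB}}} \right)^{2}.
```

The sensitivity factors translate the quoted 20% parameter variations into a spread of end-of-bombardment activities, and the service rate is then the fraction of that distribution lying above the minimum daily requirement.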
Demonstration of UXO-PenDepth for the Estimation of Projectile Penetration Depth
2010-08-01
Effects (JTCG/ME) in August 2001. The accreditation process included verification and validation (V&V) by a subject matter expert (SME) other than... Within UXO-PenDepth, there are three sets of input parameters that are required: impact conditions (Fig. 1a), penetrator properties, and target... properties. The impact conditions that need to be defined are projectile orientation and impact velocity. The algorithm has been evaluated against
Stochastic Game Approach to Guidance Design
1988-09-09
maneuverable aircraft, which can also employ electronic countermeasures, is formulated as an imperfect information zero-sum pursuit-evasion game played... on Differential Game Applications [36]. However, some other examples, which included "electronic jinking", indicated that in an ECM environment a mixed... radius, b) missile/target maneuver ratio, c) nonlinear maneuver similarity parameter (aE T2max), d) normalized end-game duration, e) initial end-game
High-energy laser weapons: technology overview
NASA Astrophysics Data System (ADS)
Perram, Glen P.; Marciniak, Michael A.; Goda, Matthew
2004-09-01
High energy laser (HEL) weapons are ready for some of today's most challenging military applications. For example, the Airborne Laser (ABL) program is designed to defend against Theater Ballistic Missiles in a tactical war scenario. Similarly, the Tactical High Energy Laser (THEL) program is currently testing a laser to defend against rockets and other tactical weapons. The Space Based Laser (SBL), Advanced Tactical Laser (ATL) and Large Aircraft Infrared Countermeasures (LAIRCM) programs promise even greater applications for laser weapons. This technology overview addresses both strategic and tactical roles for HEL weapons on the modern battlefield and examines the current, technology-limited performance of weapon system components, including various laser device types, beam control systems, atmospheric propagation, and target lethality issues. The characteristics, history, basic hardware, and fundamental performance of chemical lasers, solid state lasers and free electron lasers are summarized and compared. The elements of beam control, including the primary aperture, fast steering mirror, deformable mirrors, wavefront sensors, beacons and illuminators, will be discussed with an emphasis on typical and required performance parameters. The effects of diffraction, atmospheric absorption, scattering, turbulence and thermal blooming phenomena on irradiance at the target are described. Finally, lethality criteria and measures of weapon effectiveness are addressed. The primary purpose of the presentation is to define terminology, establish key performance parameters, and summarize technology capabilities.
Zhang, Xinyu; Zhao, Liang; Wang, Yexin; Xu, Yunping; Zhou, Liping
2013-07-01
Preparative capillary GC (PCGC) is a powerful tool for the separation and purification of compounds from any complex matrix, which can be used for compound-specific radiocarbon analysis. However, the effect of PCGC parameters on the trapping efficiency is not well understood. Here, we present a comprehensive study on the optimization of parameters based on 11 reference compounds with different physicochemical properties. Under the optimum conditions, the trapping efficiencies of these 11 compounds (including high-boiling-point n-hentriacontane and methyl lignocerate) are about 80% (60-89%). The isolation of target compounds from standard solutions, plant and soil samples demonstrates that our optimized method is applicable for different classes of compounds including n-alkanes, fatty acid esters, long-chain fatty alcohol esters, polycyclic aromatic hydrocarbons (PAHs) and steranes. By injecting 25 μL in large volume injection mode, over 100 μg of high-purity (>90%) target compounds are harvested within 24 h. The recovery ranges of two real samples are about 70% (59.9-83.8%) and about 83% (77.2-88.5%), respectively. Compared to previous studies, our study makes a significant improvement in the recovery of PCGC, which is important for its wide application in biogeochemistry, environmental sciences, and archaeology. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Polarization differences in airborne ground penetrating radar performance for landmine detection
NASA Astrophysics Data System (ADS)
Dogaru, Traian; Le, Calvin
2016-05-01
The U.S. Army Research Laboratory (ARL) has investigated the ultra-wideband (UWB) radar technology for detection of landmines, improvised explosive devices and unexploded ordnance, for over two decades. This paper presents a phenomenological study of the radar signature of buried landmines in realistic environments and the performance of airborne synthetic aperture radar (SAR) in detecting these targets as a function of multiple parameters: polarization, depression angle, soil type and burial depth. The investigation is based on advanced computer models developed at ARL. The analysis includes both the signature of the targets of interest and the clutter produced by rough surface ground. Based on our numerical simulations, we conclude that low depression angles and H-H polarization offer the highest target-to-clutter ratio in the SAR images and therefore the best radar performance of all the scenarios investigated.
New designs of LMJ targets for early ignition experiments
NASA Astrophysics Data System (ADS)
C-Clérouin, C.; Bonnefille, M.; Dattolo, E.; Fremerye, P.; Galmiche, D.; Gauthier, P.; Giorla, J.; Laffite, S.; Liberatore, S.; Loiseau, P.; Malinie, G.; Masse, L.; Poggi, F.; Seytor, P.
2008-05-01
The LMJ experimental plans include an attempt at ignition and burn of an ICF capsule with 40 laser quads delivering up to 1.4 MJ and 380 TW. New targets requiring reduced laser energy, with only a small decrease in robustness, are therefore being designed for this purpose. A first strategy is to use scaled-down cylindrical hohlraums and capsules, taking advantage of our better understanding of the problem, based on theoretical modelling, simulations and experiments. Another strategy is to work specifically on the coupling-efficiency parameter, i.e. the ratio of the energy absorbed by the capsule to the laser energy, which, along with parametric instabilities, is a crucial drawback of indirect drive. An alternative design is proposed, made up of the nominal 60-quad capsule, named A1040, in a rugby-shaped hohlraum. Robustness evaluations of these different targets are in progress.
Performance of Transit Model Fitting in Processing Four Years of Kepler Science Data
NASA Astrophysics Data System (ADS)
Li, Jie; Burke, Christopher J.; Jenkins, Jon Michael; Quintana, Elisa V.; Rowe, Jason; Seader, Shawn; Tenenbaum, Peter; Twicken, Joseph D.
2014-06-01
We present transit model fitting performance of the Kepler Science Operations Center (SOC) Pipeline in processing four years of science data, which were collected by the Kepler spacecraft from May 13, 2009 to May 12, 2013. Threshold Crossing Events (TCEs), which represent transiting planet detections, are generated by the Transiting Planet Search (TPS) component of the pipeline and subsequently processed in the Data Validation (DV) component. The transit model is used in DV to fit TCEs and derive parameters that are used in various diagnostic tests to validate planetary candidates. The standard transit model includes five fit parameters: transit epoch time (i.e. central time of first transit), orbital period, impact parameter, ratio of planet radius to star radius and ratio of semi-major axis to star radius. In the latest Kepler SOC pipeline codebase, the light curve of the target for which a TCE is generated is initially fitted by a trapezoidal model with four parameters: transit epoch time, depth, duration and ingress time. The trapezoidal model fit, implemented with repeated Levenberg-Marquardt minimization, provides a quick and high-fidelity assessment of the transit signal. The fit parameters of the trapezoidal model with the minimum chi-square metric are converted to set initial values of the fit parameters of the standard transit model. Additional parameters, such as the equilibrium temperature and effective stellar flux of the planet candidate, are derived from the fit parameters of the standard transit model to characterize pipeline candidates for the search of Earth-size planets in the Habitable Zone. The uncertainties of all derived parameters are updated in the latest codebase to take into account the propagated errors of the fit parameters as well as the uncertainties in stellar parameters. The results of the transit model fitting of the TCEs identified by the Kepler SOC Pipeline, including fitted and derived parameters, fit goodness metrics and diagnostic figures, are included in the DV report and one-page report summary, which are accessible by the science community at the NASA Exoplanet Archive. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
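A hedged sketch of the trapezoidal-fit step is given below; the synthetic folded light curve, the exact parameterization of the trapezoid and the use of SciPy's Levenberg-Marquardt least squares are illustrative assumptions of ours, not the SOC pipeline code.

```python
# Illustrative sketch (not the Kepler SOC code): fit a 4-parameter trapezoid
# (epoch, depth, duration, ingress time) to a folded light-curve segment with
# Levenberg-Marquardt-style least squares.
import numpy as np
from scipy.optimize import least_squares

def trapezoid(t, epoch, depth, duration, ingress):
    """Unit-baseline flux minus a trapezoidal dip centred on `epoch`."""
    ingress = max(abs(ingress), 1e-6)        # guard against degenerate steps
    x = np.abs(t - epoch)
    flat = 0.5 * duration - ingress          # half-width of the flat bottom
    dip = np.where(x <= flat, depth,
                   np.where(x <= 0.5 * duration,
                            depth * (0.5 * duration - x) / ingress, 0.0))
    return 1.0 - dip

rng = np.random.default_rng(2)
t = np.linspace(-0.5, 0.5, 400)                     # days from expected transit
truth = (0.01, 4e-4, 0.25, 0.03)                    # epoch, depth, duration, ingress
flux = trapezoid(t, *truth) + 1e-4 * rng.normal(size=t.size)

fit = least_squares(lambda p: trapezoid(t, *p) - flux,
                    x0=(0.0, 3e-4, 0.2, 0.05), method="lm")
print("fitted epoch, depth, duration, ingress:", np.round(fit.x, 4))
```

In the pipeline the best trapezoid parameters seed the physical five-parameter transit model; the quick trapezoid fit is what makes that initialization cheap and robust.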
Study of target and non-target interplay in spatial attention task.
Sweeti; Joshi, Deepak; Panigrahi, B K; Anand, Sneh; Santhosh, Jayasree
2018-02-01
Selective visual attention is the ability to selectively pay attention to targets while inhibiting distractors. This paper aims to study the interplay of targets and non-targets in a spatial attention task in which the subject attends to the target object present in one visual hemifield and ignores the distractor present in the other visual hemifield. The paper presents averaged evoked response potential (ERP) analysis and time-frequency analysis. The ERP analysis supports left-hemisphere superiority in the late potentials for targets present in the right visual hemifield. The time-frequency analysis involves two parameters, i.e. event-related spectral perturbation (ERSP) and inter-trial coherence (ITC). These parameters show the same properties for the target present in either of the visual hemifields but show a difference when comparing the activity corresponding to targets and non-targets. In this way, this study helps to visualise the difference between targets present in the left and right visual hemifields, and also between the targets and non-targets present in the left and right visual hemifields. These results could be utilised to monitor subjects' performance in brain-computer interfaces (BCI) and neurorehabilitation.
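The two time-frequency parameters can be sketched as follows; the synthetic trials, the single-frequency Morlet wavelet and the baseline convention are our assumptions (the standard recipe), not the authors' processing pipeline.

```python
# Hedged sketch of the usual ERSP / ITC definitions at a single frequency,
# using a Morlet wavelet on synthetic single-trial EEG (invented data; this is
# the standard recipe, not the authors' exact pipeline).
import numpy as np

fs, f0, n_cycles = 250.0, 10.0, 7          # sampling rate, frequency, wavelet cycles
t = np.arange(-0.5, 1.0, 1.0 / fs)
rng = np.random.default_rng(3)

# 40 trials: a phase-locked 10 Hz burst after stimulus onset plus noise
trials = np.array([np.where(t > 0, np.sin(2 * np.pi * f0 * t), 0.0)
                   + 0.8 * rng.normal(size=t.size) for _ in range(40)])

# Complex Morlet wavelet at f0
sigma = n_cycles / (2 * np.pi * f0)
wt = np.arange(-4 * sigma, 4 * sigma, 1.0 / fs)
wavelet = np.exp(2j * np.pi * f0 * wt) * np.exp(-wt**2 / (2 * sigma**2))
wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))

tf = np.array([np.convolve(tr, wavelet, mode="same") for tr in trials])
power = np.abs(tf) ** 2
baseline = power[:, t < 0].mean()
ersp = 10 * np.log10(power.mean(axis=0) / baseline)     # dB change vs baseline
itc = np.abs(np.mean(tf / np.abs(tf), axis=0))          # inter-trial coherence

print("peak ERSP (dB):", round(ersp.max(), 1), " peak ITC:", round(itc.max(), 2))
```

ERSP tracks power changes regardless of phase, while ITC is high only when the oscillation is phase-locked across trials, which is why the two parameters can dissociate targets from non-targets.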
Measuring Rates of Herbicide Metabolism in Dicot Weeds with an Excised Leaf Assay
Ma, Rong; Skelton, Joshua J.; Riechers, Dean E.
2015-01-01
In order to isolate and accurately determine rates of herbicide metabolism in an obligate-outcrossing dicot weed, waterhemp (Amaranthus tuberculatus), we developed an excised leaf assay combined with a vegetative cloning strategy to normalize herbicide uptake and remove translocation as contributing factors in herbicide-resistant (R) and –sensitive (S) waterhemp populations. Biokinetic analyses of organic pesticides in plants typically include the determination of uptake, translocation (delivery to the target site), metabolic fate, and interactions with the target site. Herbicide metabolism is an important parameter to measure in herbicide-resistant weeds and herbicide-tolerant crops, and is typically accomplished with whole-plant tests using radiolabeled herbicides. However, one difficulty with interpreting biokinetic parameters derived from whole-plant methods is that translocation is often affected by rates of herbicide metabolism, since polar metabolites are usually not mobile within the plant following herbicide detoxification reactions. Advantages of the protocol described in this manuscript include reproducible, accurate, and rapid determination of herbicide degradation rates in R and S populations, a substantial decrease in the amount of radiolabeled herbicide consumed, a large reduction in radiolabeled plant materials requiring further handling and disposal, and the ability to perform radiolabeled herbicide experiments in the lab or growth chamber instead of a greenhouse. As herbicide resistance continues to develop and spread in dicot weed populations worldwide, the excised leaf assay method developed and described herein will provide an invaluable technique for investigating non-target site-based resistance due to enhanced rates of herbicide metabolism and detoxification. PMID:26383604
NASA Astrophysics Data System (ADS)
Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez
2014-03-01
Soft computing techniques have recently become very popular in the oil industry. A number of computational intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities. Some of the popular methods include feed-forward neural networks, radial basis function networks, generalized regression neural networks, functional networks, support vector regression and adaptive network fuzzy inference systems. A comparative study among the most popular soft computing techniques is presented using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained using mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in a recent reservoir characterization workflow ensures consistency between micro- and macro-scale information represented mainly by Thomeer parameters and absolute permeability. The dataset was divided into two parts, with 80% of the data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step and to show better correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were created. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error and root mean square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.
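The workflow described (80/20 split, logarithmic transform of the permeability target, feed-forward network, error measures) can be sketched as below; the data are synthetic placeholders and scikit-learn is our stand-in, since the abstract does not name the software used.

```python
# Workflow sketch only: synthetic placeholder data and scikit-learn stand-ins
# (the abstract does not name the tools). 80/20 split, log-transformed
# permeability target, feed-forward neural network, and a simple error measure.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n = 500
X = np.column_stack([
    rng.uniform(0.05, 0.30, n),    # air porosity (fraction)
    rng.uniform(2.60, 2.90, n),    # grain density (g/cm^3)
    rng.uniform(0.1, 10.0, n),     # stand-in for a Thomeer parameter
])
log_k = -2.0 + 12.0 * X[:, 0] + 0.2 * rng.normal(size=n)   # fake log10(permeability)

X_tr, X_te, y_tr, y_te = train_test_split(X, log_k, test_size=0.2, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=5000, random_state=0),
).fit(X_tr, y_tr)

rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
print(f"RMSE on log10(k): {rmse:.2f}")
```

Working in log10(permeability) keeps the target roughly homoscedastic across several orders of magnitude, which is the rationale for the pre-processing step mentioned above.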
Key parameters design of an aerial target detection system on a space-based platform
NASA Astrophysics Data System (ADS)
Zhu, Hanlu; Li, Yejin; Hu, Tingliang; Rao, Peng
2018-02-01
To ensure the flight safety of aircraft and avoid recurrence of aircraft collisions, a multi-information fusion method is proposed to design the key parameters for aircraft target detection on a space-based platform. The key parameters of detection waveband and spatial resolution were determined using the target-background absolute contrast, the target-background relative contrast, and the signal-to-clutter ratio. This study also presents the signal-to-interference ratio for analyzing system performance. Key parameters are obtained through the simulation of a specific aircraft. The simulation results show that the boundary ground sampling distance is 30 m and 35 m in the mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) bands, respectively, for most aircraft detection, and that the most reasonable detection wavebands are 3.4 to 4.2 μm and 4.35 to 4.5 μm in the MWIR band and 9.2 to 9.8 μm in the LWIR band. We also found that the direction of detection has a great impact on the detection efficiency, especially in the MWIR bands.
Salopiata, Florian; Depner, Sofia; Wäsch, Marvin; Böhm, Martin E.; Mücke, Oliver; Plass, Christoph; Lehmann, Wolf D.; Kreutz, Clemens; Timmer, Jens; Klingmüller, Ursula
2016-01-01
Lung cancer, with its most prevalent form non-small-cell lung carcinoma (NSCLC), is one of the leading causes of cancer-related deaths worldwide, and is commonly treated with chemotherapeutic drugs such as cisplatin. Lung cancer patients frequently suffer from chemotherapy-induced anemia, which can be treated with erythropoietin (EPO). However, studies have indicated that EPO not only promotes erythropoiesis in hematopoietic cells, but may also enhance survival of NSCLC cells. Here, we verified that the NSCLC cell line H838 expresses functional erythropoietin receptors (EPOR) and that treatment with EPO reduces cisplatin-induced apoptosis. To pinpoint differences in EPO-induced survival signaling in erythroid progenitor cells (CFU-E, colony forming unit-erythroid) and H838 cells, we combined mathematical modeling with a method for feature selection, the L1 regularization. Utilizing an example model and simulated data, we demonstrated that this approach enables the accurate identification and quantification of cell type-specific parameters. We applied our strategy to quantitative time-resolved data of EPO-induced JAK/STAT signaling generated by quantitative immunoblotting, mass spectrometry and quantitative real-time PCR (qRT-PCR) in CFU-E and H838 cells as well as H838 cells overexpressing human EPOR (H838-HA-hEPOR). The established parsimonious mathematical model was able to simultaneously describe the data sets of CFU-E, H838 and H838-HA-hEPOR cells. Seven cell type-specific parameters were identified that included for example parameters for nuclear translocation of STAT5 and target gene induction. Cell type-specific differences in target gene induction were experimentally validated by qRT-PCR experiments. The systematic identification of pathway differences and sensitivities of EPOR signaling in CFU-E and H838 cells revealed potential targets for intervention to selectively inhibit EPO-induced signaling in the tumor cells but leave the responses in erythroid progenitor cells unaffected. Thus, the proposed modeling strategy can be employed as a general procedure to identify cell type-specific parameters and to recommend treatment strategies for the selective targeting of specific cell types. PMID:27494133
Glandular radiation dose in tomosynthesis of the breast using tungsten targets.
Sechopoulos, Ioannis; D'Orsi, Carl J
2008-10-24
With the advent of new detector technology, digital tomosynthesis imaging of the breast has, in the past few years, become a technique intensely investigated as a replacement for planar mammography. As with all other x-ray-based imaging methods, radiation dose is of utmost concern in the development of this new imaging technology. For virtually all development and optimization studies, knowledge of the radiation dose involved in an imaging protocol is necessary. A previous study characterized the normalized glandular dose in tomosynthesis imaging and its variation with various breast and imaging system parameters. This characterization was performed with x-ray spectra generated by molybdenum and rhodium targets. In the recent past, many preliminary patient studies of tomosynthesis imaging have been reported in which the x-ray spectra were generated with x-ray tubes with tungsten targets. The differences in x-ray distribution among spectra from these target materials make the computation of new normalized glandular dose values for tungsten target spectra necessary. In this study we used previously obtained monochromatic normalized glandular dose results to obtain spectral results for twelve different tungsten target x-ray spectra. For each imaging condition, two separate values were computed: the normalized glandular dose for the zero degree projection angle (DgN0), and the ratio of the glandular dose for non-zero projection angles to the glandular dose for the zero degree projection (the relative glandular dose, RGD(alpha)). It was found that DgN0 is higher for tungsten target x-ray spectra when compared with DgN0 values for molybdenum and rhodium target spectra of both equivalent tube voltage and first half value layer. Therefore, the DgN0 for the twelve tungsten target x-ray spectra and different breast compositions and compressed breast thicknesses simulated are reported. The RGD(alpha) values for the tungsten spectra vary with the parameters studied in a similar manner to that found for the molybdenum and rhodium target spectra. The surface fit equations and the fit coefficients for RGD(alpha) included in the previous study were also found to be appropriate for the tungsten spectra.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Xingyuan; He, Zhili; Zhou, Jizhong
2005-10-30
The oligonucleotide specificity for microarray hybridization can be predicted by its sequence identity to non-targets, continuous stretch to non-targets, and/or binding free energy to non-targets. Most currently available programs only use one or two of these criteria, which may choose 'false' specific oligonucleotides or miss 'true' optimal probes in a considerable proportion. We have developed a software tool, called CommOligo, using new algorithms and all three criteria for selection of optimal oligonucleotide probes. A series of filters, including sequence identity, free energy, continuous stretch, GC content, self-annealing, distance to the 3'-untranslated region (3'-UTR) and melting temperature (Tm), are used to check each possible oligonucleotide. A sequence identity is calculated based on gapped global alignments. A traversal algorithm is used to generate alignments for free energy calculation. The optimal Tm interval is determined based on probe candidates that have passed all other filters. Final probes are picked using a combination of user-configurable piece-wise linear functions and an iterative process. The thresholds for identity, stretch and free energy filters are automatically determined from experimental data by an accessory software tool, CommOligo_PE (CommOligo Parameter Estimator). The program was used to design probes for both whole-genome and highly homologous sequence data. CommOligo and CommOligo_PE are freely available to academic users upon request.
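A toy filter chain in the spirit of the criteria listed above might look as follows; the helper functions, thresholds, and sequences are hypothetical placeholders, not the actual CommOligo implementation.

```python
# Illustrative multi-criteria probe filter chain (hypothetical, not CommOligo code).
def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def max_identity_to_nontargets(seq, nontargets):
    # crude stand-in for the gapped global alignment identity used by CommOligo
    def identity(a, b):
        n = min(len(a), len(b))
        return sum(x == y for x, y in zip(a[:n], b[:n])) / n
    return max((identity(seq, nt) for nt in nontargets), default=0.0)

def passes_filters(seq, nontargets, max_identity=0.85, gc_range=(0.40, 0.60)):
    checks = [
        gc_range[0] <= gc_content(seq) <= gc_range[1],
        max_identity_to_nontargets(seq, nontargets) <= max_identity,
        # a real tool adds free energy, continuous stretch, Tm, self-annealing ...
    ]
    return all(checks)

candidates = ["ATGCGTACGTTAGCATGCAA", "GGGGGGGGGGCCCCCCCCCC", "ATCGATCGATCGATCGATCG"]
nontargets = ["ATGCGTACGTTAGCATGCAT"]
print([passes_filters(c, nontargets) for c in candidates])  # -> [False, False, True]
```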
Analogical and category-based inference: a theoretical integration with Bayesian causal models.
Holyoak, Keith J; Lee, Hee Seung; Lu, Hongjing
2010-11-01
A fundamental issue for theories of human induction is to specify constraints on potential inferences. For inferences based on shared category membership, an analogy, and/or a relational schema, it appears that the basic goal of induction is to make accurate and goal-relevant inferences that are sensitive to uncertainty. People can use source information at various levels of abstraction (including both specific instances and more general categories), coupled with prior causal knowledge, to build a causal model for a target situation, which in turn constrains inferences about the target. We propose a computational theory in the framework of Bayesian inference and test its predictions (parameter-free for the cases we consider) in a series of experiments in which people were asked to assess the probabilities of various causal predictions and attributions about a target on the basis of source knowledge about generative and preventive causes. The theory proved successful in accounting for systematic patterns of judgments about interrelated types of causal inferences, including evidence that analogical inferences are partially dissociable from overall mapping quality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ammigan, Kavin; et al.
The RaDIATE collaboration (Radiation Damage In Accelerator Target Environments) was founded in 2012 to bring together the high-energy accelerator target and nuclear materials communities to address the challenging issue of radiation damage effects in beam-intercepting materials. Success of current and future high intensity accelerator target facilities requires a fundamental understanding of these effects including measurement of materials property data. Toward this goal, the RaDIATE collaboration organized and carried out a materials irradiation run at the Brookhaven Linac Isotope Producer facility (BLIP). The experiment utilized a 181 MeV proton beam to irradiate several capsules, each containing many candidate material samples for various accelerator components. Materials included various grades/alloys of beryllium, graphite, silicon, iridium, titanium, TZM, CuCrZr, and aluminum. Attainable peak damage from an 8-week irradiation run ranges from 0.03 DPA (Be) to 7 DPA (Ir). Helium production is expected to range from 5 appm/DPA (Ir) to 3,000 appm/DPA (Be). The motivation, experimental parameters, as well as the post-irradiation examination plans of this experiment are described.
Electron impact ionization of cycloalkanes, aldehydes, and ketones
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, Dhanoj; Antony, Bobby, E-mail: bka.ism@gmail.com
The theoretical calculations of electron impact total ionization cross section for cycloalkane, aldehyde, and ketone group molecules are undertaken from ionization threshold to 2 keV. The present calculations are based on the spherical complex optical potential formalism and the complex scattering potential ionization contribution method. The results for most of the targets studied compare fairly well with recent measurements, wherever available, and the cross sections for many targets are predicted for the first time. The correlation between the peak of the ionization cross sections and the number of target electrons and other target parameters is also reported. It was found that the cross sections at their maximum depend linearly on the number of target electrons and on the other target parameters, confirming the consistency of the values reported here.
Optimal directed searches for continuous gravitational waves
NASA Astrophysics Data System (ADS)
Ming, Jing; Krishnan, Badri; Papa, Maria Alessandra; Aulbert, Carsten; Fehrmann, Henning
2016-03-01
Wide parameter space searches for long-lived continuous gravitational wave signals are computationally limited. It is therefore critically important that the available computational resources are used rationally. In this paper we consider directed searches, i.e., targets for which the sky position is known accurately but the frequency and spin-down parameters are completely unknown. Given a list of such potential astrophysical targets, we therefore need to prioritize: On which target(s) should we spend scarce computing resources? What parameter space region in frequency and spin-down should we search through? Finally, what is the optimal search setup that we should use? In this paper we present a general framework that allows us to solve all three of these problems. This framework is based on maximizing the probability of making a detection subject to a constraint on the maximum available computational cost. We illustrate the method for a simplified problem.
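The optimization problem sketched in the abstract, maximizing detection probability subject to a computing-cost cap, can be illustrated with a toy grid search; the cost and sensitivity models below are invented stand-ins, not the paper's formulas.

```python
# Toy stand-in for constrained search-setup optimization: choose a coherent
# segment length and spin-down band that maximize a surrogate detection
# probability under a fixed computing budget. All models are hypothetical.
import itertools

BUDGET = 1.0e6            # arbitrary cost units

def cost(t_coh, band):
    return 2.0 * t_coh**2 * band          # toy scaling of template count

def detection_prob(t_coh, band):
    # sensitivity grows with coherence time, usefulness grows with covered band
    return (1.0 - 1.0 / (1.0 + 0.01 * t_coh)) * min(1.0, band / 50.0)

best = max(
    ((t, b) for t, b in itertools.product(range(10, 500, 10), range(5, 100, 5))
     if cost(t, b) <= BUDGET),
    key=lambda tb: detection_prob(*tb),
)
print("best setup (t_coh, band):", best, "p_det:", round(detection_prob(*best), 3))
```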
Dasgupta, Nilanjan; Carin, Lawrence
2005-04-01
Time-reversal imaging (TRI) is analogous to matched-field processing, although TRI is typically very wideband and is appropriate for subsequent target classification (in addition to localization). Time-reversal techniques, as applied to acoustic target classification, are highly sensitive to channel mismatch. Hence, it is crucial to estimate the channel parameters before time-reversal imaging is performed. The channel-parameter statistics are estimated here by applying a geoacoustic inversion technique based on Gibbs sampling. The maximum a posteriori (MAP) estimate of the channel parameters is then used to perform time-reversal imaging. Time-reversal implementation requires a fast forward model, implemented here by a normal-mode framework. In addition to imaging, extraction of features from the time-reversed images is explored, with these applied to subsequent target classification. The classification of time-reversed signatures is performed by the relevance vector machine (RVM). The efficacy of the technique is analyzed on simulated in-channel data generated by a free-field finite element method (FEM) code, in conjunction with a channel propagation model, wherein the final classification performance is demonstrated to be relatively insensitive to the associated channel parameters. The underlying theory of Gibbs sampling and TRI is presented along with the feature extraction and target classification via the RVM.
Jin, Gaowa; Guo, Zhimou; Xiao, Yuansheng; Yan, Jingyu; Dong, Xuefang; Shen, Aijin; Wang, Chaoran; Liang, Xinmiao
2016-10-01
A practical method was established for the definition of chromatographic parameters in preparative liquid chromatography. The parameters describe both the peak broadening level under different amounts of sample loading and the concentration distribution of the target compound in the elution. The peak broadening level was defined and expressed as a matrix consisting of the sample loading, the forward broadening level and the backward broadening level. The concentration distribution of the target compound was described by a heat map of the elution profile. The most suitable stationary phase should exhibit the least peak broadening, ideally broadening to both sides relative to the peak obtained under analytical conditions. In addition, the concentration distribution of the target compound should be focused in the middle of the elution. The guiding principles were validated by purification of amitriptyline from a mixture of desipramine and amitriptyline. On the selected column, when the content of the impurity desipramine was lower than 0.1%, the recovery of the target compound was much higher than on the other columns, even when the sample loading was as high as 8.03 mg/cm³. The parameters and methods can be used for the evaluation and selection of stationary phases in preparative chromatography. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Targeted therapy using nanotechnology: focus on cancer
Sanna, Vanna; Pala, Nicolino; Sechi, Mario
2014-01-01
Recent advances in nanotechnology and biotechnology have contributed to the development of engineered nanoscale materials as innovative prototypes to be used for biomedical applications and optimized therapy. Due to their unique features, including a large surface area, structural properties, and a long circulation time in blood compared with small molecules, a plethora of nanomaterials has been developed, with the potential to revolutionize the diagnosis and treatment of several diseases, in particular by improving the sensitivity and recognition ability of imaging contrast agents and by selectively directing bioactive agents to biological targets. Focusing on cancer, promising nanoprototypes have been designed to overcome the lack of specificity of conventional chemotherapeutic agents, as well as for early detection of precancerous and malignant lesions. However, several obstacles, including difficulty in achieving the optimal combination of physicochemical parameters for tumor targeting, evading particle clearance mechanisms, and controlling drug release, prevent the translation of nanomedicines into therapy. In spite of this, recent efforts have been focused on developing functionalized nanoparticles for delivery of therapeutic agents to specific molecular targets overexpressed on different cancer cells. In particular, the combination of targeted and controlled-release polymer nanotechnologies has resulted in a new programmable nanotherapeutic formulation of docetaxel, namely BIND-014, which recently entered Phase II clinical testing for patients with solid tumors. BIND-014 has been developed to overcome the limitations facing delivery of nanoparticles to many neoplasms, and represents a validated example of targeted nanosystems with the optimal biophysicochemical properties needed for successful tumor eradication. PMID:24531078
NASA Astrophysics Data System (ADS)
Vasconcelos, Ivan; Ozmen, Neslihan; van der Neut, Joost; Cui, Tianci
2017-04-01
Travelling wide-bandwidth seismic waves have long been used as a primary tool in exploration seismology because they can probe the subsurface over large distances, while retaining relatively high spatial resolution. The well-known Born resolution limit often seems to be the lower bound on spatial imaging resolution in real life examples. In practice, data acquisition cost, time constraints and other factors can worsen the resolution achieved by wavefield imaging. Could we obtain images whose resolution beats the Born limits? Would it be practical to achieve it, and what are we missing today to achieve this? In this talk, we will cover aspects of linear and nonlinear seismic imaging to understand elements that play a role in obtaining "super-resolved" seismic images. New redatuming techniques, such as the Marchenko method, enable the retrieval of subsurface fields that include multiple scattering interactions, while requiring relatively little knowledge of model parameters. Together with new concepts in imaging, such as Target-Enclosing Extended Images, these new redatuming methods enable new targeted imaging frameworks. We will make a case as to why target-oriented approaches to reconstructing subsurface-domain wavefields from surface data may help in increasing the resolving power of seismic imaging, and in pushing the limits on parameter estimation. We will illustrate this using a field data example. Finally, we will draw connections between seismic and other imaging modalities, and discuss how this framework could be put to use in other applications.
VizieR Online Data Catalog: Binary white dwarfs atmospheric parameters (Gianninas+, 2014)
NASA Astrophysics Data System (ADS)
Gianninas, A.; Dufour, P.; Kilic, M.; Brown, W. R.; Bergeron, P.; Hermes, J. J.
2017-04-01
The sample that we analyze includes a total of 61 ELM WD binaries from the ELM Survey (Brown et al. 2013, J/ApJ/769/66). The bulk of this sample is comprised of the 58 ELM WDs listed in Table 3 of Brown et al. (2013, J/ApJ/769/66), but also includes three additional ELM WDs that have been published in separate papers since then. The spectra of these 61 ELM WDs were obtained using five distinct setups on two different telescopes. A total of 57 targets were observed with the 6.5m MMT telescope with the Blue Channel spectrograph (Schmidt et al. 1989PASP..101..713S). The four remaining targets were observed using the Fred Lawrence Whipple Observatory's (FLWO) 1.5m Tillinghast telescope equipped with the FAST spectrograph (Fabricant et al. 1998PASP..110...79F) and the 600 line/mm grating. (2 data files).
Recent Developments and Applications of the CHARMM force fields
Zhu, Xiao; Lopes, Pedro E.M.; MacKerell, Alexander D.
2011-01-01
Empirical force fields commonly used to describe the condensed phase properties of complex systems such as biological macromolecules are continuously being updated. Improvements in quantum mechanical (QM) methods used to generate target data, availability of new experimental target data, incorporation of new classes of compounds and new theoretical developments (e.g., polarizable methods) make force field development a dynamic domain of research. Accordingly, a number of improvements and extensions of the CHARMM force fields have occurred over the years. The objective of the present review is to provide an up-to-date overview of the CHARMM force fields. A limited presentation on the historical aspects of force fields will be given, including underlying methodologies and principles, along with a brief description of the strategies used for parameter development. This is followed by information on the CHARMM additive and polarizable force fields, including examples of recent applications of those force fields. PMID:23066428
Properties of targeted preamplification in DNA and cDNA quantification.
Andersson, Daniel; Akrap, Nina; Svec, David; Godfrey, Tony E; Kubista, Mikael; Landberg, Göran; Ståhlberg, Anders
2015-01-01
Quantification of small numbers of molecules often requires preamplification to generate enough copies for accurate downstream enumeration. Here, we studied experimental parameters in targeted preamplification and their effects on downstream quantitative real-time PCR (qPCR). To evaluate different strategies, we monitored the preamplification reaction in real-time using SYBR Green detection chemistry followed by melting curve analysis. Furthermore, individual targets were evaluated by qPCR. The preamplification reaction performed best when a large number of primer pairs was included in the primer pool. In addition, preamplification efficiency, reproducibility and specificity were found to depend on the number of template molecules present, primer concentration, annealing time and annealing temperature. The amount of nonspecific PCR products could also be reduced about 1000-fold using bovine serum albumin, glycerol and formamide in the preamplification. On the basis of our findings, we provide recommendations on how to perform robust and highly accurate targeted preamplification in combination with qPCR or next-generation sequencing.
Samad, Noor Asma Fazli Abdul; Sin, Gürkan; Gernaey, Krist V; Gani, Rafiqul
2013-11-01
This paper presents the application of uncertainty and sensitivity analysis as part of a systematic model-based process monitoring and control (PAT) system design framework for crystallization processes. For the uncertainty analysis, the Monte Carlo procedure is used to propagate input uncertainty, while for sensitivity analysis, global methods including the standardized regression coefficients (SRC) and Morris screening are used to identify the most significant parameters. The potassium dihydrogen phosphate (KDP) crystallization process is used as a case study, both in open-loop and closed-loop operation. In the uncertainty analysis, the impact on the predicted output of uncertain parameters related to the nucleation and the crystal growth model has been investigated for both a one- and two-dimensional crystal size distribution (CSD). The open-loop results show that the input uncertainties lead to significant uncertainties on the CSD, with appearance of a secondary peak due to secondary nucleation for both cases. The sensitivity analysis indicated that the most important parameters affecting the CSDs are nucleation order and growth order constants. In the proposed PAT system design (closed-loop), the target CSD variability was successfully reduced compared to the open-loop case, also when considering uncertainty in nucleation and crystal growth model parameters. The latter forms a strong indication of the robustness of the proposed PAT system design in achieving the target CSD and encourages its transfer to full-scale implementation. Copyright © 2013 Elsevier B.V. All rights reserved.
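The two analysis steps named above, Monte Carlo propagation of input uncertainty and standardized regression coefficients (SRC) for sensitivity ranking, are illustrated by the following sketch; the crystallizer surrogate model and parameter ranges are hypothetical, not the KDP model of the study.

```python
# Minimal sketch of Monte Carlo uncertainty propagation plus SRC sensitivity.
import numpy as np

rng = np.random.default_rng(1)
N = 2000

# sample uncertain kinetic parameters (toy ranges, not KDP values)
nucleation_order = rng.normal(2.0, 0.2, N)
growth_order     = rng.normal(1.5, 0.15, N)
growth_rate      = rng.normal(1.0, 0.1, N)

# toy model output: mean crystal size as a nonlinear function of the inputs
mean_size = 100.0 * growth_rate**growth_order / nucleation_order

# uncertainty analysis: summarize the propagated output distribution
print("mean size: %.1f +/- %.1f" % (mean_size.mean(), mean_size.std()))

# sensitivity analysis: SRCs from a standardized linear regression
X = np.column_stack([nucleation_order, growth_order, growth_rate])
Xs = (X - X.mean(0)) / X.std(0)
ys = (mean_size - mean_size.mean()) / mean_size.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
for name, beta in zip(["nucleation order", "growth order", "growth rate"], src):
    print(f"SRC({name}) = {beta: .2f}")
```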
Tsimihodimos, Vasilis; Kostapanos, Michael S.; Moulis, Alexandros; Nikas, Nikos; Elisaf, Moses S.
2015-01-01
Objectives: To investigate the effect of benchmarking on the quality of type 2 diabetes (T2DM) care in Greece. Methods: The OPTIMISE (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment) study [ClinicalTrials.gov identifier: NCT00681850] was an international multicenter, prospective cohort study. It included physicians randomized 3:1 to either receive benchmarking for glycated hemoglobin (HbA1c), systolic blood pressure (SBP) and low-density lipoprotein cholesterol (LDL-C) treatment targets (benchmarking group) or not (control group). The proportions of patients achieving the targets of the above-mentioned parameters were compared between groups after 12 months of treatment. Also, the proportions of patients achieving those targets at 12 months were compared with baseline in the benchmarking group. Results: In the Greek region, the OPTIMISE study included 797 adults with T2DM (570 in the benchmarking group). At month 12 the proportion of patients within the predefined targets for SBP and LDL-C was greater in the benchmarking compared with the control group (50.6 versus 35.8%, and 45.3 versus 36.1%, respectively). However, these differences were not statistically significant. No difference between groups was noted in the percentage of patients achieving the predefined target for HbA1c. At month 12 the increase in the percentage of patients achieving all three targets was greater in the benchmarking (5.9–15.0%) than in the control group (2.7–8.1%). In the benchmarking group more patients were on target regarding SBP (50.6% versus 29.8%), LDL-C (45.3% versus 31.3%) and HbA1c (63.8% versus 51.2%) at 12 months compared with baseline (p < 0.001 for all comparisons). Conclusion: Benchmarking may comprise a promising tool for improving the quality of T2DM care. Nevertheless, target achievement rates of each, and of all three, quality indicators were suboptimal, indicating there are still unmet needs in the management of T2DM. PMID:26445642
Feng, Qishuai; Shen, Yajing; Fu, Yingjie; Muroski, Megan E.; Zhang, Peng; Wang, Qiaoyue; Xu, Chang; Lesniak, Maciej S.; Li, Gang; Cheng, Yu
2017-01-01
Inorganic nanoparticles with unique physical properties have been explored as nanomedicines for brain tumor treatment. However, the clinical applications of the inorganic formulations are often hindered by the biological barriers and failure to be bioeliminated. The size of the nanoparticle is an essential design parameter which plays a significant role to affect the tumor targeting and biodistribution. Here, we report a feasible approach for the assembly of gold nanoparticles into ~80 nm nanospheres as a drug delivery platform for enhanced retention in brain tumors with the ability to be dynamically switched into the single formulation for excretion. These nanoassemblies can target epidermal growth factor receptors on cancer cells and are responsive to tumor microenvironmental characteristics, including high vascular permeability and acidic and redox conditions. Anticancer drug release was controlled by a pH-responsive mechanism. Intracellular L-glutathione (GSH) triggered the complete breakdown of nanoassemblies to single gold nanoparticles. Furthermore, in vivo studies have shown that nanospheres display enhanced tumor-targeting efficiency and therapeutic effects relative to single-nanoparticle formulations. Hence, gold nanoassemblies present an effective targeting strategy for brain tumor treatment. PMID:28638474
Clustering analysis of moving target signatures
NASA Astrophysics Data System (ADS)
Martone, Anthony; Ranney, Kenneth; Innocenti, Roberto
2010-04-01
Previously, we developed a moving target indication (MTI) processing approach to detect and track slow-moving targets inside buildings, which successfully detected moving targets (MTs) from data collected by a low-frequency, ultra-wideband radar. Our MTI algorithms include change detection, automatic target detection (ATD), clustering, and tracking. The MTI algorithms can be implemented in a real-time or near-real-time system; however, a person-in-the-loop is needed to select input parameters for the clustering algorithm. Specifically, the number of clusters to input into the cluster algorithm is unknown and requires manual selection. A critical need exists to automate all aspects of the MTI processing formulation. In this paper, we investigate two techniques that automatically determine the number of clusters: the adaptive knee-point (KP) algorithm and the recursive pixel finding (RPF) algorithm. The KP algorithm is based on a well-known heuristic approach for determining the number of clusters. The RPF algorithm is analogous to the image processing, pixel labeling procedure. Both algorithms are used to analyze the false alarm and detection rates of three operational scenarios of personnel walking inside wood and cinderblock buildings.
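A generic version of the knee-point idea for choosing the number of clusters, not the authors' exact adaptive algorithm, could be sketched as follows: run k-means over a range of k and pick the k whose point on the distortion curve lies farthest from the chord joining the curve's endpoints.

```python
# Generic knee-point heuristic for picking the number of clusters (illustrative).
import numpy as np
from scipy.cluster.vq import kmeans

rng = np.random.default_rng(2)
# three well-separated 2-D blobs standing in for detected movers
data = np.vstack([rng.normal(c, 0.3, (50, 2)) for c in ((0, 0), (4, 0), (2, 4))])

ks = np.arange(1, 9)
distortions = np.array([kmeans(data, int(k))[1] for k in ks])

# normalize both axes, then take the k farthest from the chord joining the
# endpoints of the distortion curve
x = (ks - ks[0]) / (ks[-1] - ks[0])
y = (distortions - distortions[-1]) / (distortions[0] - distortions[-1])
chord = np.array([x[-1] - x[0], y[-1] - y[0]])
chord /= np.linalg.norm(chord)
pts = np.column_stack([x - x[0], y - y[0]])
dist_to_chord = np.abs(pts[:, 0] * chord[1] - pts[:, 1] * chord[0])
print("estimated number of clusters:", ks[np.argmax(dist_to_chord)])  # expect 3
```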
Heterogeneity of the gut microbiome in mice: guidelines for optimizing experimental design.
Laukens, Debby; Brinkman, Brigitta M; Raes, Jeroen; De Vos, Martine; Vandenabeele, Peter
2016-01-01
Targeted manipulation of the gut flora is increasingly being recognized as a means to improve human health. Yet, the temporal dynamics and intra- and interindividual heterogeneity of the microbiome represent experimental limitations, especially in human cross-sectional studies. Therefore, rodent models represent an invaluable tool to study the host-microbiota interface. Progress in technical and computational tools to investigate the composition and function of the microbiome has opened a new era of research and we gradually begin to understand the parameters that influence variation of host-associated microbial communities. To isolate true effects from confounding factors, it is essential to include such parameters in model intervention studies. Also, explicit journal instructions to include essential information on animal experiments are mandatory. The purpose of this review is to summarize the factors that influence microbiota composition in mice and to provide guidelines to improve the reproducibility of animal experiments. © FEMS 2015.
Physical and Chemical Strategies for Therapeutic Delivery by Using Polymeric Nanoparticles
Morachis, José M.; Mahmoud, Enas A.
2012-01-01
A significant challenge that most therapeutic agents face is their inability to be delivered effectively. Nanotechnology offers a solution to allow for safe, high-dose, specific delivery of pharmaceuticals to the target tissue. Nanoparticles composed of biodegradable polymers can be designed and engineered with various layers of complexity to achieve drug targeting that was unimaginable years ago by offering multiple mechanisms to encapsulate and strategically deliver drugs, proteins, nucleic acids, or vaccines while improving their therapeutic index. Targeting of nanoparticles to diseased tissue and cells follows two strategies: physical and chemical targeting. Physical targeting is a strategy enabled by nanoparticle fabrication techniques. It includes using size, shape, charge, and stiffness among other parameters to influence tissue accumulation, adhesion, and cell uptake. New methods to measure size, shape, and polydispersity will enable this field to grow and more thorough comparisons to be made. Physical targeting can be more economically viable when certain fabrication techniques are used. Chemical targeting can employ molecular recognition units to decorate the surface of particles or molecular units responsive to diseased environments or remote stimuli. In this review, we describe sophisticated nanoparticles designed for tissue-specific chemical targeting that use conjugation chemistry to attach targeting moieties. Furthermore, we describe chemical targeting using stimuli-responsive nanoparticles that can respond to changes in pH, heat, and light. PMID:22544864
NASA Astrophysics Data System (ADS)
Masoudi, S. Farhad; Rasouli, Fatemeh S.
2015-08-01
Recent studies in BNCT have focused on investigating appropriate neutron sources as alternatives to nuclear reactors. As the most prominent facilities, electron-linac-based photoneutron sources benefit from two consecutive reactions, (e, γ) and (γ, n). The photoneutron sources designed so far are composed of bipartite targets, which involve practical problems and fall short of the objective of an optimized neutron source. This simulation study deals with designing a compact, optimized, and geometrically simple target for a photoneutron source based on an electron linac. Based on a set of MCNPX simulations, tungsten is found to have the potential to serve as both photon converter and photoneutron target. Moreover, it is shown that an optimized dimension for such a target slows down the produced neutrons toward the desired energy range while preserving neutron economy, which makes the recommended criteria for BNCT of deep tumors more attainable. This multi-purpose target does not involve complicated design and can be considered a significant step toward the application of photoneutron sources for in-hospital treatments. To shape the neutron beam emitted from such a target, the beam is passed through an optimized arrangement of moderators, filters, a reflector, and a collimator. Assessed against the recommended in-air parameters, the designed beam provides a high intensity of neutrons in the desired energy range, as well as low background contamination. The last section of this study investigates the performance of the resultant beam in deep tissue. A simulated liver tumor, located within a phantom of the human body, was subjected to irradiation by the designed spectrum. The dosimetric results, including depth-dose curves and in-phantom parameters, show that the proposed configuration achieves an acceptable balance between adequate neutron intensity and deep penetration in tissue within a reasonable treatment time.
Moving target parameter estimation of SAR after two looks cancellation
NASA Astrophysics Data System (ADS)
Gan, Rongbing; Wang, Jianguo; Gao, Xiang
2005-11-01
Moving target detection in synthetic aperture radar (SAR) by two-look cancellation is studied. First, two looks are obtained from the first and second halves of the synthetic aperture. After two-look cancellation, moving targets are retained while stationary targets are removed. A constant false alarm rate (CFAR) detector then detects the moving targets. The ground-range velocity and cross-range velocity of a moving target can be obtained from the position shift between the two looks. We developed a method to estimate the cross-range shift due to slant-range motion, based on the Doppler frequency center (DFC), which is estimated with the Wigner-Ville distribution (WVD). Because the range position and the cross-range position before correction are known, estimation of the DFC is easier and more efficient. Finally, experimental results show that the algorithms perform well and estimate the moving target parameters accurately.
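The core cancellation step can be illustrated on toy one-dimensional profiles: a stationary scatterer occupies the same cell in both looks and cancels, while a mover shifts and survives the subtraction. The numbers and the simple global threshold standing in for the CFAR detector are purely illustrative.

```python
# Schematic two-look cancellation on toy 1-D image profiles.
import numpy as np

n = 64
look1 = np.zeros(n)
look2 = np.zeros(n)

look1[20] = look2[20] = 1.0      # stationary target: same cell in both looks
look1[40], look2[43] = 0.8, 0.8  # moving target: displaced between looks

diff = np.abs(look1 - look2)

# simple global-threshold stand-in for the CFAR detector
threshold = 5.0 * diff.mean()
detections = np.flatnonzero(diff > threshold)
print("detected cells:", detections)       # expect cells 40 and 43
# the shift between the two detections (43 - 40 cells) is the kind of quantity
# the paper converts into ground-range / cross-range velocity estimates
```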
Direct match data flow machine apparatus and process for data driven computing
Davidson, G.S.; Grafe, V.G.
1997-08-12
A data flow computer and method of computing are disclosed which utilize a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory and another status bit indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a "fire" signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor. 11 figs.
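A compact software sketch of the firing rule described in this record, in which an instruction fires only once both operands are present and the result is routed to the stored target address, is given below; mapping the four memories to fields of a Python class is our own simplification of the hardware.

```python
# Illustrative data-driven node: fires when both operands have arrived.
class DataFlowNode:
    def __init__(self, opcode, target):
        self.params = {"A": None, "B": None}   # parameter memory
        self.opcode = opcode                   # opcode memory
        self.target = target                   # target memory (output address)

    def store(self, slot, value):
        self.params[slot] = value              # tag bits ~ "is the slot filled?"
        return self.ready()

    def ready(self):                           # the "fire" (R VALID) condition
        return all(v is not None for v in self.params.values())

    def fire(self):
        a, b = self.params["A"], self.params["B"]
        result = {"add": a + b, "mul": a * b}[self.opcode]
        return self.target, result             # would be pushed to an output FIFO

node = DataFlowNode("add", target=0x2A)
node.store("A", 3)
if node.store("B", 4):                         # second operand arrival triggers firing
    print(node.fire())                         # -> (42, 7)
```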
Data flow machine for data driven computing
Davidson, G.S.; Grafe, V.G.
1988-07-22
A data flow computer and method of computing is disclosed which utilizes a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory and another status bit indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a "fire" signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor. 11 figs.
Data flow machine for data driven computing
Davidson, George S.; Grafe, Victor G.
1995-01-01
A data flow computer and method of computing is disclosed which utilizes a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory and another status bit indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a "fire" signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor.
Direct match data flow machine apparatus and process for data driven computing
Davidson, George S.; Grafe, Victor Gerald
1997-01-01
A data flow computer and method of computing is disclosed which utilizes a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory and another status bit indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a "fire" signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor.
Direct match data flow memory for data driven computing
Davidson, George S.; Grafe, Victor Gerald
1997-01-01
A data flow computer and method of computing is disclosed which utilizes a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory and another status bit indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a "fire" signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor.
Direct match data flow memory for data driven computing
Davidson, G.S.; Grafe, V.G.
1997-10-07
A data flow computer and method of computing is disclosed which utilizes a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory and another status bit indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a "fire" signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor. 11 figs.
Revised scaling laws for asteroid disruptions
NASA Astrophysics Data System (ADS)
Jutzi, M.
2014-07-01
Models for the evolution of small-body populations (e.g., the asteroid main belt) of the solar system compute the time-dependent size and velocity distributions of the objects as a result of both collisional and dynamical processes. A scaling parameter often used in such numerical models is the critical specific impact energy Q^*_D, which results in the escape of half of the target's mass in a collision. The parameter Q^*_D is called the catastrophic impact energy threshold. We present recent improvements of the Smooth Particle Hydrodynamics (SPH) technique (Benz and Asphaug 1995, Jutzi et al. 2008, Jutzi 2014) for the modeling of the disruption of small bodies. Using the improved models, we then systematically study the effects of various target properties (e.g., strength, porosity, and friction) on the outcome of disruptive collisions, and we compute the corresponding Q^*_D curves as a function of target size. For a given specific impact energy and impact angle, the outcome of a collision in terms of Q^*_D does not only depend on the properties of the bodies involved, but also on the impact velocity and the size ratio of target/impactor. Leinhardt and Stewart (2012) proposed scaling laws to predict the outcome of collisions with a wide range of impact velocities (m/s to km/s), target sizes and target/impactor mass ratios. These scaling laws are based on a "principal disruption curve" defined for collisions between equal-sized bodies: Q^*_{RD,γ=1} = c^* (4/5) π ρ G R_{C1}^2, where the parameter c^* is a measure of the dissipation of energy within the target, R_{C1} is the radius of a body with the combined mass of target and projectile and a density ρ = 1000 kg/m^3, and γ is the mass ratio. The dissipation parameter c^* is proposed to be 5±2 for bodies with strength and 1.9±0.3 for hydrodynamic bodies (Leinhardt and Stewart 2012). We will present values for c^* based on our SPH simulations using various target properties and impact conditions. We will also discuss the validity of the principal disruption curve (with a single parameter c^*) for a wide range of sizes and impact velocities. Our preliminary results indicate that for a given target, c^* can vary significantly (by a factor of ~10) as the impact velocity changes from subsonic to supersonic.
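As a quick numerical reading of the principal disruption curve quoted above, the sketch below evaluates Q^*_{RD,γ=1} = c^* (4/5) π ρ G R_{C1}^2 for a few combined radii, using the hydrodynamic value c^* = 1.9 cited in the text; the chosen radii are arbitrary examples.

```python
# Evaluate the principal disruption curve quoted in the abstract for sample radii.
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
rho = 1000.0         # kg m^-3, the reference density used to define R_C1
c_star = 1.9         # hydrodynamic-body value cited in the text

def q_rd(r_c1_m, c_star=c_star):
    return c_star * 0.8 * math.pi * rho * G * r_c1_m**2   # J/kg

for r_km in (1, 10, 100):
    print(f"R_C1 = {r_km:>3} km  ->  Q*_RD = {q_rd(r_km * 1e3):.2e} J/kg")
```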
Wu, Yao; Dai, Xiaodong; Huang, Niu; Zhao, Lifeng
2013-06-05
In force field parameter development using ab initio potential energy surfaces (PES) as target data, an important but often neglected matter is the lack of a weighting scheme with optimal discrimination power to fit the target data. Here, we developed a novel partition function-based weighting scheme, which not only fits the target potential energies exponentially like the general Boltzmann weighting method, but also reduces the effect of fitting errors leading to overfitting. The van der Waals (vdW) parameters of benzene and propane were reparameterized by using the new weighting scheme to fit the high-level ab initio PESs probed by a water molecule in global configurational space. The molecular simulation results indicate that the newly derived parameters are capable of reproducing experimental properties in a broader range of temperatures, which supports the partition function-based weighting scheme. Our simulation results also suggest that structural properties are more sensitive to vdW parameters than partial atomic charge parameters in these systems although the electrostatic interactions are still important in energetic properties. As no prerequisite conditions are required, the partition function-based weighting method may be applied in developing any types of force field parameters. Copyright © 2013 Wiley Periodicals, Inc.
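The general idea of an exponentially decaying, partition-function-normalized weighting of target energies can be sketched as follows; the exact functional form and temperature used by the authors are not reproduced here, and the energies are made up.

```python
# Hedged sketch of Boltzmann-like, partition-function-normalized weights for
# least-squares force-field fitting: high-energy configurations count less.
import numpy as np

kT = 2.5                       # kJ/mol, roughly room temperature (assumption)
e_target = np.array([0.0, 1.0, 3.0, 8.0, 20.0])    # ab initio interaction energies
e_model  = np.array([0.2, 1.4, 2.5, 9.0, 15.0])    # current force-field energies

boltz = np.exp(-(e_target - e_target.min()) / kT)
weights = boltz / boltz.sum()                       # normalized by the "partition function"

weighted_sse = np.sum(weights * (e_model - e_target) ** 2)
print("weights:", np.round(weights, 3))
print("weighted SSE:", round(weighted_sse, 3))
```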
Current techniques for the real-time processing of complex radar signatures
NASA Astrophysics Data System (ADS)
Clay, E.
A real-time processing technique has been developed for the microwave receiver of the Brahms radar station. The method allows such target signatures as the radar cross section (RCS) of the airframes and rotating parts, the one-dimensional tomography of aircraft, and the RCS of electromagnetic decoys to be characterized. The method allows optimization of experimental parameters including the analysis frequency band, the receiver gain, and the wavelength range of EM analysis.
Lightcurve Photometry Opportunities: 2018 April-June
NASA Astrophysics Data System (ADS)
Warner, Brian D.; Harris, Alan W.; Durech, Josef; Benner, Lance A. M.
2018-04-01
We present lists of asteroid photometry opportunities for objects reaching a favorable apparition and having either none or poorly-defined lightcurve parameters. Additional data on these objects will help with shape and spin axis modeling via lightcurve inversion. We also include lists of objects that will be the target of radar observations. Lightcurves for these objects can help constrain pole solutions and/or remove rotation period ambiguities that might not come from using radar data alone.
Lightcurve Photometry Opportunities: 2018 July-September
NASA Astrophysics Data System (ADS)
Warner, Brian D.; Harris, Alan W.; Durech, Josef; Benner, Lance A. M.
2018-07-01
We present lists of asteroid photometry opportunities for objects reaching a favorable apparition and having either none or poorly-defined lightcurve parameters. Additional data on these objects will help with shape and spin axis modeling via lightcurve inversion. We also include lists of objects that will be the target of radar observations. Lightcurves for these objects can help constrain pole solutions and/or remove rotation period ambiguities that might not come from using radar data alone.
Concurrently adjusting interrelated control parameters to achieve optimal engine performance
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2015-12-01
Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.
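A toy feedback loop illustrating the concurrent adjustment of two interrelated control parameters toward a target performance value is shown below; the "engine" response and the coupling factor are invented for illustration and do not represent the patented control laws.

```python
# Toy concurrent adjustment of two coupled control parameters toward a target.
def performance(p1, p2):
    return 2.0 * p1 + 1.5 * p2          # stand-in for the measured engine output

def adjust(p1, p2, target, gain=0.05, coupling=-0.6, steps=200):
    for _ in range(steps):
        error = target - performance(p1, p2)
        dp1 = gain * error              # primary correction
        p1 += dp1
        p2 += coupling * dp1            # interrelated parameter moves with it
    return p1, p2

p1, p2 = adjust(1.0, 1.0, target=10.0)
print(round(p1, 3), round(p2, 3), "->", round(performance(p1, p2), 3))
```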
Social Cognition as Reinforcement Learning: Feedback Modulates Emotion Inference.
Zaki, Jamil; Kallman, Seth; Wimmer, G Elliott; Ochsner, Kevin; Shohamy, Daphna
2016-09-01
Neuroscientific studies of social cognition typically employ paradigms in which perceivers draw single-shot inferences about the internal states of strangers. Real-world social inference features very different parameters: People often encounter and learn about particular social targets (e.g., friends) over time and receive feedback about whether their inferences are correct or incorrect. Here, we examined this process and, more broadly, the intersection between social cognition and reinforcement learning. Perceivers were scanned using fMRI while repeatedly encountering three social targets who produced conflicting visual and verbal emotional cues. Perceivers guessed how targets felt and received feedback about whether they had guessed correctly. Visual cues reliably predicted one target's emotion, verbal cues predicted a second target's emotion, and neither reliably predicted the third target's emotion. Perceivers successfully used this information to update their judgments over time. Furthermore, trial-by-trial learning signals, estimated using two reinforcement learning models, tracked activity in ventral striatum and ventromedial pFC, structures associated with reinforcement learning, and regions associated with updating social impressions, including TPJ. These data suggest that learning about others' emotions, like other forms of feedback learning, relies on domain-general reinforcement mechanisms as well as domain-specific social information processing.
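A minimal Rescorla-Wagner-style sketch of the kind of feedback-driven cue learning described above is given below; the cue structure, learning rate, and choice rule are illustrative assumptions, not the fitted models of the study.

```python
# Prediction-error learning of how much to trust each cue for a social target.
import random
random.seed(0)

alpha = 0.2
trust = {"visual": 0.5, "verbal": 0.5}     # weight given to each cue

def trial(reliable_cue):
    # rely on a cue in proportion to current trust, then update from feedback
    use_visual = random.random() < trust["visual"] / (trust["visual"] + trust["verbal"])
    chosen = "visual" if use_visual else "verbal"
    correct = 1.0 if chosen == reliable_cue else 0.0
    trust[chosen] += alpha * (correct - trust[chosen])     # prediction-error update
    return correct

accuracy = [trial(reliable_cue="visual") for _ in range(200)]
print("trust after learning:", {k: round(v, 2) for k, v in trust.items()})
print("late-block accuracy:", sum(accuracy[-50:]) / 50)
```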
Stereotactic multibeam radiation therapy system in a PACS environment
NASA Astrophysics Data System (ADS)
Fresne, Francoise; Le Gall, G.; Barillot, Christian; Gibaud, Bernard; Manens, Jean-Pierre; Toumoulin, Christine; Lemoine, Didier; Chenal, C.; Scarabin, Jean-Marie
1991-05-01
A multibeam radiation therapy treatment is a non-invasive technique for treating a lesion within the cerebral medium by focusing photon beams on the same target from a large number of entrance points. We present here a computer-assisted dosimetric planning procedure which includes: (1) an analysis module to define the target volume using 2D and 3D displays, (2) a planning module to issue a treatment strategy including the dosimetric simulations, and (3) a treatment module setting up the parameters that drive the robotized treatment system (i.e., the chair-framework and the radiation unit). Another important feature of this system is its connection to the PACS system SIRENE installed in the University Hospital of Rennes, which makes possible the archiving and communication of the multimodal images (CT, MRI, angiography) used by this application. The joint use of stereotactic methods and multimodality imagery ensures spatial coherence and makes the target definition and the knowledge of the surrounding structures more accurate. The dosimetric planning, referenced to the stereotactic frame, guarantees an optimal distribution of the dose computed by an original 3D volumetric algorithm. The robotic approach to the treatment stage consisted of designing a computer-driven chair-framework assembly to position the target volume at the radiation unit isocenter.
NASA Astrophysics Data System (ADS)
Monnin, Carole; Bach, Pierre; Tulle, Pierre Alain; van Rompay, Marc; Ballanger, Anne
2002-03-01
As a neutron tube manufacturer, SODERN is now in charge of manufacturing tritium targets for accelerators, in cooperation with CEA/DAM/DTMN in Valduc. Specific deuterium and tritium targets are manufactured on request, according to the requirements of the users, starting from titanium targets on copper substrates, and going to more sophisticated devices. The range of possible uses is wide, including thin targets for neutron calibration, thick targets with controlled loading of deuterium and tritium, rotating targets or large size rotating targets for higher lifetimes. The activity of the targets ranges from 3.7×10^10 to 3.7×10^13 Bq (1-1000 Ci), the diameter being up to 30 cm. SODERN and the CEA/Valduc centre have developed different technologies for tritium target manufacture, allowing the selection of the best configuration for each kind of use. In order to optimize the production of high energy neutrons, the performance of tritide and deuteride titanium targets made by different processes has been studied experimentally by bombardment with 120 and 350 kV deuterons provided by electrostatic accelerators. It is then possible to optimize either neutron output or lifetime and stability or thermal behaviour. The importance of the deposit evaporation conditions on the efficiency of neutron emission is clearly demonstrated, as well as the thermomechanical stability of the Ti thin film under deuteron bombardment. The main parameters involved in the target performance are discussed from a thermodynamical approach.
Lai, Keke; Kelley, Ken
2011-06-01
In addition to evaluating a structural equation model (SEM) as a whole, often the model parameters are of interest and confidence intervals for those parameters are formed. Given a model with a good overall fit, it is entirely possible for the targeted effects of interest to have very wide confidence intervals, thus giving little information about the magnitude of the population targeted effects. With the goal of obtaining sufficiently narrow confidence intervals for the model parameters of interest, sample size planning methods for SEM are developed from the accuracy in parameter estimation approach. One method plans for the sample size so that the expected confidence interval width is sufficiently narrow. An extended procedure ensures that the obtained confidence interval will be no wider than desired, with some specified degree of assurance. A Monte Carlo simulation study was conducted that verified the effectiveness of the procedures in realistic situations. The methods developed have been implemented in the MBESS package in R so that they can be easily applied by researchers. © 2011 American Psychological Association
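The accuracy-in-parameter-estimation logic, increase N until the expected confidence-interval width for the targeted effect is narrow enough, can be sketched generically as follows; the asymptotic-variance value is a made-up placeholder for what an SEM model would supply, and this is not the MBESS implementation.

```python
# Generic sketch of AIPE-style sample size planning for a target CI width.
from scipy.stats import norm

def expected_ci_width(n, avar=4.0, level=0.95):
    z = norm.ppf(1 - (1 - level) / 2)
    return 2 * z * (avar / n) ** 0.5        # width ~ 2 z * SE(theta_hat)

def planned_n(desired_width, start=10):
    n = start
    while expected_ci_width(n) > desired_width:
        n += 1
    return n

print(planned_n(desired_width=0.2))         # N needed for a CI no wider than 0.2
```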
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mooney, K; Altman, M; Garcia-Ramirez, J
Purpose: Treatment planning guidelines for accelerated partial breast irradiation (APBI) using the strut-adjusted volume implant (SAVI) are inconsistent between the manufacturer and the NSABP B-39/RTOG 0413 protocol. Furthermore, neither set of guidelines accounts for different applicator sizes. The purpose of this work is to establish guidelines specific to the SAVI that are based on clinically achievable dose distributions. Methods: Sixty-two consecutive patients were implanted with a SAVI and prescribed to receive 34 Gy in 10 fractions twice daily using high dose-rate (HDR) Ir-192 brachytherapy. The target (PTV-EVAL) was defined per NSABP. The treatments were planned and evaluated using a combination of dosimetric planning goals provided by the NSABP, the manufacturer, and our prior clinical experience. Parameters evaluated included maximum doses to skin and ribs, and volumes of PTV-EVAL receiving 90%, 95%, 100%, 150%, and 200% of the prescription (V90, etc.). All target parameters were evaluated for correlation with device size using the Pearson correlation coefficient. Revised dosimetric guidelines for target coverage and heterogeneity were determined from this population. Results: Revised guidelines for minimum target coverage (ideal in parentheses): V90≥95%(97%), V95≥90%(95%), V100≥88%(91%). The only dosimetric parameters that were significantly correlated (p<0.05) with device size were V150 and V200. Heterogeneity criteria were revised for the 6-1 Mini/6-1 applicators to V150≤30cc and V200≤15cc, and unchanged for the other sizes. Re-evaluation of patient plans showed 90% (56/62) met the revised minimum guidelines and 76% (47/62) met the ideal guidelines. All patients met our institutional guideline for maximum skin dose, and 56/62 met the guideline for maximum rib dose. Conclusions: We have optimized dosimetric guidelines for the SAVI applicators, and found that implementation of these revised guidelines for SAVI treatment planning yielded target coverage exceeding that required by existing guidelines while preserving heterogeneity constraints and minimizing dose to organs at risk.
Rinfret, Félix; Lambert, France; Youmbissi, Joseph Tchetagni; Arcand, Jean-François; Turcot, Richard; Bessette, Maral Alimardani; Bourque, Solange; Moreau, Vincent; Tousignant, Karine; Deschênes, Diane; Cloutier, Lyne
2018-01-01
The implementation of advanced chronic kidney disease (CKD) multidisciplinary clinics has demonstrated its effectiveness in delaying and even avoiding dialysis for patients with CKD. However, very little has been documented on the management and achievement of targets for a number of parameters in this context. Our goal was to assess our multidisciplinary clinic therapy performance in relation to the targets for hypertension, anemia, and calcium phosphate assessment. A cross-sectional descriptive study was conducted with a cohort including all patients followed up in our multidisciplinary clinic in July 2014. Comorbidity, laboratory, and clinical data were collected and compared with the recommendations of scientific organizations. The cohort included 128 patients, 37.5% of whom were women. Mean follow-up time was 26.6 ± 25.1 months and mean estimated glomerular filtration rate (eGFR) was 14.0 ± 4.7 mL/min/1.73 m². A total of 24.2% of patients with diabetes achieved blood pressure targets of <130/80 mm Hg, while 56.5% of patients without diabetes achieved targets of <140/90 mm Hg. Hemoglobin of patients treated with erythropoiesis-stimulating agents was 100 to 110 g/L in 36.2% of the patients, below 100 for 39.7% of them, and above 110 for 24.1%, whereas 67.2% were within the acceptable limits of 95 to 115 g/L. In addition, 63.4% of patients had a serum phosphate of <1.5 mmol/L, and 90.9% of patients had total serum calcium <2.5 mmol/L. Our study is a single-center study with the majority of our patients being Caucasian. This limits the generalizability of our findings. The control rates of various parameters were satisfactory given the difficult clinical context, but could be optimized. We publish these data in the hope that they are helpful to others engaged in quality improvement in their own programs or more generally.
Splash control of drop impacts with geometric targets.
Juarez, Gabriel; Gastopoulos, Thomai; Zhang, Yibin; Siegel, Michael L; Arratia, Paulo E
2012-02-01
Drop impacts on solid and liquid surfaces exhibit complex dynamics due to the competition of inertial, viscous, and capillary forces. After impact, a liquid lamella develops and expands radially, and under certain conditions, the outer rim breaks up into an irregular arrangement of filaments and secondary droplets. We show experimentally that the lamella expansion and subsequent breakup of the outer rim can be controlled by length scales that are of comparable dimension to the impacting drop diameter. Under identical impact parameters (i.e., fluid properties and impact velocity) we observe unique splashing dynamics by varying the target cross-sectional geometry. These behaviors include (i) geometrically shaped lamellae and (ii) a transition in splashing stability, from regular to irregular splashing. We propose that regular splashes are controlled by the azimuthal perturbations imposed by the target cross-sectional geometry and that irregular splashes are governed by the fastest-growing unstable Plateau-Rayleigh mode.
GPCR homomers and heteromers: a better choice as targets for drug development than GPCR monomers?
Casadó, Vicent; Cortés, Antoni; Mallol, Josefa; Pérez-Capote, Kamil; Ferré, Sergi; Lluis, Carmen; Franco, Rafael; Canela, Enric I
2009-11-01
G protein-coupled receptors (GPCR) are targeted by many therapeutic drugs marketed to fight against a variety of diseases. Selection of novel lead compounds is based on pharmacological parameters obtained assuming that GPCR are monomers. However, many GPCR are expressed as dimers/oligomers. Therefore, drug development may consider GPCR as homo- and hetero-oligomers. A two-state dimer receptor model is now available to understand GPCR operation and to interpret data obtained from drugs interacting with dimers, and even from mixtures of monomers and dimers. Heteromers are distinct entities and therefore a given drug is expected to have different affinities and different efficacies depending on the heteromer. All these concepts would broaden the therapeutic potential of drugs targeting GPCRs, including receptor heteromer-selective drugs with a lower incidence of side effects, and would help identify novel pharmacological profiles using cell models expressing receptor heteromers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, D.; Landsberger, S.; Buchholz, B.
1995-09-01
Recent experimental results on testing and modification of the Cintichem process to allow substitution of low enriched uranium (LEU) for high enriched uranium (HEU) targets are presented in this report. The main focus is on ⁹⁹Mo recovery and purification by its precipitation with α-benzoin oxime. Parameters that were studied include concentrations of nitric and sulfuric acids, partial neutralization of the acids, molybdenum and uranium concentrations, and the ratio of α-benzoin oxime to molybdenum. Decontamination factors for uranium, neptunium, and various fission products were measured. Experiments with tracer levels of irradiated LEU were conducted for testing the ⁹⁹Mo recovery and purification during each step of the Cintichem process. Improving the process with additional processing steps was also attempted. The results indicate that the conversion of molybdenum chemical processing from HEU to LEU targets is possible.
Particle-in-Cell Modeling of Magnetron Sputtering Devices
NASA Astrophysics Data System (ADS)
Cary, John R.; Jenkins, T. G.; Crossette, N.; Stoltz, Peter H.; McGugan, J. M.
2017-10-01
In magnetron sputtering devices, ions arising from the interaction of magnetically trapped electrons with neutral background gas are accelerated via a negative voltage bias to strike a target cathode. Neutral atoms ejected from the target by such collisions then condense on neighboring material surfaces to form a thin coating of target material; a variety of industrial applications which require thin surface coatings are enabled by this plasma vapor deposition technique. In this poster we discuss efforts to simulate various magnetron sputtering devices using the Vorpal PIC code in 2D axisymmetric cylindrical geometry. Field solves are fully self-consistent, and discrete models for sputtering, secondary electron emission, and Monte Carlo collisions are included in the simulations. In addition, the simulated device can be coupled to an external feedback circuit. Erosion/deposition profiles and steady-state plasma parameters are obtained, and modifications due to self-consistency are seen. Computational performance issues are also discussed.
Carvajal, Guido; Roser, David J; Sisson, Scott A; Keegan, Alexandra; Khan, Stuart J
2017-02-01
Chlorine disinfection of biologically treated wastewater is practiced in many locations prior to environmental discharge or beneficial reuse. The effectiveness of chlorine disinfection processes may be influenced by several factors, such as pH, temperature, ionic strength, organic carbon concentration, and suspended solids. We investigated the use of Bayesian multilayer perceptron (BMLP) models as efficient and practical tools for compiling and analysing free chlorine and monochloramine virus disinfection performance as a multivariate problem. Corresponding to their relative susceptibility, Adenovirus 2 was used to assess disinfection by monochloramine and Coxsackievirus B5 was used for free chlorine. A BMLP model was constructed to relate key disinfection conditions (CT, pH, turbidity) to observed Log Reduction Values (LRVs) for these viruses at constant temperature. The models proved to be valuable for incorporating uncertainty in the chlor(am)ination performance estimation and interpolating between operating conditions. Various types of queries could be performed with this model including the identification of target CT for a particular combination of LRV, pH and turbidity. Similarly, it was possible to derive achievable LRVs for combinations of CT, pH and turbidity. These queries yielded probability density functions for the target variable reflecting the uncertainty in the model parameters and variability of the input variables. The disinfection efficacy was greatly impacted by pH and to a lesser extent by turbidity for both types of disinfections. Non-linear relationships were observed between pH and target CT, and turbidity and target CT, with compound effects on target CT also evidenced. This work demonstrated that the use of BMLP models had considerable ability to improve the resolution and understanding of the multivariate relationships between operational parameters and disinfection outcomes for wastewater treatment. Copyright © 2016 Elsevier Ltd. All rights reserved.
Experiment research on infrared targets signature in mid and long IR spectral bands
NASA Astrophysics Data System (ADS)
Wang, Chensheng; Hong, Pu; Lei, Bo; Yue, Song; Zhang, Zhijie; Ren, Tingting
2013-09-01
Since infrared imaging systems play a significant role in military self-defense and fire-control systems, the radiation signature of IR targets has become an important topic in IR imaging applications. IR target signatures can be applied to target identification, especially for small and dim targets, as well as to target IR thermal design. To research and analyze target IR signatures systematically, an experimental campaign was carried out under different backgrounds and conditions. An infrared radiation acquisition system based on a cooled MWIR thermal imager and a cooled LWIR thermal imager was developed to capture digital infrared images, and additional instruments were introduced to provide other parameters. From the original image data and the related parameters for a given scene, the IR signature of the target scene of interest can be calculated. Different backgrounds and targets were measured with this approach, and a comparative analysis is presented in this paper as an example. The experiments validate this approach, which is useful for detection performance evaluation and further target identification research.
Nicolas, Xavier; Djebli, Nassim; Rauch, Clémence; Brunet, Aurélie; Hurbin, Fabrice; Martinez, Jean-Marie; Fabre, David
2018-05-03
Alirocumab, a human monoclonal antibody against proprotein convertase subtilisin/kexin type 9 (PCSK9), significantly lowers low-density lipoprotein cholesterol levels. This analysis aimed to develop and qualify a population pharmacokinetic/pharmacodynamic model for alirocumab based on pooled data obtained from 13 phase I/II/III clinical trials. From a dataset of 2799 individuals (14,346 low-density lipoprotein-cholesterol values), individual pharmacokinetic parameters from the population pharmacokinetic model presented in Part I of this series were used to estimate alirocumab concentrations. As a second step, we then developed the current population pharmacokinetic/pharmacodynamic model using an indirect response model with a Hill coefficient, parameterized with increasing low-density lipoprotein cholesterol elimination, to relate alirocumab concentrations to low-density lipoprotein cholesterol values. The population pharmacokinetic/pharmacodynamic model allowed the characterization of the pharmacokinetic/pharmacodynamic properties of alirocumab in the target population and estimation of individual low-density lipoprotein cholesterol levels and derived pharmacodynamic parameters (the maximum decrease in low-density lipoprotein cholesterol values from baseline and the difference between baseline low-density lipoprotein cholesterol and the pre-dose value before the next alirocumab dose). Significant parameter-covariate relationships were retained in the model, with a total of ten covariates (sex, age, weight, free baseline PCSK9, total time-varying PCSK9, concomitant statin administration, total baseline PCSK9, co-administration of high-dose statins, disease status) included in the final population pharmacokinetic/pharmacodynamic model to explain between-subject variability. Nevertheless, the high number of covariates included in the model did not have a clinically meaningful impact on model-derived pharmacodynamic parameters. This model successfully allowed the characterization of the population pharmacokinetic/pharmacodynamic properties of alirocumab in its target population and the estimation of individual low-density lipoprotein cholesterol levels.
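As a hedged illustration of the indirect-response structure described above (stimulation of low-density lipoprotein cholesterol elimination governed by a Hill function of drug concentration), the sketch below integrates a generic model in Python. The rate constants, Hill parameters, and the mono-exponential concentration profile are placeholders, not the published alirocumab estimates.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative placeholders only; not the published alirocumab estimates.
kin, kout = 10.0, 0.1               # LDL-C production (mg/dL/day) and first-order loss (1/day)
smax, sc50, gamma = 2.0, 5.0, 1.5   # max stimulation of elimination, potency, Hill coefficient

def conc(t):
    # Toy drug concentration profile (mg/L): mono-exponential decay after a dose.
    return 20.0 * np.exp(-0.05 * t)

def indirect_response(t, y):
    ldl = y[0]
    c = conc(t)
    stim = 1.0 + smax * c**gamma / (sc50**gamma + c**gamma)   # stimulated LDL-C elimination
    return [kin - kout * stim * ldl]

ldl0 = kin / kout                     # baseline LDL-C at steady state
sol = solve_ivp(indirect_response, (0.0, 120.0), [ldl0], t_eval=np.linspace(0, 120, 241))
print(f"baseline {ldl0:.1f} mg/dL, nadir {sol.y[0].min():.1f} mg/dL")
```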
Measuring multielectron beam imaging fidelity with a signal-to-noise ratio analysis
NASA Astrophysics Data System (ADS)
Mukhtar, Maseeh; Bunday, Benjamin D.; Quoi, Kathy; Malloy, Matt; Thiel, Brad
2016-07-01
Java Monte Carlo Simulator for Secondary Electrons (JMONSEL) simulations are used to generate expected imaging responses of chosen test cases of patterns and defects with the ability to vary parameters for beam energy, spot size, pixel size, and/or defect material and form factor. The patterns are representative of the design rules for an aggressively scaled FinFET-type design. With these simulated images and resulting shot noise, a signal-to-noise framework is developed, which relates to defect detection probabilities. Additionally, with this infrastructure, the effect of detection-chain noise and frequency-dependent system response can be assessed, allowing the best recipe parameters for multielectron beam inspection validation experiments to be targeted. Ultimately, these results should lead to insights into how such parameters will impact tool design, including necessary doses for defect detection and estimations of scanning speeds for achieving high throughput for high-volume manufacturing.
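A minimal, hedged sketch of the kind of shot-noise signal-to-noise calculation that links pixel counts to detection and false-alarm probabilities is given below; the Gaussian approximation, the threshold rule, and the example counts are assumptions for illustration and are not taken from the JMONSEL study.

```python
import numpy as np
from scipy.stats import norm

def detection_probability(s_defect, s_background, threshold_sigma=3.0):
    """Shot-noise SNR and detection probability for one pixel (Gaussian approximation).

    s_defect / s_background are mean detected electron counts with and without the
    defect; Poisson shot noise is approximated as Gaussian. Illustrative only.
    """
    signal = s_defect - s_background
    noise = np.sqrt(s_defect + s_background)      # combined shot-noise std of the difference
    snr = signal / noise
    # Threshold set threshold_sigma above the background's own noise floor.
    thr = s_background + threshold_sigma * np.sqrt(s_background)
    p_detect = norm.sf(thr, loc=s_defect, scale=np.sqrt(s_defect))
    p_false = norm.sf(thr, loc=s_background, scale=np.sqrt(s_background))
    return snr, p_detect, p_false

print(detection_probability(s_defect=130.0, s_background=100.0))
```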
RAD-ADAPT: Software for modelling clonogenic assay data in radiation biology.
Zhang, Yaping; Hu, Kaiqiang; Beumer, Jan H; Bakkenist, Christopher J; D'Argenio, David Z
2017-04-01
We present a comprehensive software program, RAD-ADAPT, for the quantitative analysis of clonogenic assays in radiation biology. Two commonly used models for clonogenic assay analysis, the linear-quadratic model and single-hit multi-target model, are included in the software. RAD-ADAPT uses the maximum likelihood estimation method to obtain parameter estimates under the assumption that cell colony count data follow a Poisson distribution. The program has an intuitive interface, generates model prediction plots, tabulates model parameter estimates, and allows automatic statistical comparison of parameters between different groups. The RAD-ADAPT interface is written using the statistical software R and the underlying computations are accomplished by the ADAPT software system for pharmacokinetic/pharmacodynamic systems analysis. The use of RAD-ADAPT is demonstrated using an example that examines the impact of pharmacologic ATM and ATR kinase inhibition on human lung cancer cell line A549 after ionizing radiation. Copyright © 2017 Elsevier B.V. All rights reserved.
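RAD-ADAPT itself is built on R and ADAPT; purely as a hedged sketch of the underlying idea, the Python example below fits the linear-quadratic survival model to made-up clonogenic colony counts by Poisson maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize

# Made-up clonogenic assay data: dose (Gy), cells plated, colonies counted.
dose   = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])
plated = np.array([200, 200, 400, 1000, 4000, 10000])
counts = np.array([110, 85, 120, 140, 150, 90])

def neg_log_likelihood(params):
    """Poisson NLL for the linear-quadratic model: S(D) = exp(-(alpha*D + beta*D^2))."""
    log_pe, alpha, beta = params               # log plating efficiency, LQ parameters
    surv = np.exp(-(alpha * dose + beta * dose**2))
    mu = plated * np.exp(log_pe) * surv        # expected colony counts
    return np.sum(mu - counts * np.log(mu))    # Poisson NLL up to an additive constant

fit = minimize(neg_log_likelihood, x0=[np.log(0.5), 0.3, 0.03], method="Nelder-Mead")
log_pe, alpha, beta = fit.x
print(f"plating efficiency {np.exp(log_pe):.2f}, alpha {alpha:.3f} /Gy, beta {beta:.4f} /Gy^2")
```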
Hematologic and serum chemistry reference intervals for free-ranging lions (Panthera leo).
Maas, Miriam; Keet, Dewald F; Nielen, Mirjam
2013-08-01
Hematologic and serum chemistry values are used by veterinarians and wildlife researchers to assess health status and to identify abnormally high or low levels of a particular blood parameter in a target species. For free-ranging lions (Panthera leo) information about these values is scarce. In this study 7 hematologic and 11 serum biochemistry values were evaluated from 485 lions from the Kruger National Park, South Africa. Significant differences between sexes and sub-adult (≤ 36 months) and adult (>36 months) lions were found for most of the blood parameters and separate reference intervals were made for those values. The obtained reference intervals include the means of the various blood parameter values measured in captive lions, except for alkaline phosphatase in the subadult group. These reference intervals can be utilized for free-ranging lions, and may likely also be used as reference intervals for captive lions. Copyright © 2013 Elsevier Ltd. All rights reserved.
Ward round template: enhancing patient safety on ward rounds.
Gilliland, Niall; Catherwood, Natalie; Chen, Shaouyn; Browne, Peter; Wilson, Jacob; Burden, Helena
2018-01-01
Concerns had been raised at clinical governance regarding the safety of our inpatient ward rounds with particular reference to: documentation of clinical observations and National Early Warning Score (NEWS), compliance with Trust guidance for venous thromboembolism (VTE) risk assessment, antibiotic stewardship, palliative care and treatment escalation plans (TEP). This quality improvement project was conceived to ensure these parameters were considered and documented during the ward round, thereby improving patient care and safety. These parameters were based on Trust patient safety guidance and CQUIN targets. The quality improvement technique of plan-do-study-act (PDSA) was used in this project. We retrospectively reviewed ward round entries to record baseline measurements, based on the above-described parameters, prior to making any changes. Following this, the change applied was the introduction of a ward round template to include the highlighted important baseline parameters. Monthly PDSA cycles were performed, baseline measurements were re-examined, and relevant changes were made to the ward round template. Documentation of baseline measurements was poor prior to introduction of the ward round template; this improved significantly following introduction of a standardised ward round template. Following three cycles, documentation of VTE risk assessments increased from 14% to 92%. Antibiotic stewardship documentation went from 0% to 100%. Use of the TEP form went from 29% to 78%. Following introduction of the ward round template, compliance improved significantly in all safety parameters. Important safety measures being discussed on ward rounds will lead to enhanced patient safety and will improve compliance with Trust guidance and commissioning for quality and innovation (CQUIN) targets. Ongoing change implementation will focus on improving compliance with usage of the template on all urology ward rounds.
General Metropolis-Hastings jump diffusions for automatic target recognition in infrared scenes
NASA Astrophysics Data System (ADS)
Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.
1997-04-01
To locate and recognize ground-based targets in forward-looking IR (FLIR) images, 3D faceted models with associated pose parameters are formulated to accommodate the variability found in FLIR imagery. Taking a Bayesian approach, scenes are simulated from the emissive characteristics of the CAD models and compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. To accommodate scenes with variable numbers of targets, the posterior distribution is defined over parameter vectors of varying dimension. An inference algorithm based on Metropolis-Hastings jump-diffusion processes empirically samples from the posterior distribution, generating configurations of templates and transformations that match the collected sensor data with high probability. The jumps accommodate the addition and deletion of targets and the estimation of target identities; diffusions refine the hypotheses by drifting along the gradient of the posterior distribution with respect to the orientation and position parameters. Previous results on jump strategies analogous to the Metropolis acceptance/rejection algorithm, with proposals drawn from the prior and accepted based on the likelihood, are extended to encompass general Metropolis-Hastings proposal densities. In particular, the algorithm proposes moves by drawing from the posterior distribution over computationally tractable subsets of the parameter space. The algorithm is illustrated by an implementation on a Silicon Graphics Onyx/Reality Engine.
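As a hedged sketch of the diffusion-like refinement step only (random-walk Metropolis-Hastings over continuous pose parameters), the Python example below samples from a stand-in Gaussian log-posterior; the jump moves across model dimensions and the FLIR sensor likelihood described above are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(pose):
    """Stand-in log-posterior over (x, y, orientation); a real system would score a
    rendered CAD template against FLIR data here. Illustrative Gaussian only."""
    target = np.array([10.0, -3.0, 0.7])
    return -0.5 * np.sum(((pose - target) / np.array([2.0, 2.0, 0.3]))**2)

def metropolis_hastings(n_steps=5000, step=np.array([0.5, 0.5, 0.1])):
    pose = np.zeros(3)
    logp = log_posterior(pose)
    samples = []
    for _ in range(n_steps):
        proposal = pose + step * rng.standard_normal(3)     # symmetric random-walk proposal
        logp_new = log_posterior(proposal)
        if np.log(rng.random()) < logp_new - logp:          # Metropolis-Hastings acceptance rule
            pose, logp = proposal, logp_new
        samples.append(pose.copy())
    return np.array(samples)

samples = metropolis_hastings()
print("posterior mean pose estimate:", samples[1000:].mean(axis=0))   # discard burn-in
```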
NASA Astrophysics Data System (ADS)
Kókai, Zsófia; Török, Szabina; Zagyvai, Péter; Kiselev, Daniela; Moormann, Rainer; Börcsök, Endre; Zanini, Luca; Takibayev, Alan; Muhrer, Günter; Bevilacqua, Riccardo; Janik, József
2018-02-01
Different target options have been examined for the European Spallation Source, which is under construction in Lund, Sweden. During the design update phase, parameters and characteristics for the target design have been optimized not only for neutronics but also with respect to the waste characteristics related to the final disposal of the target. A rotating, solid tungsten target was eventually selected as the baseline concept; the other options considered included mercury and lead-bismuth (LBE) targets suitable for a pulsed source. Since the licensee is obliged to present a decommissioning plan even before the construction phase starts, the radioactive waste category of the target after full operation time is of crucial importance. The results obtained from a small survey among project partners of the 7th Framework Programme (EU contract 202247) have been used. Waste characteristics of different potential spallation target materials were compared. Based on the waste index, the tungsten target is the best alternative and the second one is the mercury target. However, all alternatives fall into the high-level waste (HLW) category after 10 years of cooling. Based on heat generation alone, all of the options would be below the HLW limit after this cooling period. The LBE target is the least advantageous alternative based on the waste index and heat generation comparison. These results can be useful in compiling the licensing documents of the ESS facility, as the target alternatives can be compared from various aspects related to their disposal.
NASA Astrophysics Data System (ADS)
Li, Chenguang; Yang, Xianjun
2016-10-01
The Magnetized Plasma Fusion Reactor concept is proposed as a magneto-inertial fusion approach based on the target plasma created through the collision merging of two oppositely translating field reversed configuration plasmas, which is then compressed by the imploding liner driven by the pulsed-power driver. The target creation process is described by a two-dimensional magnetohydrodynamics model, resulting in the typical target parameters. The implosion process and the fusion reaction are modeled by a simple zero-dimensional model, taking into account the alpha particle heating and the bremsstrahlung radiation loss. The compression on the target can be 2D cylindrical or 2.4D with the additive axial contraction taken into account. The dynamics of the liner compression and fusion burning are simulated and the optimum fusion gain and the associated target parameters are predicted. The scientific breakeven could be achieved at the optimized conditions.
Yi, WenJun; Wang, Ping; Fu, MeiCheng; Tan, JiChun; Zhu, Jubo; Li, XiuJian
2017-07-10
In order to overcome the shortcomings of the target image restoration method for longitudinal laser tomography using self-calibration, a more general restoration method based on backscattering medium images associated with prior parameters is developed for common conditions. The system parameters are extracted from pre-calibration, and the LIDAR ratio is estimated according to the medium types. Assisted by these prior parameters, the degradation caused by inhomogeneous turbid media can be established from the backscattering medium images, which can further be used to remove the interference of turbid media. The results of simulations and experiments demonstrate that the proposed image restoration method can effectively eliminate the inhomogeneous interference of turbid media and accurately recover the reflectivity distribution of targets behind inhomogeneous turbid media. Furthermore, the restoration method can work beyond the limitation of the previous method, which only works well under the conditions of localized turbid attenuations and some types of targets with fairly uniform reflectivity distributions.
Liang, Zhiqiang; Wei, Jianming; Zhao, Junyu; Liu, Haitao; Li, Baoqing; Shen, Jie; Zheng, Chunlei
2008-01-01
This paper presents a new algorithm that uses kurtosis, a statistical parameter, to distinguish the seismic signal generated by a person's footsteps from other signals. It is adaptive to any environment and needs no machine learning or training. As persons or other targets moving on the ground generate continuous signals in the form of seismic waves, different targets can be separated based on the seismic waves they generate. Kurtosis is sensitive to impulsive signals, so it is much more sensitive to the signal generated by a person's footsteps than to signals generated by vehicles, wind, noise, etc. Kurtosis is commonly employed in financial analysis, but rarely used in other fields. In this paper, we make use of kurtosis to distinguish persons from other targets based on its different sensitivity to different signals. Simulation and application results show that this algorithm is very effective in distinguishing persons from other targets. PMID:27873804
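A minimal sketch of the windowed-kurtosis test is shown below; the synthetic seismic trace, window length, and threshold are assumptions for illustration rather than values from the paper.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(1)
fs = 1000                                    # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)

# Synthetic trace: background noise plus short impulsive "footstep" bursts every 2 s.
trace = 0.1 * rng.standard_normal(t.size)
for t0 in np.arange(1.0, 9.0, 2.0):
    i = int(t0 * fs)
    trace[i:i + 50] += np.hanning(50) * rng.standard_normal(50)

def window_kurtosis(x, win=1000, hop=500):
    """Excess kurtosis of successive windows; impulsive footsteps push it well above 0."""
    starts = range(0, x.size - win + 1, hop)
    return np.array([kurtosis(x[s:s + win]) for s in starts])

k = window_kurtosis(trace)
print("footstep-like windows:", np.flatnonzero(k > 3.0))     # threshold of 3 is illustrative
```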
Improved Characteristics of Laser Source of Ions Using a Frequency Mode Laser
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khaydarov, R. T.
2008-04-07
We used a mass-spectrometric method to investigate the characteristics of laser-produced plasma ions depending on the nature of the target and on the parameters of the laser radiation. Experiments were carried out on porous Y₂O₃ targets with different densities ρ, subjected to laser radiation, with the laser working in a frequency (repetition-rate) mode (ν = 1-12 Hz). We found that the laser frequency has a significant effect on the parameters of the plasma ions: with increasing laser frequency, the charge, energy and intensity of the ions increase for given target parameters. This effect is more pronounced for small target densities. We related these two effects to a non-linear ionization process in the plasma, due to the formation of a dense plasma volume inside the sample absorbing the laser radiation, and to the change of the focusing conditions in the case of the frequency-mode laser.
Optical Imaging and Radiometric Modeling and Simulation
NASA Technical Reports Server (NTRS)
Ha, Kong Q.; Fitzmaurice, Michael W.; Moiser, Gary E.; Howard, Joseph M.; Le, Chi M.
2010-01-01
OPTOOL software is a general-purpose optical systems analysis tool that was developed to offer a solution to problems associated with computational programs written for the James Webb Space Telescope optical system. It integrates existing routines into coherent processes, and provides a structure with reusable capabilities that allow additional processes to be quickly developed and integrated. It has an extensive graphical user interface, which makes the tool more intuitive and friendly. OPTOOL is implemented using MATLAB with a Fourier optics-based approach for point spread function (PSF) calculations. It features parametric and Monte Carlo simulation capabilities, and uses a direct integration calculation to permit high spatial sampling of the PSF. Exit pupil optical path difference (OPD) maps can be generated using combinations of Zernike polynomials or shaped power spectral densities. The graphical user interface allows rapid creation of arbitrary pupil geometries, and entry of all other modeling parameters to support basic imaging and radiometric analyses. OPTOOL provides the capability to generate wavefront-error (WFE) maps for arbitrary grid sizes. These maps are 2D arrays containing digitally sampled versions of functions ranging from Zernike polynomials, to combinations of sinusoidal wave functions in 2D, to functions generated from a spatial frequency power spectral distribution (PSD). It also can generate optical transfer functions (OTFs), which are incorporated into the PSF calculation. The user can specify radiometrics for the target and sky background, and key performance parameters for the instrument's focal plane array (FPA). This radiometric and detector model setup is fairly extensive, and includes parameters such as zodiacal background, thermal emission noise, read noise, and dark current. The setup also includes target spectral energy distribution as a function of wavelength for polychromatic sources, detector pixel size, and the FPA's charge diffusion modulation transfer function (MTF).
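As a hedged sketch of the Fourier-optics core only (a circular pupil with a Zernike-like wavefront error propagated to a PSF by an FFT), the Python example below is not the OPTOOL implementation; the grid size, aberration coefficients, and padding factor are illustrative.

```python
import numpy as np

n = 512                                       # illustrative pupil grid size
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
r, theta = np.hypot(x, y), np.arctan2(y, x)
pupil = (r <= 1.0).astype(float)              # unobscured circular aperture

# Simple wavefront error: defocus (Zernike Z4) plus astigmatism (Z6), in waves.
wfe_waves = 0.05 * (2 * r**2 - 1) + 0.03 * (r**2 * np.cos(2 * theta))
field = pupil * np.exp(2j * np.pi * wfe_waves)

# PSF = |FFT of the complex pupil function|^2, zero-padded for finer image sampling.
pad = np.zeros((4 * n, 4 * n), dtype=complex)
pad[:n, :n] = field
psf = np.abs(np.fft.fftshift(np.fft.fft2(pad)))**2
psf /= psf.sum()                              # normalize to unit total energy

print("peak of normalized PSF:", psf.max())
```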
NASA Astrophysics Data System (ADS)
Li, Xiaojun; Li, Yandong; Chang, Ching-Fu; Tan, Benjamin; Chen, Ziyang; Sege, Jon; Wang, Changhong; Rubin, Yoram
2018-01-01
Modeling of uncertainty associated with subsurface dynamics has long been a major research topic. Its significance is widely recognized for real-life applications. Despite the huge effort invested in the area, major obstacles still remain on the way from theory to applications. Particularly problematic here is the confusion between modeling uncertainty and modeling spatial variability, which translates into a (mis)conception, in fact an inconsistency, in that it suggests that modeling of uncertainty and modeling of spatial variability are equivalent and, as such, both require a lot of data. This paper investigates this challenge against the backdrop of a 7 km, deep underground tunnel in China, where environmental impacts are of major concern. We approach the data challenge by pursuing a new concept for Rapid Impact Modeling (RIM), which bypasses altogether the need to estimate posterior distributions of model parameters, focusing instead on detailed stochastic modeling of impacts, conditional on all available information, including prior ex-situ information as well as in-situ measurements. A foundational element of RIM is the construction of informative priors for target parameters using ex-situ data, relying on ensembles of well-documented sites, pre-screened for geological and hydrological similarity to the target site. The ensembles are built around two sets of similarity criteria: a physically-based set of criteria and an additional set covering epistemic criteria. In another variation on common Bayesian practice, we update the priors to obtain conditional distributions of the target (environmental impact) dependent variables and not the hydrological variables.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Wuzhe; Lin, Zhixiong; Yang, Zhining
2015-06-15
Flattening filter-free (FFF) radiation beams have recently become clinically available on modern linear accelerators in radiation therapy. This study aimed to evaluate the dosimetric impact of using FFF beams in intensity-modulated radiotherapy (IMRT) for early-stage upper thoracic oesophageal cancer. Eleven patients with primary upper thoracic oesophageal cancer were recruited. For each patient, two IMRT plans were computed using conventional beams (Con-P) and FFF beams (FFF-P), respectively. Both plans employed a five-beam arrangement and were prescribed with 64 Gy to planning target volume 1 (PTV1) and 54 Gy to PTV2 in 32 fractions using 6 MV photons. The dose parameters of the target volumes and organs at risk (OARs), and treatment parameters including the monitor units (MU) and treatment time (TT), for Con-P and FFF-P were recorded and compared. The mean D₅ of PTV1 and PTV2 was higher in FFF-P than in Con-P by 0.4 Gy and 0.3 Gy, respectively. For the OARs, none of the dose parameters showed a significant difference between the two plans except the mean V₅ and V₁₀ of the lung, for which FFF-P was lower (46.7% vs. 47.3% and 39.1% vs. 39.6%, respectively). FFF-P required 54% more MU but 18.4% less irradiation time when compared to Con-P. The target volume and OAR dose distributions between the two plans were comparable. However, FFF-P was more effective in sparing the lung from low dose and reduced the mean TT compared with Con-P. Long-term clinical studies are suggested to evaluate the radiobiological effects of FFF beams.
Upper Limb Kinematics in Stroke and Healthy Controls Using Target-to-Target Task in Virtual Reality.
Hussain, Netha; Alt Murphy, Margit; Sunnerhagen, Katharina S
2018-01-01
Kinematic analysis using a virtual reality (VR) environment provides quantitative assessment of upper limb movements. This technique has rarely been used in evaluating motor function in stroke despite its availability in stroke rehabilitation. To determine the discriminative validity of VR-based kinematics during a target-to-target pointing task in individuals with mild or moderate arm impairment following stroke and in healthy controls. Sixty-seven participants with moderate (32-57 points) or mild (58-65 points) stroke impairment, as assessed with the Fugl-Meyer Assessment for Upper Extremity, were included from the Stroke Arm Longitudinal study at the University of Gothenburg-SALGOT cohort of non-selected individuals within the first year of stroke. The stroke groups and 43 healthy controls performed the target-to-target pointing task, where 32 circular targets appear one after the other and disappear when pointed at by the haptic handheld stylus in a three-dimensional VR environment. The kinematic parameters captured by the stylus included movement time, velocities, and smoothness of movement. The movement time, mean velocity, and peak velocity were discriminative between groups with moderate and mild stroke impairment and healthy controls. The movement time was longer and mean and peak velocity were lower for individuals with stroke. The number of velocity peaks, representing smoothness, was also discriminative and significantly higher in both stroke groups (mild, moderate) compared to controls. Movement trajectories in stroke more frequently showed clustering (spider's web) close to the target indicating deficits in movement precision. The target-to-target pointing task can provide valuable and specific information about sensorimotor impairment of the upper limb following stroke that might not be captured using traditional clinical scales. The trial was registered with register number NCT01115348 at clinicaltrials.gov, on May 4, 2010. URL: https://clinicaltrials.gov/ct2/show/NCT01115348.
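A hedged sketch of how the reported kinematic parameters (movement time, mean and peak velocity, number of velocity peaks) can be extracted from a sampled 3D endpoint trajectory is given below; the synthetic trajectory, the 5%-of-peak onset rule, and the peak-detection settings are assumptions, not the study's processing pipeline.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 1.5, 1 / fs)

# Synthetic 3D endpoint trajectory: a smooth reach plus a small corrective sub-movement.
reach = 0.3 * (1 - np.cos(np.pi * np.clip(t / 1.2, 0, 1))) / 2
pos = np.column_stack([reach, 0.1 * reach, 0.02 * np.sin(6 * np.pi * t) * (t > 0.8)])

speed = np.linalg.norm(np.gradient(pos, 1 / fs, axis=0), axis=1)   # tangential speed (m/s)

moving = speed > 0.05 * speed.max()          # 5%-of-peak onset/offset rule (an assumption)
movement_time = (np.flatnonzero(moving)[-1] - np.flatnonzero(moving)[0]) / fs
mean_velocity, peak_velocity = speed[moving].mean(), speed.max()
peaks, _ = find_peaks(speed, height=0.05 * speed.max(), distance=int(0.1 * fs))

print(f"movement time {movement_time:.2f} s, mean velocity {mean_velocity:.3f} m/s, "
      f"peak velocity {peak_velocity:.3f} m/s, velocity peaks {len(peaks)}")
```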
NASA Astrophysics Data System (ADS)
Bundesmann, Carsten; Lautenschläge, Thomas; Spemann, Daniel; Finzel, Annemarie; Mensing, Michael; Frost, Frank
2017-10-01
The correlation between process parameters and properties of TiO2 films grown by ion beam sputter deposition from a ceramic target was investigated. TiO2 films were grown under systematic variation of ion beam parameters (ion species, ion energy) and geometrical parameters (ion incidence angle, polar emission angle) and characterized with respect to film thickness, growth rate, structural properties, surface topography, composition, optical properties, and mass density. Systematic variations of film properties with the scattering geometry, namely the scattering angle, have been revealed. There are also considerable differences in film properties when changing the process gas from Ar to Xe. Similar systematics were reported for TiO2 films grown by reactive ion beam sputter deposition from a metal target [C. Bundesmann et al., Appl. Surf. Sci. 421, 331 (2017)]. However, there are some deviations from the previously reported data, for instance, in growth rate, mass density and optical properties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Z. M.; Laser Fusion Research Center, CAEP, Mianyang 621900; He, X. T.
A complex target (CT) configuration tailored for generating high-quality proton bunches by circularly polarized laser pulses at intensities of 10²⁰⁻²¹ W/cm² is proposed. Two-dimensional particle-in-cell simulations show that both the collimation and the mono-energetic quality of the accelerated proton bunch obtained using a front-shaped thin foil can be greatly enhanced by the backside inhomogeneous plasma layer. The main mechanisms for improving the accelerated protons are identified and discussed. These include stabilization of the photon cavity, provision of supplementary hole-boring acceleration, and suppression of thermal-electron effects. A theory for tailoring the CT parameters is also presented.
The Effect of Stiffness Parameter on Mass Distribution in Heavy-Ion Induced Fission
NASA Astrophysics Data System (ADS)
Soheyli, Saeed; Khalil Khalili, Morteza; Ashrafi, Ghazaaleh
2018-06-01
The stiffness parameter of the composite system has been studied for several heavy-ion induced fission reactions without the contribution of non-compound nucleus fission events. In this research, determination of the stiffness parameter is based on the comparison between the experimental data on the mass widths of fission fragments and those predicted by the statistical model treatments at the saddle and scission points. Analysis of the results shows that for the induced fission reactions of different targets by the same projectile, the stiffness parameter of the composite system decreases with increasing fissility parameter, as well as with increasing mass number of the compound nucleus. This parameter also exhibits a similar behavior for the reactions of a given target induced by different projectiles. As expected, nearly the same stiffness values are obtained for different reactions leading to the same compound nucleus.
Wang, Ling; Muralikrishnan, Bala; Rachakonda, Prem; Sawyer, Daniel
2017-01-01
Terrestrial laser scanners (TLS) are increasingly used in large-scale manufacturing and assembly where required measurement uncertainties are on the order of few tenths of a millimeter or smaller. In order to meet these stringent requirements, systematic errors within a TLS are compensated in-situ through self-calibration. In the Network method of self-calibration, numerous targets distributed in the work-volume are measured from multiple locations with the TLS to determine parameters of the TLS error model. In this paper, we propose two new self-calibration methods, the Two-face method and the Length-consistency method. The Length-consistency method is proposed as a more efficient way of realizing the Network method where the length between any pair of targets from multiple TLS positions are compared to determine TLS model parameters. The Two-face method is a two-step process. In the first step, many model parameters are determined directly from the difference between front-face and back-face measurements of targets distributed in the work volume. In the second step, all remaining model parameters are determined through the Length-consistency method. We compare the Two-face method, the Length-consistency method, and the Network method in terms of the uncertainties in the model parameters, and demonstrate the validity of our techniques using a calibrated scale bar and front-face back-face target measurements. The clear advantage of these self-calibration methods is that a reference instrument or calibrated artifacts are not required, thus significantly lowering the cost involved in the calibration process. PMID:28890607
NASA Astrophysics Data System (ADS)
Biondi, Gabriele; Mauro, Stefano; Pastorelli, Stefano; Sorli, Massimo
2018-05-01
One of the key functionalities required by an Active Debris Removal mission is the assessment of the target kinematics and inertial properties. Passive sensors, such as stereo cameras, are often included in the onboard instrumentation of a chaser spacecraft for capturing sequential photographs and for tracking features of the target surface. Plenty of methods based on Kalman filtering are available for the estimation of the target's state from feature positions; however, to guarantee filter convergence, they typically require continuity of measurements and the capability of tracking a fixed set of pre-defined features of the object. These requirements clash with the actual tracking conditions: failures in feature detection often occur, and the assumption of having some a priori knowledge about the shape of the target could be restrictive in certain cases. The aim of the presented work is to propose a fault-tolerant alternative method for estimating the angular velocity and the relative magnitudes of the principal moments of inertia of the target. Raw data regarding the positions of the tracked features are processed to evaluate corrupted values of a three-dimensional parameter which entirely describes the finite screw motion of the debris and which is essentially invariant to the particular set of considered features of the object. Missing values of the parameter are completely restored by exploiting the typical periodicity of the rotational motion of an uncontrolled satellite: compressed sensing techniques, typically adopted for recovering images or for prognostic applications, are herein used in a completely original fashion for retrieving a kinematic signal that appears sparse in the frequency domain. Due to this invariance with respect to the features, no assumptions are needed about the target's shape or the continuity of tracking. The obtained signal is useful for the indirect evaluation of an attitude signal that feeds an unscented Kalman filter for the estimation of the global rotational state of the target. The results of the computer simulations showed good robustness of the method and its potential applicability for general motion conditions of the target.
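As a hedged, generic illustration of the recovery idea (a signal that is sparse in a frequency-type basis reconstructed from incomplete samples by orthogonal matching pursuit), the Python sketch below is not the authors' screw-parameter pipeline; the dictionary, sparsity level, and sampling pattern are assumptions.

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(2)
n = 256
dictionary = idct(np.eye(n), norm="ortho", axis=0)      # columns are orthonormal DCT atoms

# Toy "attitude-like" signal that is exactly 2-sparse in the DCT basis; real rotational
# signals are only approximately sparse, so more atoms would be kept in practice.
signal = 2.0 * dictionary[:, 6] + 1.0 * dictionary[:, 30]

observed = np.sort(rng.choice(n, size=100, replace=False))   # samples surviving tracking dropouts
A, y = dictionary[observed, :], signal[observed]

def omp(A, y, n_nonzero=5):
    """Basic orthogonal matching pursuit: greedily select atoms, re-fit by least squares."""
    residual, support, coef = y.copy(), [], np.zeros(0)
    for _ in range(n_nonzero):
        if np.linalg.norm(residual) < 1e-10 * np.linalg.norm(y):
            break                                            # residual already negligible
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

recovered = dictionary @ omp(A, y)
print("reconstruction RMSE:", np.sqrt(np.mean((recovered - signal) ** 2)))
```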
Reduction of Decision-Making Time in the Air Defense Management
2013-06-01
Cohen, Freeman, & Thompson, 1997), "Threat Evaluation and Weapon Allocation" (Turan, 2012) and Evaluating the Performance of TEWA Systems (Fredrik...uses these threat values to propose weapon allocation (Turan, 2012). Turan studied only static-based weapon-target allocation. She evaluates and... Turan: - Proximity parameters (CPA, time to CPA, CPA in units of time, time before hit, distance), - Capability parameters (target type, weapon
Hubble Space Telescope: Faint object camera instrument handbook. Version 2.0
NASA Technical Reports Server (NTRS)
Paresce, Francesco (Editor)
1990-01-01
The Faint Object Camera (FOC) is a long-focal-ratio, photon-counting device designed to take high-resolution two-dimensional images of areas of the sky up to 44 by 44 arcseconds in size, with pixel dimensions as small as 0.0007 by 0.0007 arcseconds, in the 1150 to 6500 Å wavelength range. The basic aim of the handbook is to make relevant information about the FOC available to a wide range of astronomers, many of whom may wish to apply for HST observing time. The FOC, as presently configured, is briefly described, and some basic performance parameters are summarized. Also included are detailed performance parameters and instructions on how to derive approximate FOC exposure times for the proposed targets.
Method for Increasing Household Refrigerator Efficiency
NASA Astrophysics Data System (ADS)
Lebedev, V. V.; Sumzina, L. V.; Maksimov, A. V.
2017-11-01
The relevance of optimizing the working-process parameters of air conditioning systems is demonstrated in this work. The research is performed using the simulation modeling method. The parameter optimization criteria are considered, the analysis of the target functions is given, and the key factors of technical and economic optimization are considered. The search for the optimal solution in the multi-objective optimization of the system is performed by finding the minimum of the dual-target vector formed by the Pareto method of linear and weighted compromises from the target functions of total capital costs and total operating costs. The tasks are solved in the MathCAD environment. The research results show that, even in areas close to the optimal solutions, the values of the technical and economic parameters of air conditioning systems deviate considerably from the minimum values. At the same time, the deviations grow significantly as the technical parameters move away from the values that are optimal for both capital investment and operating costs. The production and operation of conditioners with parameters that deviate considerably from the optimal values will lead to increased material and power costs. The research makes it possible to establish the boundaries of the area of optimal values for the technical and economic parameters in the design of air conditioning systems.
Elastic and inelastic scattering for the 10B+58Ni system at near-barrier energies
NASA Astrophysics Data System (ADS)
Scarduelli, V.; Crema, E.; Guimarães, V.; Abriola, D.; Arazi, A.; de Barbará, E.; Capurro, O. A.; Cardona, M. A.; Gallardo, J.; Hojman, D.; Martí, G. V.; Pacheco, A. J.; Rodrígues, D.; Yang, Y. Y.; Deshmukh, N. N.; Paes, B.; Lubian, J.; Mendes Junior, D. R.; Morcelle, V.; Monteiro, D. S.
2017-11-01
Full angular distributions of the 10B elastically and inelastically scattered by 58Ni have been measured at different energies around the Coulomb barrier. The elastic and inelastic scattering of 10B on a medium mass target has been measured for the first time. The obtained angular distributions have been analyzed in terms of large-scale coupled reaction channel calculations, where several inelastic transitions of the projectile and the target, as well as the most relevant one- and two-step transfer reactions have been included in the coupling matrix. The roles of the spin reorientation, the spin-orbit interaction, and the large ground-state deformation of the 10B, in the reaction mechanism, were also investigated. The real part of the interaction potential between projectile and target was represented by a parameter-free double-folding potential, whereas no imaginary potential at the surface was considered. In this sense, the theoretical calculations were parameter free and their results were compared to experimental data to investigate the relative importance of the different reaction channels. A striking influence of the ground-state spin reorientation of the 10B nucleus was found, while all transfer reactions investigated had a minimum contribution to the dynamics of the system. Finally, the large static deformation of the 10B and the spin-orbit coupling can also play an important role in the system studied.
Tradeoffs among watershed model calibration targets for parameter estimation
Hydrologic models are commonly calibrated by optimizing a single objective function target to compare simulated and observed flows, although individual targets are influenced by specific flow modes. Nash-Sutcliffe efficiency (NSE) emphasizes flood peaks in evaluating simulation f...
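A minimal sketch of the Nash-Sutcliffe efficiency mentioned above, together with a log-transformed variant that de-emphasizes flood peaks, is given below with made-up flow values.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean-flow benchmark."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def log_nse(obs, sim, eps=1e-6):
    """NSE on log-transformed flows, which weights low-flow periods more than peaks."""
    return nse(np.log(np.asarray(obs, float) + eps), np.log(np.asarray(sim, float) + eps))

observed  = [1.2, 1.0, 0.9, 5.4, 12.1, 6.3, 2.2, 1.5]   # made-up daily flows (m^3/s)
simulated = [1.0, 1.1, 1.0, 4.8, 13.0, 5.5, 2.0, 1.4]
print(f"NSE = {nse(observed, simulated):.3f}, log-NSE = {log_nse(observed, simulated):.3f}")
```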
Effects of diversity and procrastination in priority queuing theory: The different power law regimes
NASA Astrophysics Data System (ADS)
Saichev, A.; Sornette, D.
2010-01-01
Empirical analyses show that after the update of a browser, or the publication of a software vulnerability, or the discovery of a cyber worm, the fraction of computers still using the older browser or software version, or not yet patched, or exhibiting worm activity decays as a power law ~1/t^α with 0 < α ≤ 1 over a time scale of years. We present a simple model for this persistence phenomenon, framed within the standard priority queuing theory, of a target task which has the lowest priority compared to all other tasks that flow on the computer of an individual. We identify a "time deficit" control parameter β and a bifurcation to a regime where there is a nonzero probability for the target task to never be completed. The distribution of waiting time T until the completion of the target task has the power law tail ~1/t^(1/2), resulting from a first-passage solution of an equivalent Wiener process. Taking into account a diversity of time deficit parameters in a population of individuals, the power law tail is changed into 1/t^α, with α ∈ (0.5, ∞), including the well-known case 1/t. We also study the effect of "procrastination," defined as the situation in which the target task may be postponed or delayed even after the individual has solved all other pending tasks. This regime provides an explanation for even slower apparent decay and longer persistence.
Baker, David R; Kasprzyk-Hordern, Barbara
2011-11-04
The main aim of this manuscript is to provide a comprehensive and critical verification of the methodology commonly used for sample collection, storage and preparation in studies concerning the analysis of pharmaceuticals and illicit drugs in aqueous environmental samples using SPE-LC/MS techniques. This manuscript reports the results of investigations into several sample preparation parameters that, to the authors' knowledge, have not been reported or have received very little attention. This includes: (i) the effect of evaporation temperature and (ii) solvent with regard to solid phase extraction (SPE) extracts; (iii) the effect of silanising glassware; (iv) recovery of analytes during vacuum filtration through glass fibre filters; and (v) pre-LC-MS filter membranes. All of these parameters are vital to develop efficient and reliable extraction techniques; an essential factor given that target drug residues are often present in the aqueous environment at ng L⁻¹ levels. Also presented is the first comprehensive review of the stability of illicit drugs and pharmaceuticals in wastewater. Among the parameters studied are: time of storage, temperature and pH. Over 60 analytes were targeted including stimulants, opioid and morphine derivatives, benzodiazepines, antidepressants, dissociative anaesthetics, drug precursors, human urine indicators and their metabolites. The lack of stability of analytes in raw wastewater was found to be significant for many compounds. For instance, 34% of the compounds studied showed a stability change >15% after only 12 h in raw wastewater stored at 2 °C; a very important finding given that wastewater is typically collected with the use of 24 h composite samplers. The stability of these compounds is also critical given the recent development of so-called 'sewage forensics' or 'sewage epidemiology', in which concentrations of target drug residues in wastewater are used to back-calculate drug consumption. Without an understanding of stability, under (or over) reporting of consumption estimations may take place. Copyright © 2011 Elsevier B.V. All rights reserved.
Process technology and effects of spallation products: Circuit components, maintenance, and handling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sigg, B.; Haines, S.J.; Dressler, R.
1996-06-01
Working Session D included an assessment of the status of the technology and components required to: (1) remove impurities from the liquid metal (mercury or Pb-Bi) target flow loop, including the effects of spallation products, (2) provide the flow parameters necessary for target operations, and (3) maintain the target system. A series of brief presentations were made to focus the discussion on these issues. The subjects of these presentations, and the presenters, were: (1) Spallation products and solubilities - R. Dressler; (2) Spallation products for Pb-Bi - Y. Orlov; (3) Clean-up/impurity removal components - B. Sigg; (4) "Road-Map" and remote handling needs - T. McManamy; (5) Remote handling issues and development - M. Holding. The overall conclusion of this session was that, with the exception of (i) spallation-product-related processing issues, (ii) helium injection and clean-up, and (iii) specialized remote handling equipment, the technology for all other circuit components (excluding the target itself) exists. Operating systems at the Institute of Physics in Riga, Latvia (O. Lielausis) and at Ben-Gurion University in Beer-Sheva, Israel (S. Lesin) have demonstrated that other liquid metal circuit components, including pumps, heat exchangers, valves, seals, and piping, are readily available and have been reliably used for many years. In the three areas listed above, the designs and analysis are not judged to be mature enough to determine whether and what types of technology development are required. Further design and analysis of the liquid metal target system is therefore needed to define flow circuit processing and remote handling equipment requirements and thereby identify any development needs.
NASA Astrophysics Data System (ADS)
Gazagnaire, Julia; Cobb, J. T.; Isaacs, Jason
2015-05-01
There is a desire in the Mine Counter Measure community to develop a systematic method to predict and/or estimate the performance of Automatic Target Recognition (ATR) algorithms that are detecting and classifying mine-like objects within sonar data. Ideally, parameters exist that can be measured directly from the sonar data that correlate with ATR performance. In this effort, two metrics were analyzed for their predictive potential using high frequency synthetic aperture sonar (SAS) images. The first parameter is a measure of contrast. It is essentially the variance in pixel intensity over a fixed partition of relatively small size. An analysis was performed to determine the optimum block size for this contrast calculation. These blocks were then overlapped in the horizontal and vertical direction over the entire image. The second parameter is the one-dimensional K-shape parameter. The K-distribution is commonly used to describe sonar backscatter return from range cells that contain a finite number of scatterers. An Ada-Boosted Decision Tree classifier was used to calculate the probability of classification (Pc) and false alarm rate (FAR) for several types of targets in SAS images from three different data sets. ROC curves as a function of the measured parameters were generated and the correlation between the measured parameters in the vicinity of each of the contacts and the ATR performance was investigated. The contrast and K-shape parameters were considered separately. Additionally, the contrast and K-shape parameter were associated with background texture types using previously labeled high frequency SAS images.
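A hedged sketch of the two image metrics is given below: block-wise intensity variance as a contrast map, and a method-of-moments estimate of the K-distribution shape parameter, demonstrated on a synthetic speckle patch. The block size, overlap, and the specific moment estimator are assumptions rather than the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic single-look speckle patch: gamma-distributed texture times exponential speckle,
# which yields K-distributed intensity with shape parameter nu_true.
nu_true = 2.5
texture = rng.gamma(shape=nu_true, scale=1.0 / nu_true, size=(256, 256))
intensity = texture * rng.exponential(scale=1.0, size=(256, 256))

def block_contrast(img, block=32, hop=16):
    """Variance of pixel intensity over overlapping blocks (a simple contrast map)."""
    rows = range(0, img.shape[0] - block + 1, hop)
    cols = range(0, img.shape[1] - block + 1, hop)
    return np.array([[img[r:r + block, c:c + block].var() for c in cols] for r in rows])

def k_shape_moments(img):
    """Method-of-moments K-distribution shape estimate from E[I^2]/E[I]^2 = 2(1 + 1/nu)."""
    m2 = np.mean(img**2) / np.mean(img) ** 2
    return 1.0 / (m2 / 2.0 - 1.0)

print("mean block contrast:", block_contrast(intensity).mean())
print("estimated K shape parameter:", k_shape_moments(intensity))
```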
Target design for materials processing very far from equilibrium
NASA Astrophysics Data System (ADS)
Barnard, John J.; Schenkel, Thomas
2016-10-01
Local heating and electronic excitations can trigger phase transitions or novel material states that can be stabilized by rapid quenching. An example on the few-nanometer scale is the phase transitions induced by the passage of swift heavy ions in solids, where nitrogen-vacancy color centers form locally in diamond when ions heat the diamond matrix to warm dense matter conditions at 0.5 eV. We optimize mask geometries for target materials such as silicon and diamond to induce phase transitions by intense ion pulses (e.g., from NDCX-II or from laser-plasma acceleration). The goal is to rapidly heat a solid target volumetrically and to trigger a phase transition or local lattice reconstruction followed by rapid cooling. The stabilized phase can then be studied ex situ. We performed HYDRA simulations that calculate peak temperatures for a series of excitation conditions and cooling rates of crystal targets with micro-structured masks. A simple analytical model that includes ion heating and radial, diffusive cooling was developed and agrees closely with the HYDRA simulations. The model gives scaling laws that can guide the design of targets over a wide range of parameters, including those for NDCX-II and the proposed BELLA-i. This work was performed under the auspices of the U.S. DOE under contracts DE-AC52-07NA27344 (LLNL), DE-AC02-05CH11231 (LBNL) and was supported by the US DOE Office of Science, Fusion Energy Sciences. LLNL-ABS-697271.
Büttner, Kathrin; Krieter, Joachim; Traulsen, Arne; Traulsen, Imke
2013-01-01
Centrality parameters in animal trade networks typically have right-skewed distributions, implying that these networks are highly resistant against the random removal of holdings, but vulnerable to the targeted removal of the most central holdings. In the present study, we analysed the structural changes of an animal trade network topology based on the targeted removal of holdings using specific centrality parameters in comparison to the random removal of holdings. Three different time periods were analysed: the three-year network, the yearly and the monthly networks. The aim of this study was to identify appropriate measures for the targeted removal, which lead to a rapid fragmentation of the network. Furthermore, the optimal combination of the removal of three holdings regardless of their centrality was identified. The results showed that centrality parameters based on ingoing trade contacts, e.g. in-degree, ingoing infection chain and ingoing closeness, were not suitable for a rapid fragmentation in all three time periods. More efficient was the removal based on parameters considering the outgoing trade contacts. In all networks, a maximum percentage of 7.0% (on average 5.2%) of the holdings had to be removed to reduce the size of the largest component by more than 75%. The smallest difference from the optimal combination for all three time periods was obtained by the removal based on out-degree with on average 1.4% removed holdings, followed by outgoing infection chain and outgoing closeness. The targeted removal using the betweenness centrality differed the most from the optimal combination in comparison to the other parameters which consider the outgoing trade contacts. Due to the pyramidal structure and the directed nature of the pork supply chain the most efficient interruption of the infection chain for all three time periods was obtained by using the targeted removal based on out-degree. PMID:24069293
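As a hedged illustration of targeted removal by out-degree, the Python sketch below uses networkx on a synthetic directed network; the generator, removal fraction, and component measure follow the abstract's framing, but all values are illustrative.

```python
import networkx as nx

# Synthetic directed "trade" network: scale-free-ish so a few holdings dominate out-degree.
G = nx.DiGraph(nx.scale_free_graph(500, seed=4))   # collapse parallel edges from the generator

def largest_component_fraction(graph, original_n=None):
    """Size of the largest weakly connected component as a fraction of the node count."""
    original_n = original_n or graph.number_of_nodes()
    giant = max(nx.weakly_connected_components(graph), key=len)
    return len(giant) / original_n

def targeted_removal(graph, frac_removed=0.07):
    """Remove the holdings with the highest out-degree and report the remaining
    largest-component size relative to the original network."""
    g = graph.copy()
    n_remove = int(frac_removed * g.number_of_nodes())
    ranked = sorted(g.out_degree(), key=lambda kv: kv[1], reverse=True)
    g.remove_nodes_from([node for node, _ in ranked[:n_remove]])
    return largest_component_fraction(g, original_n=graph.number_of_nodes())

print("baseline largest component:", largest_component_fraction(G))
print("largest component after 7% targeted removal:", targeted_removal(G))
```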
Effects of sampling close relatives on some elementary population genetics analyses.
Wang, Jinliang
2018-01-01
Many molecular ecology analyses assume the genotyped individuals are sampled at random from a population and thus are representative of the population. Realistically, however, a sample may contain excessive close relatives (ECR) because, for example, localized juveniles are drawn from fecund species. Our knowledge is limited about how ECR affect the routinely conducted elementary genetics analyses, and how ECR are best dealt with to yield unbiased and accurate parameter estimates. This study quantifies the effects of ECR on some popular population genetics analyses of marker data, including the estimation of allele frequencies, F-statistics, expected heterozygosity (He), effective and observed numbers of alleles, and the tests of Hardy-Weinberg equilibrium (HWE) and linkage equilibrium (LE). It also investigates several strategies for handling ECR to mitigate their impact and to yield accurate parameter estimates. My analytical work, assisted by simulations, shows that ECR have large and global effects on all of the above marker analyses. The naïve approach of simply ignoring ECR could yield low-precision and often biased parameter estimates, and could cause too many false rejections of HWE and LE. The bold approach, which simply identifies and removes ECR, and the cautious approach, which estimates target parameters (e.g., He) by accounting for ECR and using naïve allele frequency estimates, eliminate the bias and the false HWE and LE rejections, but could reduce estimation precision substantially. The likelihood approach, which accounts for ECR in estimating allele frequencies and thus target parameters relying on allele frequencies, usually yields unbiased and the most accurate parameter estimates. Which of the four approaches is the most effective and efficient may depend on the particular marker analysis to be conducted. The results are discussed in the context of using marker data for understanding population properties and marker properties. © 2017 John Wiley & Sons Ltd.
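As a concrete reminder of one of the quantities involved, a minimal sketch of the naive expected-heterozygosity estimate (He = 1 - sum of squared allele frequencies); the genotypes below are hypothetical, not data from the study:

```python
import numpy as np

def expected_heterozygosity(genotypes):
    """Naive He = 1 - sum(p_i^2) from diploid genotypes given as (allele, allele) tuples."""
    alleles = np.array([a for g in genotypes for a in g])
    _, counts = np.unique(alleles, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

# Small hypothetical sample; ignoring close relatives in such a sample is
# exactly the situation the study shows can bias this estimate.
sample = [(1, 1), (1, 2), (2, 2), (1, 3), (3, 3), (1, 2)]
print(round(expected_heterozygosity(sample), 3))
```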
Target-directed catalytic metallodrugs
Joyner, J.C.; Cowan, J.A.
2013-01-01
Most drugs function by binding reversibly to specific biological targets, and therapeutic effects generally require saturation of these targets. One means of decreasing required drug concentrations is incorporation of reactive metal centers that elicit irreversible modification of targets. A common approach has been the design of artificial proteases/nucleases containing metal centers capable of hydrolyzing targeted proteins or nucleic acids. However, these hydrolytic catalysts typically provide relatively low rate constants for target inactivation. Recently, various catalysts were synthesized that use oxidative mechanisms to selectively cleave/inactivate therapeutic targets, including HIV RRE RNA or angiotensin converting enzyme (ACE). These oxidative mechanisms, which typically involve reactive oxygen species (ROS), provide access to comparatively high rate constants for target inactivation. Target-binding affinity, co-reactant selectivity, reduction potential, coordination unsaturation, ROS products (metal-associated vs metal-dissociated; hydroxyl vs superoxide), and multiple-turnover redox chemistry were studied for each catalyst, and these parameters were related to the efficiency, selectivity, and mechanism(s) of inactivation/cleavage of the corresponding target for each catalyst. Important factors for future oxidative catalyst development are 1) positioning of catalyst reduction potential and redox reactivity to match the physiological environment of use, 2) maintenance of catalyst stability by use of chelates with either high denticity or other means of stabilization, such as the square planar geometric stabilization of Ni- and Cu-ATCUN complexes, 3) optimal rate of inactivation of targets relative to the rate of generation of diffusible ROS, 4) targeting and linker domains that afford better control of catalyst orientation, and 5) general bio-availability and drug delivery requirements. PMID:23828584
Toporkiewicz, Monika; Meissner, Justyna; Matusewicz, Lucyna; Czogalla, Aleksander; Sikorski, Aleksander F
2015-01-01
There are many problems directly associated with the systemic administration of drugs and how they reach their target site. Targeting is a promising strategy for improved drug delivery, with reduced toxicity and minimal adverse side effects. Targeting exploits the high affinity of cell-surface-targeted ligands, either directly or as carriers for a drug, for specific retention and uptake by the targeted diseased cells. One of the most important parameters to take into consideration in the selection of an appropriate targeting ligand is the binding affinity (KD). In this review we focus on the importance of the binding affinities of monoclonal antibodies, antibody derivatives, peptides, aptamers, DARPins, and small targeting molecules in the process of selecting the most suitable ligand for targeting of nanoparticles. In order to provide a critical comparison between these various options, we have also assessed each technology format across a range of parameters such as molecular size, immunogenicity, cost of production, clinical profile, and examples of the level of selectivity and toxicity of each. Wherever possible, we have also assessed how incorporating such a targeted approach compares with, or is superior to, original treatments. PMID:25733832
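To illustrate why KD dominates ligand choice, the equilibrium fraction of receptors occupied follows the standard 1:1 binding isotherm; a small sketch with generic values (not tied to any specific ligand discussed in the review):

```python
def fractional_occupancy(ligand_conc_nM, kd_nM):
    """Equilibrium receptor occupancy for simple 1:1 binding: [L] / ([L] + KD)."""
    return ligand_conc_nM / (ligand_conc_nM + kd_nM)

# A high-affinity antibody (KD ~ 1 nM) vs. a weaker peptide (KD ~ 500 nM),
# both at the same hypothetical free ligand concentration of 10 nM.
for name, kd in [("antibody, KD = 1 nM", 1.0), ("peptide, KD = 500 nM", 500.0)]:
    print(name, "->", round(fractional_occupancy(10.0, kd), 3))
```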
Machine processing of ERTS and ground truth data
NASA Technical Reports Server (NTRS)
Rogers, R. H. (Principal Investigator); Peacock, K.
1973-01-01
The author has identified the following significant results. Results achieved by ERTS-Atmospheric Experiment PR303, whose objective is to establish a radiometric calibration technique, are reported. This technique, which determines and removes solar and atmospheric parameters that degrade the radiometric fidelity of ERTS-1 data, transforms the ERTS-1 sensor radiance measurements to absolute target reflectance signatures. A radiant power measuring instrument and its use in determining atmospheric parameters needed for ground truth are discussed. The procedures used and results achieved in machine processing ERTS-1 computer -compatible tapes and atmospheric parameters to obtain target reflectance are reviewed.
Temporal context processing within hippocampal subfields.
Wang, Fang; Diana, Rachel A
2016-07-01
The episodic memory system can differentiate similar events based on the temporal information associated with the events. Temporal context, which is at least partially determined by the events that precede or follow the critical event, may be a cue to differentiate events. The purpose of the present study is to investigate whether the hippocampal dentate gyrus (DG)/CA3 and CA1 subfields are sensitive to changes in temporal context and, if so, whether the subregions show a linear or threshold-like response to similar temporal contexts. Participants incidentally encoded a series of object picture triplets and 20 of them were included in final analyses. The third picture in each triplet was operationally defined as the target and the first two pictures served as temporal context for the target picture. Each target picture was presented twice with temporal context manipulated to be either repeated, high similarity, low similarity, or new on the second presentation. We extracted beta parameters for the repeated target as a function of the type of temporal context. We expected to see repetition suppression, a reduction in the beta values, in response to repetition of the target. If temporal context information is included in the representation of the target within a given region, this repetition suppression should be greater for target images that were preceded by their original context than for target images preceded by a new context. Neuroimaging results showed that CA1, but not DG/CA3, modifies the target's representation based on its temporal context. Right CA1 did not distinguish high similarity temporal context from repeated context but did distinguish low similarity temporal context from repeated context. These results indicate that CA1 is sensitive to temporal context and suggest that it does not differentiate between a substantially similar temporal context and an identical temporal context. In contrast, DG/CA3 does not appear to process temporal context as defined in the current experiment. Copyright © 2016 Elsevier Inc. All rights reserved.
Yin, Li-Jie; Yu, Xiao-Bin; Ren, Yan-Gang; Gu, Guang-Hai; Ding, Tian-Gui; Lu, Zhi
2013-03-18
To investigate the utilization of PET-CT in target volume delineation for three-dimensional conformal radiotherapy in patients with non-small cell lung cancer (NSCLC) and atelectasis. Thirty NSCLC patients who underwent radical radiotherapy from August 2010 to March 2012 were included in this study. All patients were pathologically confirmed to have atelectasis by imaging examination. PET-CT scanning was performed in these patients. According to the PET-CT scan results, the gross tumor volume (GTV) and organs at risk (OARs, including the lungs, heart, esophagus and spinal cord) were delineated separately both on CT and PET-CT images. The clinical target volume (CTV) was defined as the GTV plus a margin of 6-8 mm, and the planning target volume (PTV) as the GTV plus a margin of 10-15 mm. An experienced physician was responsible for designing the treatment plans PlanCT and PlanPET-CT on the CT image sets. 95% of the PTV was encompassed by the 90% isodose curve, and the two treatment plans kept the same beam direction, beam number, gantry angle, and position of the multi-leaf collimator as much as possible. The GTV was compared using a target delineation system, and dose distributions to OARs were compared on the basis of dose-volume histogram (DVH) parameters. The GTVCT and GTVPET-CT had varying degrees of change in all 30 patients, and the changes between the GTVCT and GTVPET-CT exceeded 25% in 12 (40%) patients. The GTVPET-CT decreased in varying degrees compared to the GTVCT in 22 patients. Their median GTVPET-CT and median GTVCT were 111.4 cm3 (range, 37.8 cm3-188.7 cm3) and 155.1 cm3 (range, 76.2 cm3-301.0 cm3), respectively, and the former was 43.7 cm3 (28.2%) less than the latter. The GTVPET-CT increased in varying degrees compared to the GTVCT in 8 patients. Their median GTVPET-CT and median GTVCT were 144.7 cm3 (range, 125.4 cm3-178.7 cm3) and 125.8 cm3 (range, 105.6 cm3-153.5 cm3), respectively, and the former was 18.9 cm3 (15.0%) greater than the latter. Compared to the PlanCT parameters, the PlanPET-CT parameters showed varying degrees of change. The changes in lung V20, V30, esophageal V50 and V55 were statistically significant (all P < 0.05), while the differences in mean lung dose, lung V5, V10, V15, heart V30, mean esophageal dose, esophagus Dmax, and spinal cord Dmax were not significant (all P > 0.05). PET-CT allows a better distinction between collapsed lung tissue and tumor tissue, improving the accuracy of radiotherapy target delineation and reducing radiation damage to the surrounding OARs in NSCLC patients with atelectasis.
LLE Review 117 (October-December 2008)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bittle, W., editor
2009-05-28
This volume of the LLE Review, covering October-December 2008, features 'Demonstration of the Shock-Timing Technique for Ignition Targets at the National Ignition Facility' by T. R. Boehly, V. N. Goncharov, S. X. Hu, J. A. Marozas, T. C. Sangster, D. D. Meyerhofer (LLE), D. Munro, P. M. Celliers, D. G. Hicks, G. W. Collins, H. F. Robey, O. L. Landen (LLNL), and R. E. Olson (SNL). In this article (p. 1) the authors report on a technique to measure the velocity and timing of shock waves in a capsule contained within hohlraum targets. This technique is critical for optimizing the drive profiles for high-performance inertial-confinement-fusion capsules, which are compressed by multiple precisely timed shock waves. The shock-timing technique was demonstrated on OMEGA using surrogate hohlraum targets heated to 180 eV and fitted with a re-entrant cone and quartz window to facilitate velocity measurements using velocity interferometry. Cryogenic experiments using targets filled with liquid deuterium further demonstrated the entire timing technique in a hohlraum environment. Direct-drive cryogenic targets with multiple spherical shocks were also used to validate this technique, including convergence effects at relevant pressures (velocities) and sizes. These results provide confidence that shock velocity and timing can be measured in NIF ignition targets, thereby optimizing these critical parameters.
Leung, Shui-On; Gao, Kai; Wang, Guang Yu; Cheung, Benny Ka-Wa; Lee, Kwan-Yeung; Zhao, Qi; Cheung, Wing-Tai; Wang, Jun Zhi
2015-01-01
SM03, a chimeric antibody that targets the B-cell restricted antigen CD22, is currently being clinically evaluated for the treatment of lymphomas and other autoimmune diseases in China. SM03 binding to surface CD22 leads to rapid internalization, making the development of an appropriate cell-based bioassay for monitoring changes in SM03 bioactivities during production, purification, storage, and clinical trials difficult. We report herein the development of an anti-idiotype antibody against SM03. Apart from its being used as a surrogate antigen for monitoring SM03 binding affinities, the anti-idiotype antibody was engineered to express as fusion proteins on cell surfaces in a non-internalizing manner, and the engineered cells were used as novel "surrogate target cells" for SM03. SM03-induced complement-mediated cytotoxicity (CMC) against these "surrogate target cells" proved to be an effective bioassay for monitoring changes in Fc functions, including those resulting from minor structural modifications borne within the Fc-appended carbohydrates. The approach can be generally applied for antibodies that target rapidly internalizing or non-surface bound antigens. The combined use of the anti-idiotype antibody and the surrogate target cells could help evaluate clinical parameters associated with safety and efficacies, and possibly the mechanisms of action of SM03.
Small-Animal Imaging Using Diffuse Fluorescence Tomography.
Davis, Scott C; Tichauer, Kenneth M
2016-01-01
Diffuse fluorescence tomography (DFT) has been developed to image the spatial distribution of fluorescence-tagged tracers in living tissue. This capability facilitates the recovery of any number of functional parameters, including enzymatic activity, receptor density, blood flow, and gene expression. However, deploying DFT effectively is complex and often requires years of know-how, especially for newer multimodal systems that combine DFT with conventional imaging systems. In this chapter, we step through the process of using MRI-DFT to image a receptor-targeted tracer in small animals.
Low-Cost Radar Sensors for Personnel Detection and Tracking in Urban Areas
2007-01-31
Progress on the research grant "Low-Cost Radar Sensors for Personnel Detection and Tracking in Urban Areas" during the period 1 May 2005 - 31 December... The DOA of target i with respect to the array boresight is given by an arcsine relation, Eq. (1), where d is the spacing between the elements and lambda is... wall. A large database was collected for different parameter spaces including number of humans, types of movements, wall types and radar polarization
Computational nanomedicine: modeling of nanoparticle-mediated hyperthermal cancer therapy
Kaddi, Chanchala D; Phan, John H; Wang, May D
2016-01-01
Nanoparticle-mediated hyperthermia for cancer therapy is a growing area of cancer nanomedicine because of the potential for localized and targeted destruction of cancer cells. Localized hyperthermal effects are dependent on many factors, including nanoparticle size and shape, excitation wavelength and power, and tissue properties. Computational modeling is an important tool for investigating and optimizing these parameters. In this review, we focus on computational modeling of magnetic and gold nanoparticle-mediated hyperthermia, followed by a discussion of new opportunities and challenges. PMID:23914967
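A commonly used starting point in such heat-transfer models (my illustration; the review abstract does not quote a specific formulation) is the Pennes bioheat equation with an added nanoparticle heat-source term:

```latex
% Pennes bioheat equation with a nanoparticle heating source Q_np (W m^-3).
% rho, c, k: tissue density, specific heat and thermal conductivity;
% w_b: blood perfusion rate; T_a: arterial blood temperature;
% Q_met: metabolic heat generation (often neglected in hyperthermia studies).
\rho c \frac{\partial T}{\partial t}
  = \nabla \cdot \left( k \nabla T \right)
  + \rho_b c_b w_b \left( T_a - T \right)
  + Q_{\mathrm{met}} + Q_{\mathrm{np}}
```

The nanoparticle-dependent factors listed above (size, shape, excitation wavelength and power) enter through Q_np, while the tissue properties enter through the conduction and perfusion terms.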
Using evolutionary computation to optimize an SVM used in detecting buried objects in FLIR imagery
NASA Astrophysics Data System (ADS)
Paino, Alex; Popescu, Mihail; Keller, James M.; Stone, Kevin
2013-06-01
In this paper we describe an approach for optimizing the parameters of a Support Vector Machine (SVM) as part of an algorithm used to detect buried objects in forward looking infrared (FLIR) imagery captured by a camera installed on a moving vehicle. The overall algorithm consists of a spot-finding procedure (to look for potential targets) followed by the extraction of several features from the neighborhood of each spot. The features include local binary pattern (LBP) and histogram of oriented gradients (HOG) as these are good at detecting texture classes. Finally, we project and sum each hit into UTM space along with its confidence value (obtained from the SVM), producing a confidence map for ROC analysis. In this work, we use an Evolutionary Computation Algorithm (ECA) to optimize various parameters involved in the system, such as the combination of features used, parameters on the Canny edge detector, the SVM kernel, and various HOG and LBP parameters. To validate our approach, we compare results obtained from an SVM using parameters obtained through our ECA technique with those previously selected by hand through several iterations of "guess and check".
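A minimal sketch of the kind of evolutionary loop described (not the authors' ECA; the feature matrix and the two tuned parameters below are placeholders for the LBP/HOG features and the full parameter set):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in feature matrix; the real system would use LBP/HOG features per spot.
X, y = make_classification(n_samples=400, n_features=40, random_state=0)

def fitness(log_C, log_gamma):
    """Cross-validated accuracy of an RBF SVM for one parameter individual."""
    clf = SVC(C=10.0 ** log_C, gamma=10.0 ** log_gamma, kernel="rbf")
    return cross_val_score(clf, X, y, cv=3).mean()

# Minimal (mu + lambda)-style evolutionary loop over (log10 C, log10 gamma).
pop = rng.uniform(low=[-2, -5], high=[3, 0], size=(12, 2))
for generation in range(15):
    scores = np.array([fitness(*ind) for ind in pop])
    parents = pop[np.argsort(scores)[-4:]]                    # keep the 4 best
    pop = np.repeat(parents, 3, axis=0) + rng.normal(0, 0.3, (12, 2))  # mutate
best = pop[np.argmax([fitness(*ind) for ind in pop])]
print("best log10(C), log10(gamma):", best)
```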
Range parameters of slow gold ions implanted into light targets
NASA Astrophysics Data System (ADS)
Kuzmin, V.
2009-08-01
Interatomic potentials for Au-C, Au-B, Au-N and Au-Si systems, calculated with density-functional theory (DFT) methods, have been used to evaluate the range parameters of gold in B, Si, BN and SiC films at energies of about 10-400 keV. The potentials have been employed to describe the scattering angles of the projectile and to calculate the nuclear stopping powers and the higher moments of the energy transferred in single collisions. Utilizing these findings, the range parameters have been obtained by standard transport theory and by Monte Carlo simulations. A velocity-proportional electronic stopping was included in the calculation. The approach developed corresponds completely to the standard classical scheme for the calculation of range parameters. The good agreement between the computed range parameters and the available experimental data allows us to conclude that correlation effects between the nuclear and electronic stopping can be neglected in the energy range in question. Moreover, it is proven for the first time that the model by Grande, et al. [P.L. Grande, F.C. Zawislak, D. Fink, M. Behar, Nucl. Instr. and Meth. B 61 (1991) 282], which relies on the importance of correlation effects, contains inherent contradictions.
LDEF's map experiment foil perforations yield hypervelocity impact penetration parameters
NASA Technical Reports Server (NTRS)
Mcdonnell, J. A. M.
1992-01-01
The space exposure of LDEF for 5.75 years, forming a host target in low Earth orbit (LEO) to a wide distribution of hypervelocity particulates of varying dimensions and different impact velocities, has yielded a multiplicity of impact features. Although the projectile parameters are generally unknown and, in fact, not identical for any two impacts on a target, the great number of impacts provides a statistically meaningful basis for the valid comparison of the response of different targets. Given sufficient impacts, for example, a comparison of impact features (even without knowledge of the projectile parameters) is possible between: (1) differing material types (for the same incident projectile distribution); (2) differing target configurations (e.g., thick and thin targets for the same projectile materials); and (3) different velocities (using LDEF's different faces). A comparison between different materials is presented for infinite targets of aluminum, Teflon, and brass in the same pointing direction; the maximum finite-target penetration (ballistic limit) is also compared to the penetration of similar materials comprising a semi-infinite target. For comparison of impacts on similar materials at different velocities, use is made of the pointing direction relative to LDEF's orbital motion. First, however, care must be exercised to separate the effect of spatial flux anisotropies from those resulting from the spacecraft velocity through a geocentrically referenced dust distribution. Data comprising thick and thin target impacts, impacts on different materials, and impacts in different pointing directions are presented, and hypervelocity impact parameters are derived. Results are also shown for flux modeling codes developed to decode the relative fluxes of Earth-orbital and unbound interplanetary components intercepting LDEF. Modeling shows the west and space pointing faces are dominated by interplanetary particles and yields a mean velocity of 23.5 km/s at LDEF, corresponding to a V(infinity) Earth approach velocity of 20.9 km/s. Normally resolved average impact velocities on LDEF's cardinal point faces are shown. An 'excess' flux on the east, north, and south faces is observed, compatible with an Earth-orbital component below some 5 microns in particle diameter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, Clifford K.; Sims, Cianan
TIM is a real-time interactive concentrating solar field simulation. TIM models a concentrating tower (receiver), heliostat field, and potential reflected glare based on user-specified parameters such as field capacity, tower height and location. TIM provides a navigable 3D interface, allowing the user to “fly” around the field to determine the potential glare hazard from off-target heliostats. Various heliostat aiming strategies are available for specifying how heliostats behave when in standby mode. Strategies include annulus, point-per-group, up-aiming and single-point-focus. Additionally, TIM includes an avian path feature for approximating the irradiance and feather temperature of a bird flying through the field airspace.
Repetition priming in selective attention: A TVA analysis.
Ásgeirsson, Árni Gunnar; Kristjánsson, Árni; Bundesen, Claus
2015-09-01
Current behavior is influenced by events in the recent past. In visual attention, this is expressed in many variations of priming effects. Here, we investigate color priming in a brief exposure digit-recognition task. Observers performed a masked odd-one-out singleton recognition task where the target-color either repeated or changed between subsequent trials. Performance was measured by recognition accuracy over exposure durations. The purpose of the study was to replicate earlier findings of perceptual priming in brief displays and to model those results based on a Theory of Visual Attention (TVA; Bundesen, 1990). We tested 4 different definitions of a generic TVA-model and assessed their explanatory power. Our hypothesis was that priming effects could be explained by selective mechanisms, and that target-color repetitions would only affect the selectivity parameter (α) of our models. Repeating target colors enhanced performance for all 12 observers. As predicted, this was only true under conditions that required selection of a target among distractors, but not when a target was presented alone. Model fits by TVA were obtained with a trial-by-trial maximum likelihood estimation procedure that estimated 4-15 free parameters, depending on the particular model. We draw two main conclusions. Color priming can be modeled simply as a change in selectivity between conditions of repetition or swap of target color. Depending on the desired resolution of analysis, priming can be accurately modeled by a simple four-parameter model, in which VSTM capacity and spatial biases of attention are ignored, or more finely by a 10-parameter model that takes these aspects into account. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wu, Kaihua; Shao, Zhencheng; Chen, Nian; Wang, Wenjie
2018-01-01
The degree of wear of the wheel set tread is one of the main factors that influence the safety and stability of a running train. The geometrical parameters of interest mainly include flange thickness and flange height. A line-structured laser was projected onto the wheel tread surface, and the geometrical parameters can be deduced from the profile image. An online image acquisition system was designed based on asynchronous reset of the CCD and a CUDA parallel processing unit; image acquisition is triggered in hardware interrupt mode. A high-efficiency parallel segmentation algorithm based on CUDA was proposed. The algorithm first divides the image into smaller squares and then extracts the squares belonging to the target by fusing the k-means and STING clustering image segmentation algorithms. Segmentation time is less than 0.97 ms, a considerable speed-up compared with serial CPU calculation, which greatly improves real-time image processing capacity. When a wheel set passes at limited speed, the system, placed alongside the railway line, can measure the geometrical parameters automatically. The maximum measuring speed is 120 km/h.
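A serial, simplified sketch of the block-wise segmentation step described above (a stand-in for the CUDA k-means/STING fusion; the block size, features and synthetic frame are assumptions for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_target_blocks(image, block=16, n_clusters=2):
    """Split a grayscale image into block x block squares, cluster the squares by
    simple intensity statistics, and keep the brighter cluster as the laser profile."""
    h, w = image.shape
    h, w = h - h % block, w - w % block
    squares = image[:h, :w].reshape(h // block, block, w // block, block)
    squares = squares.transpose(0, 2, 1, 3).reshape(-1, block * block)
    feats = np.column_stack([squares.mean(axis=1), squares.std(axis=1)])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(feats)
    # Assume the cluster with the highest mean intensity contains the laser line.
    bright = np.argmax([feats[labels == k, 0].mean() for k in range(n_clusters)])
    return (labels == bright).reshape(h // block, w // block)

# Synthetic frame: dark background with one bright horizontal laser stripe.
img = np.zeros((256, 512), dtype=float)
img[120:126, :] = 255.0
print(extract_target_blocks(img).sum(), "target blocks found")
```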
Single Object & Time Series Spectroscopy with JWST NIRCam
NASA Technical Reports Server (NTRS)
Greene, Tom; Schlawin, Everett A.
2017-01-01
JWST will enable high signal-to-noise spectroscopic observations of the atmospheres of transiting planets with high sensitivity at wavelengths that are inaccessible with HST or other existing facilities. We plan to exploit this by measuring abundances, chemical compositions, cloud properties, and temperature-pressure parameters of a set of mostly warm (T ~ 600-1200 K) and low-mass (14-200 Earth-mass) planets in our guaranteed time program. These planets are expected to have significant molecular absorptions of H2O, CH4, CO2, CO, and other molecules that are key for determining these parameters and illuminating how and where the planets formed. We describe how we will use the NIRCam grisms to observe slitless transmission and emission spectra of these planets over 2.4-5.0 microns wavelength and how well these observations can measure our desired parameters. This will include how we set integration times, exposure parameters, and obtain simultaneous shorter wavelength images to track telescope pointing and stellar variability. We will illustrate this with specific examples showing model spectra, simulated observations, expected information retrieval results, completed Astronomer's Proposal Tools observing templates, target visibility, and other considerations.
CO2 Laser Ablation Propulsion Area Scaling With Polyoxymethylene Propellant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sinko, John E.; Ichihashi, Katsuhiro; Ogita, Naoya
The topic of area scaling is of great importance in the laser propulsion field, including applications to removal of space debris and to selection of size ranges for laser propulsion craft in air or vacuum conditions. To address this issue experimentally, a CO2 laser operating at up to 10 J was used to irradiate targets. Experiments were conducted in air and vacuum conditions over a range of areas from about 0.05-5 cm^2 to ablate flat polyoxymethylene targets at several fluences. Theoretical effects affecting area scaling, such as rarefaction waves, thermal diffusion, and diffraction, are discussed in terms of the experimental results. Surface profilometry was used to characterize the ablation samples. A CFD model is used to facilitate analysis, and key results are compared between experimental and model considerations. Key laser propulsion parameters, including the momentum coupling coefficient and specific impulse, are calculated based on experimental data, and the results are compared to existing literature data.
Ohta, Yoichi
2017-12-01
The present study aimed to clarify the effects of oncoming target velocity on the ability to produce force rapidly and on the accuracy and variability of simultaneous control of both force production intensity and timing. Twenty male participants (age: 21.0 ± 1.4 years) performed rapid gripping with a handgrip dynamometer to coincide with the arrival of an oncoming target by using a horizontal electronic trackway. The oncoming target velocities were 4, 8, and 12 m·s⁻¹, presented in random order. The grip force required was 30% of the maximal voluntary contraction. Although the peak force (Pf) and rate of force development (RFD) increased with increasing target velocity, the RFD to Pf ratio was constant across the 3 target velocities. The accuracy of both force production intensity and timing decreased at higher target velocities. Moreover, the intrapersonal variability in temporal parameters was lower in the fast target velocity condition, but constant variability across the 3 target velocities was observed in the force intensity parameters. These results suggest that oncoming target velocity does not intrinsically affect the ability for rapid force production. However, the oncoming target velocity affects the accuracy and variability of force production intensity and timing during rapid force production.
Maity, Amit Ranjan; Stepensky, David
2015-12-30
Targeting of drug delivery systems (DDSs) to specific intracellular organelles (i.e., subcellular targeting) has been investigated in numerous publications, but the targeting efficiency of these systems is seldom reported. We searched scientific publications in the subcellular DDS targeting field and analyzed targeting efficiency and the major formulation parameters that affect it. We identified 77 scientific publications that matched the search criteria. In the majority of these studies, nanoparticle-based DDSs were applied, while liposomes, quantum dots and conjugates were used less frequently. The nucleus was the most common intracellular target, followed by the mitochondrion, endoplasmic reticulum and Golgi apparatus. In 65% of the publications, the DDS surface was decorated with specific targeting residues, but the efficiency of this surface decoration was not analyzed in the predominant majority of the studies. Moreover, only 23% of the analyzed publications contained quantitative data on DDS subcellular targeting efficiency, while the majority of publications reported qualitative results only. From the analysis of publications in the subcellular targeting field, it appears that insufficient efforts are devoted to quantitative analysis of the major formulation parameters and of the DDSs' intracellular fate. Based on these findings, we provide recommendations for future studies in the field of organelle-specific drug delivery and targeting. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Vaezi, S.; Mesgari, M. S.; Kaviary, F.
2015-12-01
Today, the stability of human life is threatened by a range of factors, and the theory of sustainable urban development was introduced to protect the urban environment. In recent years, sustainable urban development has attracted attention across disciplines and has become a central target for urban planners and managers seeking to use resources properly and to establish a balanced relationship among humans, community, and nature. Proper distribution of services to decrease spatial inequalities, promote the quality of the living environment, and approach urban stability requires an analytical understanding of the present situation, which is the first step toward effective decision-making and planning. This paper evaluates the parameters affecting the proper arrangement of land uses with a descriptive-analytical method, in order to develop a conceptual framework for understanding the present situation of urban land uses based on an assessment of their compatibility. The study considers spatial parameters in addition to local ones. The results indicate that land uses in the zone considered here are not distributed properly; accounting for the mentioned parameters and distributing service land uses effectively would lead to better use of these land uses.
Not-so-well-tempered neutralino
NASA Astrophysics Data System (ADS)
Profumo, Stefano; Stefaniak, Tim; Stephenson-Haskins, Laurel
2017-09-01
Light electroweakinos, the neutral and charged fermionic supersymmetric partners of the standard model SU(2)×U(1) gauge bosons and of the two SU(2) Higgs doublets, are an important target for searches for new physics with the Large Hadron Collider (LHC). However, if the lightest neutralino is the dark matter, constraints from direct dark matter detection experiments rule out large swaths of the parameter space accessible to the LHC, including in large part the so-called "well-tempered" neutralinos. We focus on the minimal supersymmetric standard model (MSSM) and explore in detail which regions of parameter space are not excluded by null results from direct dark matter detection, assuming exclusive thermal production of neutralinos in the early universe, and illustrate the complementarity with current and future LHC searches for electroweak gauginos. We consider both bino-Higgsino and bino-wino "not-so-well-tempered" neutralinos, i.e. we include models where the lightest neutralino constitutes only part of the cosmological dark matter, with the consequent suppression of the constraints from direct and indirect dark matter searches.
Ternary Blends of High Aluminate Cement, Fly Ash and Blast-Furnace Slag for Sewerage Lining Mortar
NASA Astrophysics Data System (ADS)
Chao, L. C.; Kuo, C. P.
2018-01-01
High aluminate cement (HAC), fly ash (FA) and blast-furnace slag (BFS) are treated as sustainable materials for cement products in wastewater infrastructure due to their corrosion resistance. The purpose of this study is to optimize a ternary blend of the above-mentioned materials for a special type of mortar for sewerage lining. Using the Taguchi method, four control parameters, namely water/cementitious material ratio, mix water content, fly ash content and blast-furnace slag content, were considered in nine trial mix designs. By evaluating target properties, including (1) maximization of compressive strength, (2) maximization of electrical resistance and (3) minimization of water absorption rate, the best possible levels for each control parameter were determined and the optimal mix proportions were verified. Through this study, a practical and complete approach for designing corrosion-resistant mortar comprising HAC, FA and BFS is provided.
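A minimal sketch of the Taguchi signal-to-noise calculation used to rank parameter levels in such mix designs (the trial results below are placeholders, not measurements from the study):

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi S/N ratio for responses to maximize (e.g., compressive strength)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

def sn_smaller_is_better(y):
    """Taguchi S/N ratio for responses to minimize (e.g., water absorption rate)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical 28-day compressive strengths (MPa) for repeats of one trial mix.
print("strength S/N:", round(sn_larger_is_better([42.1, 40.8, 43.5]), 2), "dB")
# Hypothetical water absorption rates (%) for the same trial mix.
print("absorption S/N:", round(sn_smaller_is_better([4.2, 4.6, 4.4]), 2), "dB")
```

The level of each control parameter that maximizes the mean S/N ratio across the nine trials is then selected as its best possible level.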
Lossless and Sufficient - Invariant Decomposition of Deterministic Target
NASA Astrophysics Data System (ADS)
Paladini, Riccardo; Ferro Famil, Laurent; Pottier, Eric; Martorella, Marco; Berizzi, Fabrizio
2011-03-01
The symmetric radar scattering matrix of a reciprocal target is projected onto the circular polarization basis and is decomposed into four orientation-invariant parameters, a relative phase and a relative orientation. The physical interpretation of these results is found in the wave-particle nature of radar scattering due to the circular polarization nature of elemental packets of energy. The proposed decomposition is based on a left-orthogonal-to-left Special Unitary basis, providing the target description in terms of a unitary vector. A comparison between the proposed CTD and the Cameron, Kennaugh and Krogager decompositions is also presented. Validation using both anechoic chamber data and airborne EMISAR data from DTU shows the effectiveness of this decomposition for the analysis of coherent targets. In a second paper we will show the application of the rotation group U(3) for the decomposition of distributed targets into nine meaningful parameters.
Eye and Head Movement Characteristics in Free Visual Search of Flight-Simulator Imagery
2010-03-01
conspicuity. However, only gaze amplitude varied significantly with IFOV. A two-parameter (scale and exponent) power function was fitted to the ... main-sequence amplitude-duration data. Both parameters varied significantly with target conspicuity, but in opposite directions. Neither parameter ... IFOV.
9Be scattering with microscopic wave functions and the continuum-discretized coupled-channel method
NASA Astrophysics Data System (ADS)
Descouvemont, P.; Itagaki, N.
2018-01-01
We use microscopic 9Be wave functions defined in an α+α+n multicluster model to compute 9Be+target scattering cross sections. The parameter sets describing 9Be are generated in the spirit of the stochastic variational method, and the optimal solution is obtained by superposing Slater determinants and by diagonalizing the Hamiltonian. The 9Be three-body continuum is approximated by square-integrable wave functions. The 9Be microscopic wave functions are then used in a continuum-discretized coupled-channel (CDCC) calculation of 9Be+208Pb and of 9Be+27Al elastic scattering. Without any parameter fitting, we obtain a fair agreement with experiment. For a heavy target, the influence of 9Be breakup is important, while it is weaker for light targets. This result confirms previous nonmicroscopic CDCC calculations. One of the main advantages of the microscopic CDCC is that it is based on nucleon-target interactions only; there is no adjustable parameter. The present work represents a first step towards more ambitious calculations involving heavier Be isotopes.
The effect of kinematic parameters on inelastic scattering of glyoxal.
Duca, Mariana D
2004-10-08
The effect of kinematic parameters (relative velocity v(rel), relative momentum p(rel), and relative energy E(rel)) on the rotational and rovibrational inelastic scatterings of 0(0)K(0)S(1) trans-glyoxal has been investigated by colliding glyoxal seeded in He or Ar with target gases D2, He, or Ne at different scattering angles in crossed supersonic beams. The inelastic spectra for target gases He and D2 acquired with two different sets of kinematic parameters revealed no significant differences. This result shows that kinematic factors have the major influence in the inelastic scattering channel competition whereas the intermolecular potential energy surface plays only a secondary role. The well-defined exponential dependence of relative cross sections on exchanged angular momentum identifies angular momentum as the dominant kinematic factor in collision-induced rotationally and rovibrationally inelastic scatterings. This is supported by the behavior of the relative inelastic cross sections data in a "slope-p(rel)" representation. In this form, the data show a trend nearly independent of the target gas identity. Representations involving E(rel) and v(rel) show trends specific to the target gas.
Pesticide Dose - A Parameter with Many Implications
USDA-ARS?s Scientific Manuscript database
Like pharmaceuticals, pesticides can have unintended effects, even when used at the proper dose. For pesticides, the possible effects are even more diverse, because the chemicals are released immediately into the environment and the dose reaching the intended target(s) and unintended targets can var...
Precise Target Geolocation Based on Integration of Thermal Video Imagery and RTK GPS in UAVs
NASA Astrophysics Data System (ADS)
Hosseinpoor, H. R.; Samadzadegan, F.; Dadras Javan, F.
2015-12-01
There is an increasingly large number of uses for Unmanned Aerial Vehicles (UAVs), including surveillance, mapping and target geolocation. However, most commercial UAVs are equipped with low-cost navigation sensors, such as C/A-code GPS and a low-cost IMU, allowing a positioning accuracy of only 5 to 10 meters; this is insufficient for applications that require cm-level precision. This paper presents a precise process for geolocation of ground targets based on thermal video imagery acquired by a small UAV equipped with RTK GPS. The geolocation data is filtered using a linear Kalman filter, which provides a smoothed estimate of target location and target velocity. Accurate geolocation of targets during image acquisition is carried out via traditional photogrammetric bundle adjustment equations, using exterior orientation parameters obtained from the on-board IMU and RTK GPS sensors with Kalman filtering, and interior orientation parameters of the thermal camera from a pre-flight laboratory calibration.
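A generic sketch of the Kalman filtering step described above (constant-velocity model; the sampling rate, noise levels and synthetic fixes are assumptions, not the paper's tuning):

```python
import numpy as np

def constant_velocity_kalman(measurements, dt=0.1, meas_std=2.0, accel_std=0.5):
    """Smooth noisy 2D target position fixes with a constant-velocity Kalman filter.
    State x = [E, N, vE, vN]; measurements are (E, N) ground coordinates in meters."""
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
    Q = accel_std ** 2 * np.diag([dt ** 4 / 4, dt ** 4 / 4, dt ** 2, dt ** 2])
    R = meas_std ** 2 * np.eye(2)
    x = np.array([measurements[0][0], measurements[0][1], 0.0, 0.0])
    P = np.eye(4) * 10.0
    smoothed = []
    for z in measurements:
        x, P = F @ x, F @ P @ F.T + Q                  # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        x = x + K @ (np.asarray(z) - H @ x)            # update with the new fix
        P = (np.eye(4) - K @ H) @ P
        smoothed.append(x.copy())
    return np.array(smoothed)

# Hypothetical noisy geolocation fixes of a target moving east at 5 m/s.
truth = np.column_stack([np.arange(50) * 0.5, np.zeros(50)])
fixes = truth + np.random.default_rng(1).normal(0, 2.0, truth.shape)
est = constant_velocity_kalman(fixes, dt=0.1)
print("final estimated velocity (m/s):", est[-1, 2:].round(2))
```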
Wang, Yong
2015-01-01
A novel radar imaging approach for non-uniformly rotating targets is proposed in this study. It is assumed that the maneuverability of the non-cooperative target is severe, and the received signal in a range cell can be modeled as multi-component amplitude-modulated and frequency-modulated (AM-FM) signals after motion compensation. Then, the modified version of Chirplet decomposition (MCD) based on the integrated high order ambiguity function (IHAF) is presented for the parameter estimation of AM-FM signals, and the corresponding high quality instantaneous ISAR images can be obtained from the estimated parameters. Compared with the MCD algorithm based on the generalized cubic phase function (GCPF) in the authors’ previous paper, the novel algorithm presented in this paper is more accurate and efficient, and the results with simulated and real data demonstrate the superiority of the proposed method. PMID:25806870
NASA Astrophysics Data System (ADS)
Yang, Qi; Deng, Bin; Wang, Hongqiang; Qin, Yuliang
2017-07-01
Rotation is one of the typical micro-motions of radar targets. In many cases, rotation of the target is accompanied by vibration interference, which significantly affects parameter estimation and imaging, especially in the terahertz band. In this paper, we propose a parameter estimation method and an image reconstruction method based on the inverse Radon transform, time-frequency analysis, and its inverse. The method can separate and estimate the rotating Doppler and the vibrating Doppler simultaneously and can obtain high-quality reconstructed images after vibration compensation. In addition, a 322-GHz radar system and a 25-GHz commercial radar are introduced, and experiments on rotating corner reflectors are carried out. The results of the simulations and experiments verify the validity of the methods, laying a foundation for practical terahertz radar processing.
A BRDF statistical model applying to space target materials modeling
NASA Astrophysics Data System (ADS)
Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen
2017-10-01
To address the poor performance of the five-parameter semi-empirical model in fitting densely sampled BRDF measurement data, a refined statistical BRDF model suitable for modeling multiple classes of space target materials is proposed. The refined model improves on the Torrance-Sparrow model while retaining the modeling advantages of the five-parameter model. Compared with existing empirical models, the model contains six simple parameters, which approximate the roughness distribution of the material surface, the strength of the Fresnel reflectance, and the attenuation of the reflected brightness as the azimuth angle changes. The model is able to achieve parameter inversion quickly with no extra loss of accuracy. A genetic algorithm was used to invert the parameters of 11 samples of materials commonly used on space targets; the fitting errors for all materials were below 6%, much lower than those of the five-parameter model. The effect of the refined model is verified by comparing the fitting results of three samples at different incident zenith angles at 0° azimuth. Finally, three-dimensional visualizations of these samples over the upper hemisphere are given, clearly showing the optical scattering strength of the different materials and confirming the descriptive power of the refined model for material characterization.
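A minimal sketch of this kind of evolutionary parameter inversion, fitting a toy Torrance-Sparrow-style BRDF (a three-parameter stand-in, not the authors' six-parameter form) to synthetic measurements with a differential-evolution optimizer:

```python
import numpy as np
from scipy.optimize import differential_evolution

def brdf_model(theta_i, theta_r, params):
    """Toy BRDF: Lambertian diffuse term plus a Gaussian specular lobe.
    params = (kd, ks, roughness); angles in radians, in-plane geometry only."""
    kd, ks, m = params
    half = 0.5 * (theta_i + theta_r)                  # half-angle for the specular lobe
    specular = ks * np.exp(-np.tan(half) ** 2 / m ** 2) / np.cos(theta_r)
    return kd / np.pi + specular

def fit_brdf(theta_i, theta_r, measured):
    """Invert (kd, ks, roughness) by minimizing the relative fitting error."""
    def cost(p):
        pred = brdf_model(theta_i, theta_r, p)
        return np.mean((pred - measured) ** 2 / (measured ** 2 + 1e-12))
    bounds = [(0.0, 1.0), (0.0, 5.0), (0.05, 1.0)]
    return differential_evolution(cost, bounds, seed=0, tol=1e-8)

# Synthetic "measurements" generated from known parameters, to check the inversion.
ti = np.deg2rad(np.full(30, 30.0))
tr = np.deg2rad(np.linspace(-60, 60, 30))
true_params = (0.2, 1.5, 0.3)
meas = brdf_model(ti, tr, true_params) * (1 + 0.02 * np.random.default_rng(0).normal(size=30))
print("recovered (kd, ks, roughness):", fit_brdf(ti, tr, meas).x.round(3))
```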
NASA Astrophysics Data System (ADS)
Michel, Patrick; Jutzi, Martin; Richardson, Derek C.
2014-11-01
In recent years, we have shown by numerical impact simulations that collisions and gravitational reaccumulation together can explain the formation of asteroid families and satellites (e.g. [1]). We also found that the presence of microporosity influences the outcome of a catastrophic disruption ([2], [3]). The size-frequency distributions (SFDs) resulting from the disruption of 100-km-diameter targets consisting of either monolithic non-porous basalt or non-porous basalt blocks held together by gravity (termed rubble piles by the investigators) have already been determined ([4], [5]). Using the same wide range of collision speeds, impact angles, and impactor sizes, we extended those studies to targets consisting of porous material represented by parameters for pumice. Dark-type asteroid families, such as C-type, are often considered to contain a high fraction of porosity (including microporosity). To determine the impact conditions for dark-type asteroid family formation, a comparison is needed between the actual family SFD and that of impact disruptions of porous bodies. Moreover, the comparison between the disruptions of non-porous, rubble-pile, and porous targets is important to assess the influence of various internal structures on the outcome. Our results show that, in terms of largest remnants, the outcomes for porous bodies are in general more similar to those for non-porous targets ([4]) than for rubble-pile targets ([5]). In particular, the latter targets are much weaker (the largest remnants are much smaller). We suspect that this is because the pressure-dependent shear strength between the individual components of the rubble pile is not properly modeled, which makes the body behave more like a fluid than an actual rubble pile. We will present our results and implications in terms of SFDs as well as ejection velocities over the entire considered parameter space. We will also check whether we find good agreement with existing dark-type asteroid families, allowing us to say something about their history. [1] Michel et al. 2001. Science 294, 1696. [2] Jutzi et al. 2008. Icarus 198, 242. [3] Jutzi et al. 2010. Icarus 207, 54. [4] Durda et al. 2007. Icarus 186, 498. [5] Benavidez et al. 2012. Icarus 219, 57.
Gaussian mass optimization for kernel PCA parameters
NASA Astrophysics Data System (ADS)
Liu, Yong; Wang, Zulin
2011-10-01
This paper proposes a novel kernel parameter optimization method based on Gaussian mass, which aims to overcome current brute-force parameter optimization methods in a heuristic way. Generally speaking, the choice of kernel parameter should be tightly related to the target objects, while the variance between the samples, the most commonly used kernel parameter, does not capture many features of the target; this observation gives birth to Gaussian mass. Gaussian mass as defined in this paper is invariant under rotation and translation and is capable of depicting edge, topology and shape information. Simulation results show that Gaussian mass provides a promising heuristic optimization boost for kernel methods. On the MNIST handwriting database, the recognition rate improves by 1.6% compared with the common kernel method without Gaussian mass optimization. Several other promising directions in which Gaussian mass might help are also proposed at the end of the paper.
New method to design stellarator coils without the winding surface
NASA Astrophysics Data System (ADS)
Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; Wan, Yuanxi
2018-01-01
Finding an easy-to-build coil set has been a critical issue for stellarator design for decades. Conventional approaches assume a toroidal 'winding' surface, but a poorly chosen winding surface can unnecessarily constrain the coil optimization algorithm. This article presents a new method to design coils for stellarators. Each discrete coil is represented as an arbitrary, closed, one-dimensional curve embedded in three-dimensional space. A target function to be minimized that includes both physical requirements and engineering constraints is constructed. The derivatives of the target function with respect to the parameters describing the coil geometries and currents are calculated analytically. A numerical code, named flexible optimized coils using space curves (FOCUS), has been developed. Applications to a simple stellarator configuration, W7-X and LHD vacuum fields are presented.
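A toy sketch of the underlying representation (a closed space curve described by Fourier coefficients, with a target function minimized over those coefficients); the cost terms here are placeholders for illustration only, not the FOCUS physics objective:

```python
import numpy as np
from scipy.optimize import minimize

N_MODES, N_T = 3, 128
t = np.linspace(0, 2 * np.pi, N_T, endpoint=False)

def curve(coeffs):
    """Closed 3D curve from cosine/sine Fourier coefficients, shape (3, 2*N_MODES+1)."""
    c = coeffs.reshape(3, 2 * N_MODES + 1)
    basis = [np.ones_like(t)]
    for n in range(1, N_MODES + 1):
        basis += [np.cos(n * t), np.sin(n * t)]
    return c @ np.vstack(basis)                      # (3, N_T) points along the coil

def target_function(coeffs):
    """Toy objective: stay close to a radius-1 circle in the z=0 plane (a stand-in
    for the physics term) plus a penalty on coil length (an engineering term)."""
    xyz = curve(coeffs)
    r = np.sqrt(xyz[0] ** 2 + xyz[1] ** 2)
    physics = np.mean((r - 1.0) ** 2 + xyz[2] ** 2)
    length = np.sum(np.linalg.norm(np.diff(xyz, axis=1), axis=0))
    return physics + 0.01 * length

x0 = 0.1 * np.random.default_rng(0).normal(size=3 * (2 * N_MODES + 1))
res = minimize(target_function, x0, method="BFGS")
print("optimized target value:", round(res.fun, 4))
```

FOCUS itself supplies analytic derivatives of its target function; the sketch above relies on numerical gradients purely to keep the illustration short.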
Up Periscope! Designing a New Perceptual Metric for Imaging System Performance
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
2016-01-01
Modern electronic imaging systems include optics, sensors, sampling, noise, processing, compression, transmission and display elements, and are viewed by the human eye. Many of these elements cannot be assessed by traditional imaging system metrics such as the MTF. More complex metrics such as NVTherm do address these elements, but do so largely through parametric adjustment of an MTF-like metric. The parameters are adjusted through subjective testing of human observers identifying specific targets in a set of standard images. We have designed a new metric that is based on a model of human visual pattern classification. In contrast to previous metrics, ours simulates the human observer identifying the standard targets. One application of this metric is to quantify performance of modern electronic periscope systems on submarines.
NASA Technical Reports Server (NTRS)
Ulaby, F. T. (Principal Investigator); Jung, B.; Gillespie, K.; Hemmat, M.; Aslam, A.; Brunfeldt, D.; Dobson, M. C.
1983-01-01
A vegetation and soil-moisture experiment was conducted in order to examine the microwave emission and backscattering from vegetation canopies and soils. The data-acquisition methodology used in conjunction with the mobile radar scatterometer (MRS) systems is described and associated ground-truth data are documented. Test fields were located in the Kansas River floodplain north of Lawrence, Kansas. Ten fields each of wheat, corn, and soybeans were monitored over the greater part of their growing seasons. The tabulated data summarize measurements made by the sensor systems and represent target characteristics. Target parameters describing the vegetation and soil characteristics include plant moisture, density, height, and growth stage, as well as soil moisture and soil-bulk density. Complete listings of pertinent crop-canopy and soil measurements are given.
Majnarić-Trtica, Ljiljana; Vitale, Branko
2011-10-01
To introduce systems biology as a conceptual framework for research in family medicine, based on empirical data from a case study on the prediction of influenza vaccination outcomes. This concept is primarily oriented towards planning preventive interventions and includes systematic data recording, a multi-step research protocol and predictive modelling. Factors known to affect responses to influenza vaccination include older age, past exposure to influenza viruses, and chronic diseases; however, constructing useful prediction models remains a challenge, because of the need to identify health parameters that are appropriate for general use in modelling patients' responses. The sample consisted of 93 patients aged 50-89 years (median 69), with multiple medical conditions, who were vaccinated against influenza. Literature searches identified potentially predictive health-related parameters, including age, gender, diagnoses of the main chronic ageing diseases, anthropometric measures, and haematological and biochemical tests. By applying data mining algorithms, patterns were identified in the data set. Candidate health parameters, selected in this way, were then combined with information on past influenza virus exposure to build the prediction model using logistic regression. A highly significant prediction model was obtained, indicating that by using a systems biology approach it is possible to answer unresolved complex medical uncertainties. Adopting this systems biology approach can be expected to be useful in identifying the most appropriate target groups for other preventive programmes.
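A minimal sketch of the final modelling step (logistic regression on selected health parameters plus past virus exposure); the feature names, coefficients and outcomes below are hypothetical placeholders, not the study's variables or results:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 93
# Hypothetical candidate predictors: age, a chronic-disease flag, one blood test,
# and an indicator of past influenza virus exposure.
X = np.column_stack([
    rng.uniform(50, 89, n),          # age (years)
    rng.integers(0, 2, n),           # chronic ageing disease present
    rng.normal(5.5, 1.0, n),         # e.g. a haematological parameter
    rng.integers(0, 2, n),           # past virus exposure
])
# Hypothetical vaccination outcome (1 = protective antibody response).
logit = -8 + 0.08 * X[:, 0] - 0.6 * X[:, 1] + 1.2 * X[:, 3]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
print("coefficients:", model.coef_.round(2))
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean().round(2))
```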
NASA Astrophysics Data System (ADS)
Tatomirescu, Dragos; d'Humieres, Emmanuel; Vizman, Daniel
2017-12-01
The production of superior-quality ion and electron beams has been a hot research topic over the past decade due to advances in laser science. This work focuses on a parametric study of different target density profiles in order to determine their effect on the spatial distribution of the accelerated particle beam, the particle maximum energy, and the electromagnetic field characteristics. For the scope of this study, the laser pulse parameters were kept constant while the target parameters were varied. The study continues the work published in [1], focusing on the effects of target curvature coupled with a cone laser-focusing structure. The results show increased particle beam focusing and a significant enhancement in particle maximum energy.
Constraints on the pre-impact orbits of Solar system giant impactors
NASA Astrophysics Data System (ADS)
Jackson, Alan P.; Gabriel, Travis S. J.; Asphaug, Erik I.
2018-03-01
We provide a fast method for computing constraints on impactor pre-impact orbits, applying this to the late giant impacts in the Solar system. These constraints can be used to make quick, broad comparisons of different collision scenarios, identifying some immediately as low-probability events, and narrowing the parameter space in which to target follow-up studies with expensive N-body simulations. We benchmark our parameter space predictions, finding good agreement with existing N-body studies for the Moon. We suggest that high-velocity impact scenarios in the inner Solar system, including all currently proposed single impact scenarios for the formation of Mercury, should be disfavoured. This leaves a multiple hit-and-run scenario as the most probable currently proposed for the formation of Mercury.
Optimization of a chemical identification algorithm
NASA Astrophysics Data System (ADS)
Chyba, Thomas H.; Fisk, Brian; Gunning, Christin; Farley, Kevin; Polizzi, Amber; Baughman, David; Simpson, Steven; Slamani, Mohamed-Adel; Almassy, Robert; Da Re, Ryan; Li, Eunice; MacDonald, Steve; Slamani, Ahmed; Mitchell, Scott A.; Pendell-Jones, Jay; Reed, Timothy L.; Emge, Darren
2010-04-01
A procedure to evaluate and optimize the performance of a chemical identification algorithm is presented. The Joint Contaminated Surface Detector (JCSD) employs Raman spectroscopy to detect and identify surface chemical contamination. JCSD measurements of chemical warfare agents, simulants, toxic industrial chemicals, interferents and bare surface backgrounds were made in the laboratory and under realistic field conditions. A test data suite, developed from these measurements, is used to benchmark algorithm performance throughout the improvement process. In any one measurement, one of many possible targets can be present along with interferents and surfaces. The detection results are expressed as a 2-category classification problem so that Receiver Operating Characteristic (ROC) techniques can be applied. The limitations of applying this framework to chemical detection problems are discussed along with means to mitigate them. Algorithmic performance is optimized globally using robust Design of Experiments and Taguchi techniques. These methods require figures of merit to trade off between false alarms and detection probability. Several figures of merit, including the Matthews Correlation Coefficient and the Taguchi Signal-to-Noise Ratio are compared. Following the optimization of global parameters which govern the algorithm behavior across all target chemicals, ROC techniques are employed to optimize chemical-specific parameters to further improve performance.
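One of the figures of merit mentioned, the Matthews Correlation Coefficient, follows directly from the 2-category confusion matrix; a small sketch with made-up detection counts (not results from the JCSD test suite):

```python
import math

def matthews_corrcoef(tp, fp, tn, fn):
    """Matthews Correlation Coefficient from 2-class confusion-matrix counts:
    +1 is perfect detection, 0 is chance-level, -1 is total disagreement."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

# Hypothetical counts for one algorithm configuration on the test suite:
# 80 detections, 20 missed targets, 12 false alarms, 95 correct rejections.
print("MCC:", round(matthews_corrcoef(tp=80, fp=12, tn=95, fn=20), 3))
```

Because it balances detections against false alarms in a single number, a metric like this can serve as the response variable in the Design-of-Experiments and Taguchi optimization described above.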
Choice of modality with the use of high-performance membrane and evaluation for clinical effects.
Masakane, Ikuto
2011-01-01
The golden target for dialysis therapy should be to guarantee longer survival and to give a higher quality of life without dialysis-related complications. In order to achieve this target, the choice of dialysis modality and membrane is essential, but we have not yet established what the best choices of dialysis modality and membrane are. Generally, we choose a dialysis modality for better solute removal and better biocompatibility. In this issue we would like to propose that the patients' preference for dialysis therapy is a useful parameter in prescribing the dialysis modality. In our recent experience, chronic dialysis patients have had preferences regarding dialysis modality and membrane, namely PMMA, EVAL, AN-69 and pre-dilution online HDF. These modalities could relieve them of uncomfortable dialysis-related symptoms such as insomnia, itchiness, irritability, and so on. Other characteristics of these modalities include a nutritional advantage, a broad removal pattern of uremic toxins, including low-molecular-weight proteins and protein-bound uremic toxins, and good biocompatibility free from chemical components of the dialysis membrane. In conclusion, patients' symptoms could be a useful parameter for choosing a dialysis modality and membrane. Copyright © 2011 S. Karger AG, Basel.
Quadrant III RFI draft report: Volume 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1992-12-01
The purpose of the RCRA Facility Investigation (RFI) at The Portsmouth Gaseous Diffusion Plant (PORTS) is to acquire, analyze and interpret data that will: characterize the environmental setting, including ground water, surface water and sediment, soil and air; define and characterize sources of contamination; characterize the vertical and horizontal extent and degree of contamination of the environment; assess the risk to human health and the environment resulting from possible exposure to contaminants; and support the Corrective Measures Study (CMS), which will follow the RFI, if required. A total of 18 Solid Waste Management Units (SWMU's) were investigated. All surficial soil samples (0–2 ft), sediment samples and surface-water samples proposed in the approved Quadrant III RFI Work Plan were collected as specified in the approved work plan and RFI Sampling Plan. All soil, sediment and surface-water samples were analyzed for parameters specified from the Target Compound List and Target Analyte List (TCL/TAL) as listed in the US EPA Statement of Work for Inorganic (7/88a) and Organic (2/88b) analyses for Soil and Sediment, and analyses for fluoride, Freon-113 and radiological parameters (total uranium, gross alpha, gross beta and technetium).
Characterization and optimization of the HyperV PLX- α coaxial-gun plasma jet
NASA Astrophysics Data System (ADS)
Case, Andrew; Brockington, Sam; Cruz, Edward; Witherspoon, F. Douglas
2017-10-01
We present results from characterizing and optimizing performance of the contoured gap coaxial plasma guns under development for the ARPA-E Accelerating Low-Cost Plasma Heating And Assembly (ALPHA) program. Plasma jet diagnostics include fast photodiodes for velocimetry and interferometry for line integrated density. Additionally we present results from spectroscopy, both time resolved high resolution spectroscopy using a novel detector and time integrated survey spectroscopy, for measurements of velocity and temperature as well as impurities. Fast imaging gives plume geometry and time integrated imaging gives overall light emission. Results from a novel long record length camera developed by HyperV will also be presented. Experimental results are compared to the desired target parameters for the plasma jets. The target values for the plasmoid are velocity of 50 km/s, mass of 3.5 mg, and length of 10 cm. The best results so far from the exploration of parameter space for gun operation are: 4 mg at >50 km/s, with a length of 10 cm. Peak axial density 34 cm downstream from the muzzle is 2 × 10¹⁶ cm⁻³. This work was supported by the ARPA-E ALPHA Program under contract DE-AR0000566.
Precise Target Geolocation and Tracking Based on UAV Video Imagery
NASA Astrophysics Data System (ADS)
Hosseinpoor, H. R.; Samadzadegan, F.; Dadrasjavan, F.
2016-06-01
There is an increasingly large number of applications for Unmanned Aerial Vehicles (UAVs), ranging from monitoring and mapping to target geolocation. However, most commercial UAVs are equipped with low-cost navigation sensors such as C/A-code GPS and a low-cost IMU on board, allowing a positioning accuracy of only 5 to 10 meters. This accuracy is insufficient for applications that require centimetre-level precision. This paper presents a precise process for geolocation of ground targets based on thermal video imagery acquired by a small UAV equipped with RTK GPS. The geolocation data are filtered using an extended Kalman filter, which provides a smoothed estimate of target location and target velocity. Accurate geolocation of targets during image acquisition is performed via traditional photogrammetric bundle adjustment equations, using accurate exterior orientation parameters from the on-board IMU and RTK GPS sensors with Kalman filtering, and interior orientation parameters of the thermal camera from a pre-flight laboratory calibration process. Compared with code-based ordinary GPS, the results of this study indicate that RTK observations with the proposed method improve target geolocation accuracy by more than a factor of 10.
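As an illustration of the filtering step described above, the sketch below runs one predict/update cycle of a constant-velocity Kalman filter on a 2D target position. It is a simplified linear stand-in for the extended Kalman filter in the paper, and the noise values q and r are assumptions.

```python
import numpy as np

def kf_update(x, P, z, dt, q=1.0, r=4.0):
    """One predict/update step of a constant-velocity Kalman filter tracking
    2D target position and velocity (simplified stand-in for the EKF above)."""
    F = np.block([[np.eye(2), dt * np.eye(2)],
                  [np.zeros((2, 2)), np.eye(2)]])   # constant-velocity dynamics
    H = np.hstack([np.eye(2), np.zeros((2, 2))])    # only position is observed
    Q = q * np.eye(4)
    R = r * np.eye(2)
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the measured geolocation z (2-vector)
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

x = np.array([0.0, 0.0, 1.0, 0.5])      # position (m) and velocity (m/s)
P = np.eye(4)
x, P = kf_update(x, P, z=np.array([1.1, 0.4]), dt=1.0)
print(x)
```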
Distinctive Behaviors of Druggable Proteins in Cellular Networks
Workman, Paul; Al-Lazikani, Bissan
2015-01-01
The interaction environment of a protein in a cellular network is important in defining the role that the protein plays in the system as a whole, and thus its potential suitability as a drug target. Despite the importance of the network environment, it is neglected during target selection for drug discovery. Here, we present the first systematic, comprehensive computational analysis of topological, community and graphical network parameters of the human interactome and identify discriminatory network patterns that strongly distinguish drug targets from the interactome as a whole. Importantly, we identify striking differences in the network behavior of targets of cancer drugs versus targets from other therapeutic areas and explore how they may relate to successful drug combinations to overcome acquired resistance to cancer drugs. We develop, computationally validate and provide the first public domain predictive algorithm for identifying druggable neighborhoods based on network parameters. We also make available full predictions for 13,345 proteins to aid target selection for drug discovery. All target predictions are available through canSAR.icr.ac.uk. Underlying data and tools are available at https://cansar.icr.ac.uk/cansar/publications/druggable_network_neighbourhoods/. PMID:26699810
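To make the kind of topological parameters discussed above concrete, the sketch below computes a few standard network measures for a node using networkx; the toy graph and the chosen measures are illustrative and not the study's full feature set or data.

```python
import networkx as nx

def network_features(G, node):
    """A small, illustrative subset of topological parameters of the kind
    compared between drug targets and the rest of the interactome."""
    return {
        "degree": G.degree(node),
        "degree_centrality": nx.degree_centrality(G)[node],
        "betweenness": nx.betweenness_centrality(G)[node],
        "clustering": nx.clustering(G, node),
    }

# toy stand-in for a protein interaction network
G = nx.karate_club_graph()
print(network_features(G, 0))
```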
Kaiser, Ulrike; Sabatowski, Rainer; Balck, Friedrich
2017-08-01
The assessment of treatment effectiveness in public health settings relies on indicators that reflect the changes caused by specific interventions. These indicators are also applied in benchmarking systems. The selection of constructs should be guided by their relevance for affected patients (patient-reported outcomes). Interdisciplinary multimodal pain therapy (IMPT) is a complex intervention based on a biopsychosocial understanding of chronic pain. For quality assurance purposes, psychological parameters (depression, general anxiety, health-related quality of life) are included in standardized therapy assessment in pain medicine (KEDOQ), which can also be used for comparative analyses in a benchmarking system. The aim of the present study was to investigate the relevance of depressive symptoms, general anxiety and mental quality of life in patients undergoing IMPT under real-life conditions. In this retrospective, single-arm, exploratory observational study we used secondary data from the routine documentation of IMPT in routine care, applying several variables of the German Pain Questionnaire and the facility's comprehensive basic documentation. A total of 352 participants undergoing IMPT (from 2006 to 2010) were included, and follow-up was performed over two years with six assessments. Because of statistically heterogeneous characteristics, a complex analysis consisting of factor and cluster analyses was applied to build subgroups. These subgroups were explored to identify differences in depressive symptoms (HADS-D), general anxiety (HADS-A), and mental quality of life (SF-36 PSK) at the time of therapy admission and their development estimated by means of effect sizes. Analyses were performed using SPSS 21.0®. Six subgroups were derived and mainly proved to be clinically and psychologically normal, with the exception of one subgroup that consistently showed psychological impairment on all three parameters. Follow-up of the total study population revealed medium or large effects; the changes were driven mainly by two subgroups, while the other four showed little or no change. In summary, only a small proportion of the target population (20%) demonstrated clinically relevant scores on the psychological parameters applied. When selecting indicators for quality assurance, the heterogeneity of the target populations as well as conceptual and methodological aspects should be considered. The characteristics of the intended parameters, along with the clinical and personal relevance of indicators for patients, should be investigated by specific procedures such as patient surveys and statistical analyses. Copyright © 2017. Published by Elsevier GmbH.
Optimization of process parameters for RF sputter deposition of tin-nitride thin-films
NASA Astrophysics Data System (ADS)
Jangid, Teena; Rao, G. Mohan
2018-05-01
Radio-frequency magnetron sputtering was employed to deposit tin-nitride thin films on Si and glass substrates at different process parameters. The influence of parameters such as substrate temperature, target-substrate distance and RF power is studied in detail. X-ray diffraction is used as the key technique for analyzing changes in the stoichiometric and structural properties of the deposited films. Depending on the combination of deposition parameters, crystalline as well as amorphous films were obtained. Pure tin-nitride thin films were deposited at 15 W RF power and 600 °C substrate temperature with the target-substrate distance fixed at 10 cm. The bandgap value of 1.6 eV calculated for the film deposited at optimum process conditions matches well with reported values.
Wang, Hongyuan; Zhang, Wei; Dong, Aotuo
2012-11-10
A method for modeling and validating the photometric characteristics of space targets was presented in order to track and identify different satellites effectively. Models of the background radiation characteristics of the target were built based on blackbody radiation theory. The geometric characteristics of the target were described by surface equations in its body coordinate system. The material characteristics of the target surface were described by a bidirectional reflectance distribution function model, which accounts for Gaussian surface statistics and microscale self-shadowing and is obtained by measurement and modeling in advance. The surfaces contributing to the observation system were determined by coordinate transformation according to the relative positions of the space-based target, the background radiation sources, and the observation platform. A mathematical model of the photometric characteristics of the space target was then built by summing the reflection components of all the surfaces. Photometric characteristics of the space-based target were simulated from its given geometrical dimensions, physical parameters, and orbital parameters. Experimental validation was performed using a scale model of the satellite. The calculated results fit the measured results well, which indicates that the modeling method for the photometric characteristics of the space target is correct.
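A minimal sketch of the summation step described above, assuming purely Lambertian facets in place of the measured BRDF model: only facets visible to both the illumination source and the observer contribute to the target's apparent brightness. The facet normals, areas and albedo value are hypothetical.

```python
import numpy as np

def facet_brightness(normals, areas, sun_dir, obs_dir, albedo=0.3):
    """Sum reflected-light contributions of a faceted target surface, with
    Lambertian facets standing in for the measured BRDF model; facets facing
    away from the Sun or the observer contribute nothing."""
    sun_dir = sun_dir / np.linalg.norm(sun_dir)
    obs_dir = obs_dir / np.linalg.norm(obs_dir)
    mu0 = normals @ sun_dir          # cosine of incidence angle per facet
    mu = normals @ obs_dir           # cosine of emission angle per facet
    visible = (mu0 > 0) & (mu > 0)   # facets contributing to the observer
    return np.sum(albedo / np.pi * areas[visible] * mu0[visible] * mu[visible])

normals = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [1.0, 0.0, 0.0]])
areas = np.array([2.0, 1.0, 1.5])
print(facet_brightness(normals, areas,
                       sun_dir=np.array([0.0, 0.3, 1.0]),
                       obs_dir=np.array([0.2, 0.0, 1.0])))
```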
General strategy for the protection of organs at risk in IMRT therapy of a moving body
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abolfath, Ramin M.; Papiez, Lech
2009-07-15
We investigated protection strategies of organs at risk (OARs) in intensity modulated radiation therapy (IMRT). These strategies apply to delivery of IMRT to moving body anatomies that show relative displacement of OAR in close proximity to a tumor target. We formulated an efficient genetic algorithm which makes it possible to search for global minima in a complex landscape of multiple irradiation strategies delivering a given, predetermined intensity map to a target. The optimal strategy was investigated with respect to minimizing the dose delivered to the OAR. The optimization procedure developed relies on variability of all parameters available for control of radiation delivery in modern linear accelerators, including adaptation of leaf trajectories and simultaneous modification of beam dose rate during irradiation. We showed that the optimization algorithms lead to a significant reduction in the dose delivered to OAR in cases where organs at risk move relative to a treatment target.
Huang, Xiaojia; Lin, Jianbin; Yuan, Dongxing; Hu, Rongzong
2009-04-17
In this study, a simple and rapid method was developed for the determination of seven steroid hormones in wastewater. Sample preparation and analysis were performed by stir bar sorptive extraction (SBSE) based on poly(vinylpyridine-ethylene dimethacrylate) monolithic material (SBSEM) combined with high-performance liquid chromatography with diode array detection. To achieve the optimum extraction performance, several main parameters, including extraction and desorption time, pH value and contents of inorganic salt in the sample matrix, were investigated. Under the optimized experimental conditions, the method showed good linearity and repeatability, as well as advantages such as sensitivity, simplicity, low cost and high feasibility. The extraction performance of SBSEM for the target compounds was also compared with that of commercial SBSE, which uses a polydimethylsiloxane coating. Finally, the proposed method was successfully applied to the determination of the target compounds in wastewater samples. The recoveries of spiked target compounds in real samples ranged from 48.2% to 110%.
SHOP: a method for structure-based fragment and scaffold hopping.
Fontaine, Fabien; Cross, Simon; Plasencia, Guillem; Pastor, Manuel; Zamora, Ismael
2009-03-01
A new method for fragment and scaffold replacement is presented that generates new families of compounds with biological activity, using GRID molecular interaction fields (MIFs) and the crystal structure of the targets. In contrast to virtual screening strategies, this methodology aims only to replace a fragment of the original molecule, maintaining the other structural elements that are known or suspected to have a critical role in ligand binding. First, we report a validation of the method, recovering up to 95% of the original fragments searched among the top-five proposed solutions, using 164 fragment queries from 11 diverse targets. Second, six key customizable parameters are investigated, concluding that filtering the receptor MIF using the co-crystallized ligand atom type has the greatest impact on the ranking of the proposed solutions. Finally, 11 examples using more realistic scenarios have been performed; diverse chemotypes are returned, including some that are similar to compounds that are known to bind to similar targets.
NASA Technical Reports Server (NTRS)
Everett, L.
1992-01-01
This report documents the performance characteristics of a Targeting Reflective Alignment Concept (TRAC) sensor. The performance will be documented for both short and long ranges. For long ranges, the sensor is used without the flat mirror attached to the target. To better understand the capabilities of the TRAC based sensors, an engineering model is required. The model can be used to better design the system for a particular application. This is necessary because there are many interrelated design variables in any application. These include lens parameters, camera configuration, and target configuration. The report presents first an analytical development of the performance, and second an experimental verification of the equations. In the analytical presentation it is assumed that the best vision resolution is a single pixel element. The experimental results suggest, however, that the resolution is better than 1 pixel. Hence the analytical results should be considered worst case conditions. The report also discusses advantages and limitations of the TRAC sensor in light of the performance estimates. Finally the report discusses potential improvements.
MO-FG-CAMPUS-TeP2-04: Optimizing for a Specified Target Coverage Probability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fredriksson, A
2016-06-15
Purpose: The purpose of this work is to develop a method for inverse planning of radiation therapy margins. When using this method the user specifies a desired target coverage probability and the system optimizes to meet the demand without any explicit specification of margins to handle setup uncertainty. Methods: The method determines which voxels to include in an optimization function promoting target coverage in order to achieve a specified target coverage probability. Voxels are selected in a way that retains the correlation between them: The target is displaced according to the setup errors and the voxels to include are selected as the union of the displaced target regions under the x% best scenarios according to some quality measure. The quality measure could depend on the dose to the considered structure alone or could depend on the dose to multiple structures in order to take into account correlation between structures. Results: A target coverage function was applied to the CTV of a prostate case with prescription 78 Gy and compared to conventional planning using a DVH function on the PTV. Planning was performed to achieve 90% probability of CTV coverage. The plan optimized using the coverage probability function had P(D98 > 77.95 Gy) = 0.97 for the CTV. The PTV plan using a constraint on minimum DVH 78 Gy at 90% had P(D98 > 77.95) = 0.44 for the CTV. To match the coverage probability optimization, the DVH volume parameter had to be increased to 97% which resulted in 0.5 Gy higher average dose to the rectum. Conclusion: Optimizing a target coverage probability is an easily used method to find a margin that achieves the desired coverage probability. It can lead to reduced OAR doses at the same coverage probability compared to planning with margins and DVH functions.
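A minimal sketch of the voxel-selection idea described in the Methods above: displace a boolean target mask by each sampled setup error, rank scenarios by a quality measure (here simply the displacement magnitude, standing in for the dose-based measure in the abstract), and take the union of the displaced targets over the best-scoring fraction. The array shapes and error samples are hypothetical.

```python
import numpy as np

def coverage_voxels(target_mask, setup_errors, keep_fraction=0.9):
    """Union of displaced target masks over the best `keep_fraction` of
    setup-error scenarios (smaller displacement treated as better here)."""
    scores = np.linalg.norm(setup_errors, axis=1)
    keep = np.argsort(scores)[:int(np.ceil(keep_fraction * len(setup_errors)))]
    union = np.zeros_like(target_mask, dtype=bool)
    for shift in setup_errors[keep]:
        union |= np.roll(target_mask, tuple(int(s) for s in shift), axis=(0, 1, 2))
    return union

rng = np.random.default_rng(0)
target = np.zeros((40, 40, 40), dtype=bool)
target[15:25, 15:25, 15:25] = True               # cubic CTV stand-in
errors = rng.normal(0.0, 2.0, size=(100, 3))     # setup errors in voxel units
print(coverage_voxels(target, errors).sum(), "voxels selected")
```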
Heterogeneous Defensive Naval Weapon Assignment To Swarming Threats In Real Time
2016-03-01
threat_t: damage potential of target t if it hits the ship [integer from 0 to 3]
target_phit_t: probability that target t hits the ship [probability ...]
... secondary weapon systems on target t [integer]
sec_phit_t: probability that secondary weapon systems launched from target t hit the ship ... pairing. These parameters are calculated as follows: priority_target_t = 10³ × threat_t × target_phit_t (3.1); priority_sec_t = 10³ × sec_t × ...
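A tiny worked version of Eq. (3.1) as reconstructed above; because the excerpt is truncated, the secondary-weapon expression is assumed to follow the same pattern.

```python
def target_priorities(threat, p_hit, n_secondary, p_hit_secondary):
    """Priority scores in the style of Eq. (3.1): a 10^3 weight times damage
    potential times hit probability (the secondary form is an assumption)."""
    priority_target = 1e3 * threat * p_hit
    priority_sec = 1e3 * n_secondary * p_hit_secondary
    return priority_target, priority_sec

print(target_priorities(threat=3, p_hit=0.6, n_secondary=2, p_hit_secondary=0.4))
```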
Computational design of short pulse laser driven iron opacity experiments
Martin, M. E.; London, R. A.; Goluoglu, S.; ...
2017-02-23
Here, the resolution of current disagreements between solar parameters calculated from models and observations would benefit from the experimental validation of theoretical opacity models. Iron's complex ionic structure and large contribution to the opacity in the radiative zone of the sun make iron a good candidate for validation. Short pulse lasers can be used to heat buried layer targets to plasma conditions comparable to the radiative zone of the sun, and the frequency dependent opacity can be inferred from the target's measured x-ray emission. Target and laser parameters must be optimized to reach specific plasma conditions and meet x-ray emission requirements. The HYDRA radiation hydrodynamics code is used to investigate the effects of modifying laser irradiance and target dimensions on the plasma conditions, x-ray emission, and inferred opacity of iron and iron-magnesium buried layer targets. It was determined that plasma conditions are dominantly controlled by the laser energy and the tamper thickness. The accuracy of the inferred opacity is sensitive to tamper emission and optical depth effects. Experiments at conditions relevant to the radiative zone of the sun would investigate the validity of opacity theories important to resolving disagreements between solar parameters calculated from models and observations.
Hollands, K L; Pelton, T A; van der Veen, S; Alharbi, S; Hollands, M A
2016-01-01
Although there is evidence that stroke survivors have reduced gait adaptability, the underlying mechanisms and the relationship to functional recovery are largely unknown. We explored the relationships between walking adaptability and clinical measures of balance, motor recovery and functional ability in stroke survivors. Stroke survivors (n=42) stepped to targets on a 6 m walkway, placed to elicit step lengthening, shortening and narrowing on the paretic and non-paretic sides. The number of targets missed during six walks and target stepping speed were recorded. Fugl-Meyer (FM), Berg Balance Scale (BBS), self-selected walking speed (SSWS) and single support (SS) and step length (SL) symmetry (using GaitRite when not walking to targets) were also assessed. Stepwise multiple-linear regression was used to model the relationships between: total targets missed, number missed with paretic and non-paretic legs, target stepping speed, and each clinical measure. Regression revealed a significant model for each outcome variable that included only one independent variable. Targets missed by the paretic limb was a significant predictor of FM (F(1,40)=6.54, p=0.014). Speed of target stepping was a significant predictor of both BBS (F(1,40)=26.36, p<0.0001) and SSWS (F(1,40)=37.00, p<0.0001). No variables were significant predictors of SL or SS asymmetry. Speed of target stepping was significantly predictive of BBS and SSWS, and paretic targets missed predicted FM, suggesting that fast target stepping requires good balance and accurate stepping demands good paretic leg function. The relationships between these parameters indicate gait adaptability is a clinically meaningful target for measurement and treatment of functionally adaptive walking ability in stroke survivors. Copyright © 2015 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonfrate, A; Farah, J; Sayah, R
2015-06-15
Purpose: To develop a parametric equation suitable for daily use in routine clinic to provide estimates of stray neutron doses in proton therapy. Methods: Monte Carlo (MC) calculations using the UF-NCI 1-year-old phantom were performed to determine the variation of stray neutron doses as a function of irradiation parameters for intracranial treatments. This was done by individually changing the proton beam energy, modulation width, collimator aperture and thickness, compensator thickness and the air gap size, and combining their impact on neutron doses into a single equation. The variation of neutron doses with distance from the target volume was also included. A first step consisted in establishing the fitting coefficients using 221 learning data points, which were neutron absorbed doses obtained with MC simulations, while a second step consisted in validating the final equation. Results: The variation of stray neutron doses with irradiation parameters was fitted with linear, polynomial and similar models, while a power-law model was used to fit the variation of stray neutron doses with the distance from the target volume. The parametric equation fitted the MC simulations well when establishing the fitting coefficients, as the discrepancies in the estimate of neutron absorbed doses were within 10%. The discrepancy can reach ∼25% for the bladder, the farthest organ from the target volume. Finally, the validation showed results in compliance with MC calculations, since the discrepancies were also within 10% for head-and-neck and thoracic organs while they can reach ∼25%, again for pelvic organs. Conclusion: The parametric equation presents promising results and will be validated for other target sites as well as other facilities, moving towards a universal method.
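The distance term of the parametric equation is described above as a power law; a minimal sketch of fitting such a term to Monte Carlo learning data is shown below. The distances, dose values and the functional form shown are assumptions for illustration, not the study's fitted coefficients.

```python
import numpy as np
from scipy.optimize import curve_fit

def neutron_dose_model(distance_cm, a, b):
    """Power-law fall-off of stray neutron absorbed dose with distance
    from the target volume, one term of a parametric dose equation."""
    return a * distance_cm ** (-b)

# hypothetical Monte Carlo learning data: distance (cm) vs dose (mGy per Gy)
d = np.array([10.0, 20.0, 40.0, 60.0, 80.0])
dose = np.array([1.2, 0.45, 0.16, 0.09, 0.06])
(a, b), _ = curve_fit(neutron_dose_model, d, dose, p0=(1.0, 1.0))
print(f"fitted a={a:.3f}, b={b:.2f}")
```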
Barlow, Steven M; Hozan, Mohsen; Lee, Jaehoon; Greenwood, Jake; Custead, Rebecca; Wardyn, Brianna; Tippin, Kaytlin
2018-04-27
The relation among several parameters of the ramp-and-hold isometric force contraction (peak force and dF/dt max during the initial phase of force recruitment, and the proportion of hold-phase at target) was quantified for the right and left thumb-index finger pinch, and lower lip midline compression in 40 neurotypical right-handed young adults (20 female/20 males) using wireless force sensors and data acquisition technology developed in our laboratory. In this visuomotor control task, participants produced ramp-and-hold isometric forces as 'rapidly and accurately' as possible to end-point target levels at 0.25, 0.5, 1 and 2 Newtons presented to a computer monitor in a randomized block design. Significant relations were found between the parameters of the ramp-and-hold lip force task and target force level, including the peak rate of force change (dF/dt max ), peak force, and the criterion percentage of force within ±5% of target during the contraction hold phase. A significant performance advantage was found among these force variables for the thumb-index finger over the lower lip. The maximum voluntary compression force (MVCF) task revealed highly significant differences in force output between the thumb-index fingers and lower lip (∼4.47-4.70 times greater for the digits versus lower lip), a significant advantage of the right thumb-index finger over the non-dominant left thumb-index finger (12% and 25% right hand advantage for males and females, respectively), and a significant sex difference (∼1.65-1.73 times greater among males). Copyright © 2018 Elsevier Ltd. All rights reserved.
Capsule Performance Optimization for the National Ignition Facility
NASA Astrophysics Data System (ADS)
Landen, Otto
2009-11-01
The overall goal of the capsule performance optimization campaign is to maximize the probability of ignition by experimentally correcting for likely residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models before proceeding to cryogenic-layered implosions and ignition attempts. This will be accomplished using a variety of targets that will set key laser, hohlraum and capsule parameters to maximize ignition capsule implosion velocity, while minimizing fuel adiabat, core shape asymmetry and ablator-fuel mix. The targets include high-Z re-emission spheres setting foot symmetry through foot cone power balance [1], liquid deuterium-filled "keyhole" targets setting shock speed and timing through the laser power profile [2], symmetry capsules setting peak cone power balance and hohlraum length [3], and streaked x-ray backlit imploding capsules setting ablator thickness [4]. We will show how results from successful tuning technique demonstration shots performed at the Omega facility under scaled hohlraum and capsule conditions relevant to the ignition design meet the required sensitivity and accuracy. We will also present estimates of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors, and show that these get reduced after a number of shots and iterations to meet an acceptable level of residual uncertainty. Finally, we will present results from upcoming tuning technique validation shots performed at NIF at near full-scale. Prepared by LLNL under Contract DE-AC52-07NA27344. [1] E. Dewald, et al., Rev. Sci. Instrum. 79 (2008) 10E903. [2] T.R. Boehly, et al., Phys. Plasmas 16 (2009) 056302. [3] G. Kyrala, et al., BAPS 53 (2008) 247. [4] D. Hicks, et al., BAPS 53 (2008) 2.
The RAPIDD ebola forecasting challenge: Synthesis and lessons learnt.
Viboud, Cécile; Sun, Kaiyuan; Gaffey, Robert; Ajelli, Marco; Fumanelli, Laura; Merler, Stefano; Zhang, Qian; Chowell, Gerardo; Simonsen, Lone; Vespignani, Alessandro
2018-03-01
Infectious disease forecasting is gaining traction in the public health community; however, limited systematic comparisons of model performance exist. Here we present the results of a synthetic forecasting challenge inspired by the West African Ebola crisis in 2014-2015 and involving 16 international academic teams and US government agencies, and compare the predictive performance of 8 independent modeling approaches. Challenge participants were invited to predict 140 epidemiological targets across 5 different time points of 4 synthetic Ebola outbreaks, each involving different levels of interventions and "fog of war" in outbreak data made available for predictions. Prediction targets included 1-4 week-ahead case incidences, outbreak size, peak timing, and several natural history parameters. With respect to weekly case incidence targets, ensemble predictions based on a Bayesian average of the 8 participating models outperformed any individual model and did substantially better than a null auto-regressive model. There was no relationship between model complexity and prediction accuracy; however, the top performing models for short-term weekly incidence were reactive models with few parameters, fitted to a short and recent part of the outbreak. Individual model outputs and ensemble predictions improved with data accuracy and availability; by the second time point, just before the peak of the epidemic, estimates of final size were within 20% of the target. The 4th challenge scenario - mirroring an uncontrolled Ebola outbreak with substantial data reporting noise - was poorly predicted by all modeling teams. Overall, this synthetic forecasting challenge provided a deep understanding of model performance under controlled data and epidemiological conditions. We recommend such "peace time" forecasting challenges as key elements to improve coordination and inspire collaboration between modeling groups ahead of the next pandemic threat, and to assess model forecasting accuracy for a variety of known and hypothetical pathogens. Published by Elsevier B.V.
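A minimal sketch of the ensembling idea mentioned above: average several models' weekly-incidence forecasts with weights proportional to each model's past predictive score, a simplified stand-in for the Bayesian model averaging used in the challenge. The forecasts and scores below are hypothetical.

```python
import numpy as np

def ensemble_forecast(model_forecasts, model_scores):
    """Score-weighted average of per-model forecasts (simplified stand-in
    for a Bayesian model average)."""
    w = np.asarray(model_scores, dtype=float)
    w = w / w.sum()
    return w @ np.asarray(model_forecasts, dtype=float)

# 3 models, each predicting 1- to 4-week-ahead case incidence (hypothetical)
forecasts = [[120, 150, 170, 160], [100, 140, 180, 200], [130, 155, 165, 150]]
scores = [0.5, 0.2, 0.3]
print(ensemble_forecast(forecasts, scores))
```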
Precision grid and hand motion for accurate needle insertion in brachytherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGill, Carl S.; Schwartz, Jonathon A.; Moore, Jason Z.
2011-08-15
Purpose: In prostate brachytherapy, a grid is used to guide a needle tip toward a preplanned location within the tissue. During insertion, the needle deflects en route resulting in target misplacement. In this paper, 18-gauge needle insertion experiments into phantom were performed to test effects of three parameters, which include the clearance between the grid hole and needle, the thickness of the grid, and the needle insertion speed. Measurement apparatus that consisted of two datum surfaces and digital depth gauge was developed to quantify needle deflections. Methods: The gauge repeatability and reproducibility (GR and R) test was performed on the measurement apparatus, and it proved to be capable of measuring a 2 mm tolerance from the target. Replicated experiments were performed on a 2³ factorial design (three parameters at two levels) and analysis included averages and standard deviation along with an analysis of variance (ANOVA) to find significant single and two-way interaction factors. Results: Results showed that grid with tight clearance hole and slow needle speed increased precision and accuracy of needle insertion. The tight grid was vital to enhance precision and accuracy of needle insertion for both slow and fast insertion speed; additionally, at slow speed the tight, thick grid improved needle precision and accuracy. Conclusions: In summary, the tight grid is important, regardless of speed. The grid design, which shows the capability to reduce the needle deflection in brachytherapy procedures, can potentially be implemented in the brachytherapy procedure.
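A minimal sketch of analysing a replicated 2³ factorial experiment like the one above: build the coded design matrix and estimate each main effect as the difference in mean response between the high and low levels. The deflection values are hypothetical, and a full analysis would add the ANOVA and interaction terms.

```python
import itertools
import numpy as np

# 2^3 full-factorial design in coded units (-1 = low, +1 = high) for the three
# factors studied: grid-hole clearance, grid thickness, insertion speed.
design = np.array(list(itertools.product([-1, 1], repeat=3)))

def main_effects(design, response):
    """Main effect of each factor: mean response at +1 minus mean at -1."""
    return [response[design[:, j] == 1].mean() - response[design[:, j] == -1].mean()
            for j in range(design.shape[1])]

# hypothetical mean needle deflections (mm) for the 8 runs
deflection = np.array([2.1, 1.5, 2.0, 1.4, 1.6, 1.1, 1.5, 1.0])
print(main_effects(design, deflection))
```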
Predictive process simulation of cryogenic implants for leading edge transistor design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gossmann, Hans-Joachim; Zographos, Nikolas; Park, Hugh
2012-11-06
Two cryogenic implant TCAD-modules have been developed: (i) A continuum-based compact model targeted towards a TCAD production environment calibrated against an extensive data-set for all common dopants. Ion-specific calibration parameters related to damage generation and dynamic annealing were used and resulted in excellent fits to the calibration data-set. (ii) A Kinetic Monte Carlo (kMC) model including the full time dependence of ion-exposure that a particular spot on the wafer experiences, as well as the resulting temperature vs. time profile of this spot. It was calibrated by adjusting damage generation and dynamic annealing parameters. The kMC simulations clearly demonstrate the importance of the time-structure of the beam for the amorphization process: Assuming an average dose-rate does not capture all of the physics and may lead to incorrect conclusions. The model enables optimization of the amorphization process through tool parameters such as scan speed or beam height.
Differential Evolution Optimization for Targeting Spacecraft Maneuver Plans
NASA Technical Reports Server (NTRS)
Mattern, Daniel
2016-01-01
Previous analysis identified specific orbital parameters as being safer for conjunction avoidance for the TDRS fleet. With TDRS-9 being considered an at-risk spacecraft, a potential conjunction concern was raised should TDRS-9 fail while at a longitude of 12W. This document summarizes the analysis performed to identify if these specific orbital parameters could be targeted using the remaining drift-termination maneuvers for the relocation of TDRS-9 from 41W longitude to 12W longitude.
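A minimal sketch of the targeting approach suggested by the title: use differential evolution to choose maneuver parameters that drive a simplified end-state toward desired orbital values. The toy propagation map, bounds and target values below are assumptions; a real analysis would call an orbit propagator.

```python
import numpy as np
from scipy.optimize import differential_evolution

# assumed target end-state: (eccentricity, longitude in degrees West)
TARGET = np.array([0.0004, 12.0])

def end_state(dv):
    """Toy stand-in for orbit propagation: smooth map from two
    drift-termination delta-v magnitudes (m/s) to the end-state."""
    return np.array([3e-4 + 1e-4 * dv[0] - 5e-5 * dv[1],
                     41.0 - 9.5 * dv[0] - 5.0 * dv[1]])

def cost(dv):
    # squared relative miss of the targeted orbital parameters
    return np.sum(((end_state(dv) - TARGET) / TARGET) ** 2)

result = differential_evolution(cost, bounds=[(0.0, 3.0), (0.0, 3.0)], seed=1)
print(result.x, result.fun)
```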
NASA Astrophysics Data System (ADS)
Ricciuto, D. M.; Mei, R.; Mao, J.; Hoffman, F. M.; Kumar, J.
2015-12-01
Uncertainties in land parameters could have important impacts on simulated water and energy fluxes and land surface states, which will consequently affect atmospheric and biogeochemical processes. Therefore, quantification of such parameter uncertainties using a land surface model is the first step towards better understanding of predictive uncertainty in Earth system models. In this study, we applied a random-sampling, high-dimensional model representation (RS-HDMR) method to analyze the sensitivity of simulated photosynthesis, surface energy fluxes and surface hydrological components to selected land parameters in version 4.5 of the Community Land Model (CLM4.5). Because of the large computational expense of conducting ensembles of global gridded model simulations, we used the results of a previous cluster analysis to select one thousand representative land grid cells for simulation. Plant functional type (PFT)-specific uniform prior ranges for land parameters were determined using expert opinion and literature survey, and samples were generated with a quasi-Monte Carlo approach (Sobol sequence). Preliminary analysis of 1024 simulations suggested that four PFT-dependent parameters (including slope of the conductance-photosynthesis relationship, specific leaf area at canopy top, leaf C:N ratio and fraction of leaf N in RuBisCO) are the dominant sensitive parameters for photosynthesis, surface energy and water fluxes across most PFTs, but with varying importance rankings. On the other hand, for surface and sub-surface runoff, PFT-independent parameters, such as the depth-dependent decay factors for runoff, play more important roles than the previous four PFT-dependent parameters. Further analyses conditioning the results on different seasons and years are being conducted to provide guidance on how climate variability and change might affect such sensitivity. This is the first step toward coupled simulations including biogeochemical processes, atmospheric processes or both to determine the full range of sensitivity of Earth system modeling to land-surface parameters. This can facilitate sampling strategies in measurement campaigns targeted at reduction of climate modeling uncertainties and can also provide guidance on land parameter calibration for simulation optimization.
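A minimal sketch of the quasi-Monte Carlo sampling step, assuming illustrative prior ranges for the four PFT-dependent parameters named above (not the ranges used in the study); 2^10 Sobol points match the 1024 ensemble members mentioned.

```python
import numpy as np
from scipy.stats import qmc

# assumed uniform prior ranges for four PFT-dependent parameters (illustrative)
names = ["conductance_slope", "sla_top", "leaf_cn", "frac_n_rubisco"]
lower = np.array([4.0, 0.01, 20.0, 0.05])
upper = np.array([12.0, 0.04, 60.0, 0.25])

sampler = qmc.Sobol(d=4, scramble=True, seed=0)
unit_samples = sampler.random_base2(m=10)          # 2**10 = 1024 samples
samples = qmc.scale(unit_samples, lower, upper)    # map to parameter ranges
print(samples.shape)                               # (1024, 4)
```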
NASA Astrophysics Data System (ADS)
Vitelaru, Catalin; Aijaz, Asim; Constantina Parau, Anca; Kiss, Adrian Emil; Sobetkii, Arcadie; Kubart, Tomas
2018-04-01
Pressure and target voltage driven discharge runaway from low to high discharge current density regimes in high power impulse magnetron sputtering of carbon is investigated. The main purpose is to provide meaningful insight into the discharge dynamics, with the ultimate goal of establishing a correlation between discharge properties and process parameters to control the film growth. This is achieved by examining a wide range of pressures (2–20 mTorr) and target voltages (700–850 V) and measuring ion saturation current density at the substrate position. We show that the minimum plasma impedance is an important parameter identifying the discharge transition as well as establishing a stable operating condition. Using the formalism of the generalized recycling model, we introduce a new parameter, the ‘recycling ratio’, to quantify the process gas recycling for specific process conditions. The model takes into account the ion flux to the target, the amount of gas available, and the amount of gas required for sustaining the discharge. We show that this parameter describes the relation between the gas recycling and the discharge current density. As a test case, we discuss the pressure and voltage driven transitions by changing the gas composition when adding Ne into the discharge. We propose that standard Ar HiPIMS discharges operated with significant gas recycling do not require Ne to increase the carbon ionization.
NASA Astrophysics Data System (ADS)
McCann, C.; Repasky, K. S.; Morin, M.; Lawrence, R. L.; Powell, S. L.
2016-12-01
Compact, cost-effective, flight-based hyperspectral imaging systems can provide scientifically relevant data over large areas for a variety of applications such as ecosystem studies, precision agriculture, and land management. To fully realize this capability, unsupervised classification techniques based on radiometrically-calibrated data that cluster based on biophysical similarity rather than simply spectral similarity are needed. An automated technique to produce high-resolution, large-area, radiometrically-calibrated hyperspectral data sets based on the Landsat surface reflectance data product as a calibration target was developed and applied to three subsequent years of data covering approximately 1850 hectares. The radiometrically-calibrated data allow inter-comparison of the temporal series. Advantages of the radiometric calibration technique include the need for minimal site access, no ancillary instrumentation, and automated processing. Fitting the reflectance spectra of each pixel using a set of biophysically relevant basis functions reduces the data from 80 spectral bands to 9 parameters, providing noise reduction and data compression. Examination of histograms of these parameters allows for determination of natural splitting into biophysically similar clusters. This method creates clusters that are similar in terms of biophysical parameters, not simply spectral proximity. Furthermore, this method can be applied to other data sets, such as urban scenes, by developing other physically meaningful basis functions. The ability to use hyperspectral imaging for a variety of important applications requires the development of data processing techniques that can be automated. The radiometric calibration combined with the histogram-based unsupervised classification technique presented here provides one potential avenue for managing the big data associated with hyperspectral imaging.
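A minimal sketch of the dimensionality-reduction step described above: fit each pixel's calibrated reflectance spectrum with a small set of basis functions by least squares. Gaussian bases are used here as a generic stand-in for the biophysically motivated basis functions in the abstract, and all numbers are illustrative.

```python
import numpy as np

def fit_basis(wavelengths, reflectance, centers, width=60.0):
    """Least-squares fit of one pixel's spectrum to Gaussian basis functions,
    reducing ~80 bands to a handful of coefficients per pixel."""
    B = np.exp(-0.5 * ((wavelengths[:, None] - centers[None, :]) / width) ** 2)
    coeffs, *_ = np.linalg.lstsq(B, reflectance, rcond=None)
    return coeffs

wl = np.linspace(400, 1000, 80)                                  # 80 bands (nm)
spectrum = 0.2 + 0.3 * np.exp(-0.5 * ((wl - 800) / 70) ** 2)     # toy spectrum
print(fit_basis(wl, spectrum, centers=np.array([450, 550, 680, 800, 950])))
```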
NASA Astrophysics Data System (ADS)
Abdelsalam, A.; El–Nagdy, M. S.; Badawy, B. M.; Osman, W.; Fayed, M.
2016-06-01
The grey particle production following 60A GeV and 200A GeV 16O interactions with emulsion nuclei is investigated at different centralities. The evaporated target fragment multiplicity is adopted as a centrality parameter. The target size effect is examined over a wide range, where the C, N and O nuclei represent the light target group while the Br and Ag nuclei form the heavy group. In the framework of the nuclear limiting fragmentation hypothesis, the grey particle multiplicity characteristics depend only on the target size and centrality, while the projectile size and energy have no effect. Grey particle production is suggested to arise from a multi-source system, and the emission direction in the 4π space depends on the production source. Either an exponential decay or a Poisson-like peaked curve is the usual characteristic shape of the grey particle multiplicity distribution. The decay shape is suggested to be a characteristic feature of a single source, while the peaked shape reflects a multi-source superposition. The sensitivity to centrality varies from one source to another. The distribution shape is identified at each centrality region according to the associated source contribution. In general, the multiplicity characteristics appear limited with respect to the collision-system centrality for light target nuclei. The selection of the black particle multiplicity as a centrality parameter is successful for collisions with the heavy target nuclei; for collisions with the light target nuclei it may be qualitatively better to choose another centrality parameter.
[Model and analysis of spectropolarimetric BRDF of painted target based on GA-LM method].
Chen, Chao; Zhao, Yong-Qiang; Luo, Li; Pan, Quan; Cheng, Yong-Mei; Wang, Kai
2010-03-01
Microfacet-based models were used to describe the spectropolarimetric BRDF (bidirectional reflectance distribution function) with experimental data. The spectropolarimetric BRDF values of targets were measured by comparison with a standard whiteboard, which was considered Lambertian with a uniform reflectance of up to 98% at arbitrary viewing angles. The relationships between the measured spectropolarimetric BRDF values and the viewing angles, as well as wavelengths in the range of 400-720 nm, were then analyzed in detail. The initial values required by the LM optimization method were difficult to obtain and greatly affected the results. Therefore, an optimization approach combining a genetic algorithm (GA) and Levenberg-Marquardt (LM) was used to retrieve the parameters of the nonlinear models, with the initial values obtained from the GA stage. Simulated experiments were used to test the efficiency of the adopted optimization method and confirmed that it performs well and retrieves the parameters of the nonlinear model efficiently. The correctness of the models was validated with real outdoor sampled data. The parameter retrieved from the DoP model is the refractive index of the measured targets, and the refractive index of targets painted the same color but made of different materials was also obtained. The refractive indices of these two targets are very close, and the slight difference can be attributed to differences in the condition of the painted surfaces rather than the target materials.
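The key methodological point above is seeding Levenberg-Marquardt with a good starting point from a global evolutionary search. The sketch below shows that two-stage pattern on a generic nonlinear curve, with scipy's differential evolution standing in for the GA and a synthetic model rather than the paper's DoP/BRDF expressions.

```python
import numpy as np
from scipy.optimize import differential_evolution, least_squares

def model(x, p):
    """Generic nonlinear model used only to illustrate the two-stage fit."""
    a, b, c = p
    return a * np.exp(-b * x) + c

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 60)
y = model(x, (2.0, 1.3, 0.4)) + 0.02 * rng.standard_normal(x.size)

residuals = lambda p: model(x, p) - y

# Stage 1: global search (differential evolution standing in for the GA)
coarse = differential_evolution(lambda p: np.sum(residuals(p) ** 2),
                                bounds=[(0, 5), (0, 5), (0, 2)], seed=1)
# Stage 2: Levenberg-Marquardt refinement starting from the global optimum
fine = least_squares(residuals, coarse.x, method="lm")
print(coarse.x, fine.x)
```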
Kehrl, W; Sonnemann, U
1998-09-01
Controlled clinical studies on the medical treatment of rhinitis sicca anterior have not yet been published. Therapy recommendations are based on experience rather than on results of controlled clinical studies. The aim of this study was to examine the efficacy and tolerance of a new form of application of dexpanthenol in physiologic saline solution (Nasicur). A randomized comparison of parallel groups was performed. One group was treated with the nasal spray while the control group received a placebo. Scored assessments of nasal breathing resistance and the extent of crust formation were defined as target parameters. Statistical analysis was carried out according to Wilcoxon at alpha ≤ 0.05. Forty-eight outpatients diagnosed with rhinitis sicca anterior were included in this study. Twenty-four received the medication, and 29 were treated with a placebo. The superiority of the dexpanthenol nasal spray over the placebo medication was demonstrated for both target parameters and was clinically relevant and statistically significant. For the other treatment outcome parameters, the placebo spray also showed clinical improvement, and the dexpanthenol nasal spray showed no statistically significant difference compared with placebo. The clinically proven efficacy is underlined by the good tolerance of both treatments, which was validated by the objective rhinoscopy findings. Good compliance was confirmed. The result of this controlled clinical study confirms that the dexpanthenol nasal spray is an effective medicinal treatment of rhinitis sicca anterior and is more effective than common medications.
Development and dosimetry of a small animal lung irradiation platform
McGurk, Ross; Hadley, Caroline; Jackson, Isabel L.; Vujaskovic, Zeljko
2015-01-01
Advances in large scale screening of medical counter measures for radiation-induced normal tissue toxicity are currently hampered by animal irradiation paradigms that are both inefficient and highly variable among institutions. Here, we introduce a novel high-throughput small animal irradiation platform for use in orthovoltage small animal irradiators. We used radiochromic film and metal oxide semiconductor field effect transistor detectors to examine several parameters, including 2D field uniformity, dose rate consistency, and shielding transmission. We posit that this setup will improve efficiency of drug screens by allowing for simultaneous, targeted irradiation of multiple animals, improving efficiency within a single institution. Additionally, we suggest that measurement of the described parameters in all centers conducting counter measure studies will improve the translatability of findings among institutions. We also investigated the use of tissue equivalent phantoms in performing dosimetry measurements for small animal irradiation experiments. Though these phantoms are commonly used in dosimetry, we recorded a significant difference in both the entrance and target tissue dose rates between euthanized rats and mice with implanted detectors and the corresponding phantom measurement. This suggests that measurements using these phantoms may not provide accurate dosimetry for in vivo experiments. Based on these measurements, we propose that this small animal irradiation platform can increase the capacity of animal studies by allowing for more efficient animal irradiation. We also suggest that researchers fully characterize the parameters of whatever radiation setup is in use in order to facilitate better comparison among institutions. PMID:23091878
Bakker, E J; Ravensbergen, N J; Voute, M T; Hoeks, S E; Chonchol, M; Klimek, M; Poldermans, D
2011-09-01
This article describes the rationale and design of the DECREASE-XIII trial, which aims to evaluate the potential of esmolol infusion, an ultra-short-acting beta-blocker, during surgery as an add-on to chronic low-dose beta-blocker therapy to maintain perioperative haemodynamic stability during major vascular surgery. A double-blind, placebo-controlled, randomised trial. A total of 260 vascular surgery patients will be randomised to esmolol or placebo as an add-on to standard medical care, including chronic low-dose beta-blockers. Esmolol is titrated to maintain a heart rate within a target window of 60-80 beats per minute for 24 h from the induction of anaesthesia. Heart rate and ischaemia are assessed by continuous 12-lead electrocardiographic monitoring for 72 h, starting 1 day prior to surgery. The primary outcome measure is duration of heart rate outside the target window during infusion of the study drug. Secondary outcome measures will be the efficacy parameters of occurrence of cardiac ischaemia, troponin T release, myocardial infarction and cardiac death within 30 days after surgery and safety parameters such as the occurrence of stroke and hypotension. This study will provide data on the efficacy of esmolol titration in chronic beta-blocker users for tight heart-rate control and reduction of ischaemia in patients undergoing vascular surgery as well as data on safety parameters. Copyright © 2011 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
Quantitative imaging as cancer biomarker
NASA Astrophysics Data System (ADS)
Mankoff, David A.
2015-03-01
The ability to assay tumor biologic features and the impact of drugs on tumor biology is fundamental to drug development. Advances in our ability to measure genomics, gene expression, protein expression, and cellular biology have led to a host of new targets for anticancer drug therapy. In translating new drugs into clinical trials and clinical practice, these same assays serve to identify patients most likely to benefit from specific anticancer treatments. As cancer therapy becomes more individualized and targeted, there is an increasing need to characterize tumors and identify therapeutic targets to select therapy most likely to be successful in treating the individual patient's cancer. Thus far assays to identify cancer therapeutic targets or anticancer drug pharmacodynamics have been based upon in vitro assay of tissue or blood samples. Advances in molecular imaging, particularly PET, have led to the ability to perform quantitative non-invasive molecular assays. Imaging has traditionally relied on structural and anatomic features to detect cancer and determine its extent. More recently, imaging has expanded to include the ability to image regional biochemistry and molecular biology, often termed molecular imaging. Molecular imaging can be considered an in vivo assay technique, capable of measuring regional tumor biology without perturbing it. This makes molecular imaging a unique tool for cancer drug development, complementary to traditional assay methods, and a potentially powerful method for guiding targeted therapy in clinical trials and clinical practice. The ability to quantify, in absolute measures, regional in vivo biologic parameters strongly supports the use of molecular imaging as a tool to guide therapy. This review summarizes current and future applications of quantitative molecular imaging as a biomarker for cancer therapy, including the use of imaging to (1) identify patients whose tumors express a specific therapeutic target; (2) determine whether the drug reaches the target; (3) identify an early response to treatment; and (4) predict the impact of therapy on long-term outcomes such as survival. The manuscript reviews basic concepts important in the application of molecular imaging to cancer drug therapy, in general, and will discuss specific examples of studies in humans, and highlight future directions, including ongoing multi-center clinical trials using molecular imaging as a cancer biomarker.
Burgess, David S; Hall, Ronald G
2007-07-01
Until the 2002 approval of levofloxacin 750 mg QD, ciprofloxacin was the fluoroquinolone of choice against Pseudomonas aeruginosa infections. This study evaluated the AUC:MIC ratios for ciprofloxacin 400 mg BID and TID and levofloxacin 750 mg QD, all administered intravenously, against P. aeruginosa using a Monte Carlo simulation. Pharmacokinetic data for ciprofloxacin and levofloxacin and 2002 MIC distributions against P. aeruginosa were obtained from studies in healthy volunteers published in the peer-reviewed literature. Pharmacokinetic studies of each agent were identified by separate MEDLINE searches combining the MeSH heading pharmacokinetics with the generic name of the antimicrobial. Only human studies published in English between 1990 and 2001 were included. Included studies also had to meet 3 minimum criteria: evaluation of clinically relevant dosing regimens, use of rigorous study methods, and provision of mean (SD) values for the pharmacokinetic parameters of interest. When multiple studies met these criteria, a single study was selected for each antimicrobial regimen. Pharmacodynamic analysis was performed using a Monte Carlo simulation of 10,000 patients by integrating the pharmacokinetic parameters, their variability, and 2002 MIC distributions for each antimicrobial regimen. The probability of target attainment was determined for each regimen for an AUC:MIC ratio from 0 to 300. A ≥90% probability of target attainment was considered satisfactory. For ciprofloxacin 400 mg TID and levofloxacin 750 mg QD, the AUC:MIC ratio at the corresponding 2002 Clinical Laboratory Standards Institute break points of 1 and 2 microg/mL were 33 and 34, respectively. The probabilities of target attainment for a free AUC:MIC ratio >90 (equivalent to a total AUC:MIC ratio ≥125) were 47% for ciprofloxacin 400 mg BID, 54% for ciprofloxacin 400 mg TID, and 48% for levofloxacin 750 mg QD. When pharmacokinetic data from healthy volunteers and 2002 MIC data were used, none of the simulated fluoroquinolone regimens achieved a high likelihood of target attainment against P. aeruginosa.
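A minimal sketch of the Monte Carlo target-attainment calculation described above: sample exposure (free AUC) from an assumed log-normal distribution, sample MICs from an assumed distribution, and report the fraction of simulated patients reaching the free AUC:MIC ≥ 90 target. None of the numbers below are the study's published values.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# hypothetical pharmacokinetic input: free AUC (mg*h/L), log-normal variability
auc = rng.lognormal(mean=np.log(35.0), sigma=0.25, size=n)

# hypothetical MIC distribution for P. aeruginosa (mg/L : frequency)
mics = np.array([0.125, 0.25, 0.5, 1.0, 2.0, 4.0])
freq = np.array([0.10, 0.20, 0.30, 0.20, 0.12, 0.08])
mic = rng.choice(mics, size=n, p=freq)

# probability of attaining the free AUC:MIC >= 90 target
pta = np.mean(auc / mic >= 90.0)
print(f"Probability of target attainment: {pta:.2f}")
```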
Gould, Paul A; Booth, Cameron; Dauber, Kieran; Ng, Kevin; Claughton, Andrew; Kaye, Gerald C
2016-12-01
This study sought to investigate specific contact force (CF) parameters to guide cavotricuspid isthmus (CTI) ablation and compare the outcome with a historical control cohort. Patients (30) undergoing CTI ablation were enrolled prospectively in the Study cohort and compared with a retrospective Control cohort of 30 patients. Ablation in the Study cohort was performed using CF parameters >10 g and <40 g and a Force Time Integral (FTI) of 800 ± 10 g. The Control cohort underwent traditionally guided CTI ablation. Traditional parameters (electrogram and impedance change) were assessed in both cohorts. All ablations regardless of achieving targets were included in data analysis. Bidirectional CTI block was achieved in all of the Study and 27 of the Control cohort. Atrial flutter recurred in 3 (10%) patients (follow-up 564 ± 212 days) in the study cohort and in 3 (10%) patients (follow-up 804 ± 540 days) in the Control cohort. There were no major complications in either cohort. Traditional parameters correlated poorly with CF parameters. In the Study cohort, flutter recurrence was associated with significantly lower FTI and ablation duration, but was not associated with total average CF. CTI ablation can be safely performed using CF parameters guiding ablation, with similar long-term results to a historical ablation control group. Potentially CF parameters may provide adjunctive information to enable a more efficient CTI ablation. Further research is required to confirm this. © 2016 Wiley Periodicals, Inc.
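The Force-Time Integral targeted above is contact force integrated over the ablation time; a small sketch with a hypothetical force trace is shown below.

```python
import numpy as np

t = np.linspace(0.0, 60.0, 601)           # ablation time, seconds
force = 15.0 + 3.0 * np.sin(0.5 * t)      # hypothetical contact-force trace, grams
fti = np.trapz(force, t)                  # force-time integral, gram-seconds
print(f"FTI = {fti:.0f} gs")
```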
van den Broek, Marcel P H; Groenendaal, Floris; Egberts, Antoine C G; Rademaker, Carin M A
2010-05-01
Examples of clinical applications of therapeutic hypothermia in modern clinical medicine include traumatic cardiac arrest, ischaemic stroke and, more recently, acute perinatal asphyxia in neonates. The exact mechanism of (neuro)protection by hypothermia is unknown. Since most enzymatic processes exhibit temperature dependency, it can be expected that therapeutic hypothermia may cause alterations in both pharmacokinetic and pharmacodynamic parameters, which could result in an increased risk of drug toxicity or therapy failure. Generalizable knowledge about the effect of therapeutic hypothermia on pharmacokinetics and pharmacodynamics could lead to more appropriate dosing and thereby prediction of clinical effects. This article reviews the evidence on the influence of therapeutic hypothermia on individual pharmacokinetic and pharmacodynamic parameters. A literature search was conducted within the PubMed, Embase and Cochrane databases from January 1965 to September 2008, comparing pharmacokinetic and/or pharmacodynamic parameters in hypothermia and normothermia regarding preclinical (animal) and clinical (human) studies. During hypothermia, pharmacokinetic parameters alter, resulting in drug and metabolite accumulation in the plasma for the majority of drugs. Impaired clearance is the most striking effect. Based on impaired clearance, dosages should be decreased considerably, especially for drugs with a low therapeutic index. Hypothetically, high-clearance compounds are affected more than low-clearance compounds because of the additional effect of impaired hepatic blood flow. The volume of distribution also changes, which may lead to therapy failure when it increases and could lead to toxicity when it decreases. The pH-partitioning hypothesis could contribute to the changes in the volumes of distribution for weak bases and acids, depending on their acid dissociation constants and acid-base status. Pharmacodynamic parameters may also alter, depending on the hypothermic regimen, drug target location, pharmacological mechanism and metabolic pathway of inactivation. The pharmacological response changes when target sensitivity alters. Rewarming patients to normothermia can also result in toxicity or therapy failure. The integrated effect of hypothermia on pharmacokinetic and pharmacodynamic properties of individual drugs is unclear. Therefore, therapeutic drug monitoring is currently considered essential for drugs with a low therapeutic index, drugs with active metabolites, high-clearance compounds and drugs that are inactivated by enzymes at the site of effect. Because most of the studies (74%) included in this review contain preclinical data, clinical pharmacokinetic/pharmacodynamic studies are essential for the development of substantiated dose regimens to avoid toxicity and therapy failure in patients treated with hypothermia.
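The dosing consequence of impaired clearance can be illustrated with the basic steady-state relationship Css = infusion rate / clearance. The sketch below uses assumed, purely illustrative numbers; in particular, the 30% clearance reduction is not a value taken from the review.

```python
# Illustrative only: effect of a hypothermia-related clearance reduction on the
# steady-state concentration of a continuously infused drug (Css = R0 / CL).
infusion_rate = 2.0        # mg/h, assumed
cl_normothermia = 0.5      # L/h, assumed
cl_reduction = 0.30        # assumed 30% lower clearance during hypothermia

cl_hypothermia = cl_normothermia * (1 - cl_reduction)
css_normo = infusion_rate / cl_normothermia
css_hypo = infusion_rate / cl_hypothermia

# To keep Css at its normothermic value, the infusion rate would have to be
# scaled down by the same factor as the clearance.
adjusted_rate = infusion_rate * cl_hypothermia / cl_normothermia
print(f"Css: {css_normo:.1f} -> {css_hypo:.1f} mg/L; "
      f"dose rate to hold Css constant: {adjusted_rate:.2f} mg/h")
```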
NASA Astrophysics Data System (ADS)
Seelos, F. P.; Arvidson, R. E.; Guinness, E. A.; Wolff, M. J.
2004-12-01
The Mars Exploration Rover (MER) Panoramic Camera (Pancam) observation strategy included the acquisition of multispectral data sets specifically designed to support the photometric analysis of Martian surface materials (J. R. Johnson, this conference). We report on the numerical inversion of observed Pancam radiance-on-sensor data to determine the best-fit surface bidirectional reflectance parameters as defined by Hapke theory. The model bidirectional reflectance parameters for the Martian surface provide constraints on physical and material properties and allow for the direct comparison of Pancam and orbital data sets. The parameter optimization procedure consists of a spatial multigridding strategy driving a Levenberg-Marquardt nonlinear least squares optimization engine. The forward radiance models and partial derivatives (via finite-difference approximation) are calculated using an implementation of the DIScrete Ordinate Radiative Transfer (DISORT) algorithm with the four-parameter Hapke bidirectional reflectance function and the two-parameter Henyey-Greenstein phase function defining the lower boundary. The DISORT implementation includes a plane-parallel model of the Martian atmosphere derived from a combination of Thermal Emission Spectrometer (TES), Pancam, and Mini-TES atmospheric data acquired near in time to the surface observations. This model accounts for bidirectional illumination from the attenuated solar beam and hemispherical-directional skylight illumination. The initial investigation was limited to treating the materials surrounding the rover as a single surface type, consistent with the spatial resolution of orbital observations. For more detailed analyses the observation geometry can be calculated from the correlation of Pancam stereo pairs (J. M. Soderblom et al., this conference). With improved geometric control, the radiance inversion can be applied to constituent surface material classes such as ripple and dune forms in addition to the soils on the Meridiani plain. Under the assumption of a Henyey-Greenstein phase function, initial results for the Opportunity site suggest a single scattering albedo on the order of 0.25 and a Henyey-Greenstein forward fraction approaching unity at an effective wavelength of 753 nm. As an extension of the photometric modeling, the radiance inversion also provides a means of calculating surface reflectance independent of the radiometric calibration target. This method for determining observed reflectance will provide an additional constraint on the dust deposition model for the calibration target.
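For orientation, the sketch below shows a Levenberg-Marquardt least-squares fit of a highly simplified single-scattering reflectance model (single-term Henyey-Greenstein phase function) to synthetic data. It only illustrates the optimization loop; the actual analysis couples DISORT with the four-parameter Hapke function and an atmospheric model, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares

def hg_phase(g, xi):
    """Single-term Henyey-Greenstein phase function (one common sign convention);
    g = phase angle, xi = asymmetry parameter."""
    return (1 - xi**2) / (1 + 2 * xi * np.cos(g) + xi**2) ** 1.5

def toy_reflectance(params, inc, emi, phase):
    """Single-scattering approximation to bidirectional reflectance: no DISORT,
    no opposition surge, no multiple scattering -- illustration only."""
    w, xi = params
    mu0, mu = np.cos(inc), np.cos(emi)
    return (w / (4 * np.pi)) * mu0 / (mu0 + mu) * hg_phase(phase, xi)

# Synthetic "observations" standing in for Pancam-derived reflectances.
rng = np.random.default_rng(1)
inc = np.deg2rad(rng.uniform(20, 60, 50))
emi = np.deg2rad(rng.uniform(0, 70, 50))
phase = np.deg2rad(rng.uniform(10, 120, 50))
obs = toy_reflectance((0.25, -0.3), inc, emi, phase)
obs = obs + rng.normal(0, 0.001, obs.size)

def residuals(p):
    return toy_reflectance(p, inc, emi, phase) - obs

fit = least_squares(residuals, x0=(0.5, 0.0), method="lm")
print("Best-fit single-scattering albedo and HG asymmetry:", fit.x)
```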
Atomistic Models of General Anesthetics for Use in in Silico Biological Studies
2015-01-01
While small molecules have been used to induce anesthesia in a clinical setting for well over a century, a detailed understanding of the molecular mechanism remains elusive. In this study, we utilize ab initio calculations to develop a novel set of CHARMM-compatible parameters for the ubiquitous modern anesthetics desflurane, isoflurane, sevoflurane, and propofol for use in molecular dynamics (MD) simulations. The parameters generated were rigorously tested against known experimental physicochemical properties including dipole moment, density, enthalpy of vaporization, and free energy of solvation. In all cases, the anesthetic parameters were able to reproduce experimental measurements, signifying the robustness and accuracy of the atomistic models developed. The models were then used to study the interaction of anesthetics with the membrane. Calculation of the potential of mean force for inserting the molecules into a POPC bilayer revealed a distinct energetic minimum of 4–5 kcal/mol relative to aqueous solution at the level of the glycerol backbone in the membrane. The location of this minimum within the membrane suggests that anesthetics partition to the membrane prior to binding their ion channel targets, giving context to the Meyer–Overton correlation. Moreover, MD simulations of these drugs in the membrane give rise to computed membrane structural parameters, including atomic distribution, deuterium order parameters, dipole potential, and lateral stress profile, that indicate partitioning of anesthetics into the membrane at the concentration range studied here, which does not appear to perturb the structural integrity of the lipid bilayer. These results signify that an indirect, membrane-mediated mechanism of channel modulation is unlikely. PMID:25303275
Estimation of marginal costs at existing waste treatment facilities.
Martinez-Sanchez, Veronica; Hulgaard, Tore; Hindsgaul, Claus; Riber, Christian; Kamuk, Bettina; Astrup, Thomas F
2016-04-01
This investigation aims at providing an improved basis for assessing economic consequences of alternative Solid Waste Management (SWM) strategies for existing waste facilities. A bottom-up methodology was developed to determine marginal costs in existing facilities due to changes in the SWM system, based on the determination of average costs in such waste facilities as a function of key facility and waste compositional parameters. The applicability of the method was demonstrated through a case study including two existing Waste-to-Energy (WtE) facilities, one with co-generation of heat and power (CHP) and another with only power generation (Power), affected by diversion strategies of five waste fractions (fibres, plastic, metals, organics and glass), named "target fractions". The study assumed three possible responses to waste diversion in the WtE facilities: (i) biomass was added to maintain a constant thermal load, (ii) Refuse-Derived Fuel (RDF) was included to maintain a constant thermal load, or (iii) no reaction occurred, resulting in a reduced waste throughput without full utilization of the facility capacity. Results demonstrated that marginal costs of diversion from WtE were up to eleven times larger than average costs and dependent on the response in the WtE plant. Marginal costs of diversion were between 39 and 287 € Mg(-1) target fraction when biomass was added in a CHP (from 34 to 303 € Mg(-1) target fraction in the Power-only case), between -2 and 300 € Mg(-1) target fraction when RDF was added in a CHP (from -2 to 294 € Mg(-1) target fraction in the Power-only case) and between 40 and 303 € Mg(-1) target fraction when no reaction happened in a CHP (from 35 to 296 € Mg(-1) target fraction in the Power-only case). Although average costs at WtE facilities were highly influenced by energy selling prices, marginal costs were not (provided a response was initiated at the WtE to keep the utilized thermal capacity constant). Failing to systematically address and include costs in existing waste facilities in decision-making may unintentionally lead to higher overall costs at societal level. To avoid misleading conclusions, economic assessment of alternative SWM solutions should not only consider potential costs associated with alternative treatment but also include marginal costs associated with existing facilities. Copyright © 2016 Elsevier Ltd. All rights reserved.
Strategy Developed for Selecting Optimal Sensors for Monitoring Engine Health
NASA Technical Reports Server (NTRS)
2004-01-01
Sensor indications during rocket engine operation are the primary means of assessing engine performance and health. Effective selection and location of sensors in the operating engine environment enables accurate real-time condition monitoring and rapid engine controller response to mitigate critical fault conditions. These capabilities are crucial to ensure crew safety and mission success. Effective sensor selection also facilitates postflight condition assessment, which contributes to efficient engine maintenance and reduced operating costs. Under the Next Generation Launch Technology program, the NASA Glenn Research Center, in partnership with Rocketdyne Propulsion and Power, has developed a model-based procedure for systematically selecting an optimal sensor suite for assessing rocket engine system health. This optimization process is termed the systematic sensor selection strategy. Engine health management (EHM) systems generally employ multiple diagnostic procedures including data validation, anomaly detection, fault-isolation, and information fusion. The effectiveness of each diagnostic component is affected by the quality, availability, and compatibility of sensor data. Therefore systematic sensor selection is an enabling technology for EHM. Information in three categories is required by the systematic sensor selection strategy. The first category consists of targeted engine fault information; including the description and estimated risk-reduction factor for each identified fault. Risk-reduction factors are used to define and rank the potential merit of timely fault diagnoses. The second category is composed of candidate sensor information; including type, location, and estimated variance in normal operation. The final category includes the definition of fault scenarios characteristic of each targeted engine fault. These scenarios are defined in terms of engine model hardware parameters. Values of these parameters define engine simulations that generate expected sensor values for targeted fault scenarios. Taken together, this information provides an efficient condensation of the engineering experience and engine flow physics needed for sensor selection. The systematic sensor selection strategy is composed of three primary algorithms. The core of the selection process is a genetic algorithm that iteratively improves a defined quality measure of selected sensor suites. A merit algorithm is employed to compute the quality measure for each test sensor suite presented by the selection process. The quality measure is based on the fidelity of fault detection and the level of fault source discrimination provided by the test sensor suite. An inverse engine model, whose function is to derive hardware performance parameters from sensor data, is an integral part of the merit algorithm. The final component is a statistical evaluation algorithm that characterizes the impact of interference effects, such as control-induced sensor variation and sensor noise, on the probability of fault detection and isolation for optimal and near-optimal sensor suites.
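A toy genetic-algorithm loop of the kind at the core of the selection process is sketched below. The merit function, suite size and sensor count are invented placeholders; the real merit algorithm scores fault-detection fidelity and fault-source discrimination through an inverse engine model.

```python
import random

random.seed(0)
N_SENSORS = 20        # candidate sensors (hypothetical)
SUITE_SIZE = 6        # sensors allowed in the flight suite (hypothetical)

def merit(suite):
    """Placeholder merit function: an arbitrary synthetic score, standing in for
    the fault-detection / fault-discrimination merit algorithm."""
    return sum((s % 7) + 0.1 * s for s in suite)

def random_suite():
    return tuple(sorted(random.sample(range(N_SENSORS), SUITE_SIZE)))

def crossover(a, b):
    # Child draws its sensors from the union of both parents.
    pool = list(set(a) | set(b))
    return tuple(sorted(random.sample(pool, SUITE_SIZE)))

def mutate(suite, rate=0.2):
    suite = list(suite)
    if random.random() < rate:
        i = random.randrange(SUITE_SIZE)
        suite[i] = random.choice([s for s in range(N_SENSORS) if s not in suite])
    return tuple(sorted(suite))

population = [random_suite() for _ in range(40)]
for generation in range(50):
    population.sort(key=merit, reverse=True)
    parents = population[:20]                      # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children

best = max(population, key=merit)
print("Best sensor suite found:", best, "merit:", round(merit(best), 2))
```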
Targeted estimation of nuisance parameters to obtain valid statistical inference.
van der Laan, Mark J
2014-01-01
In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. As a particular special case, we also demonstrate the required targeting of the propensity score for the inverse probability of treatment weighted estimator using super-learning to fit the propensity score.
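As a very reduced illustration of the role of the nuisance parameter, the sketch below fits a propensity score and forms the plain inverse-probability-of-treatment-weighted (IPTW) estimate of the treatment-specific mean on simulated data. It uses ordinary logistic regression rather than super-learning and omits the additional targeting and C-TMLE machinery that the paper is actually about.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
W = rng.normal(size=(n, 3))                        # baseline covariates
p_true = 1 / (1 + np.exp(-(0.4 * W[:, 0] - 0.3 * W[:, 1])))
A = rng.binomial(1, p_true)                        # binary treatment
Y = 1.0 + 0.5 * A + 0.8 * W[:, 0] + rng.normal(scale=1.0, size=n)

# Nuisance parameter: the propensity score g(W) = P(A=1 | W).
g_hat = LogisticRegression().fit(W, A).predict_proba(W)[:, 1]

# IPTW estimator of the treatment-specific mean E[Y(1)] (true value 1.5 here).
psi_iptw = np.mean(A * Y / g_hat)
print("IPTW estimate of E[Y(1)]:", round(psi_iptw, 3))
```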
Polarimetric LIDAR with FRI sampling for target characterization
NASA Astrophysics Data System (ADS)
Wijerathna, Erandi; Creusere, Charles D.; Voelz, David; Castorena, Juan
2017-09-01
Polarimetric LIDAR is a significant tool for current remote sensing applications. In addition, measurement of the full waveform of the LIDAR echo provides improved ranging and target discrimination, although data storage volume in this approach can be problematic. In the work presented here, we investigated the practical issues related to the implementation of a full-waveform LIDAR system to identify polarization characteristics of multiple targets within the footprint of the illumination beam. This work was carried out on a laboratory LIDAR testbed that features a flexible arrangement of targets and the ability to change the target polarization characteristics. Targets with different retardance characteristics were illuminated with a linearly polarized laser beam and the return pulse intensities were analyzed by rotating a linear analyzer polarizer in front of a high-speed detector. Additionally, we explored the applicability and the limitations of applying a sparse sampling approach based on Finite Rate of Innovation (FRI) to compress and recover the characteristic parameters of the pulses reflected from the targets. The pulse parameter values extracted by the FRI analysis were accurate, and we successfully distinguished the polarimetric characteristics and the range of multiple targets at different depths within the same beam footprint. We also demonstrated the recovery of an unknown target retardance value from the echoes by applying a Mueller matrix system model.
NASA Astrophysics Data System (ADS)
Sosnin, A. N.; Shorin, V. S.
1989-10-01
Fast neutron cross-section measurements using quasimonoenergetic (p,n) neutron sources require the determination of the average neutron spectrum parameters such as the mean energy ⟨E⟩ and the variance D. In this paper a simple model has been considered for determining the ⟨E⟩ and D values. The approach takes into account the actual layout of the solid tritium target and the irradiated sample. It is valid for targets with a thickness of less than 1 mg/cm². It has been shown that the first and the second tritium distribution function moments ⟨x⟩ and ⟨x²⟩ are connected by simple analytical expressions with average characteristics of the neutron yield measured above the (p,n) reaction threshold energy. Our results are compared with accurate calculations for Sc-T targets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rozanov, V. B., E-mail: rozanov@sci.lebedev.ru; Vergunova, G. A., E-mail: verg@sci.lebedev.ru
2015-11-15
The possibility of the analysis and interpretation of the reported experiments with the megajoule National Ignition Facility (NIF) laser on the compression of capsules in indirect-irradiation targets by means of the one-dimensional RADIAN program in the spherical geometry has been studied. The problem of the energy balance in a target and the determination of the laser energy that should be used in the spherical model of the target has been considered. The results of action of pulses differing in energy and time profile ("low-foot" and "high-foot" regimes) have been analyzed. The parameters of the compression of targets with a high-density carbon ablator have been obtained. The results of the simulations are in satisfactory agreement with the measurements and correspond to the range of the observed parameters. The set of compared results can be expanded, in particular, for a more detailed determination of the parameters of a target near the maximum compression of the capsule. The physical foundation of the possibility of using the one-dimensional description is the necessity of the closeness of the last stage of the compression of the capsule to a one-dimensional process. The one-dimensional simulation of the compression of the capsule can be useful in establishing the boundary behind which two-dimensional and three-dimensional simulation should be used.
Space-based infrared sensors of space target imaging effect analysis
NASA Astrophysics Data System (ADS)
Dai, Huayu; Zhang, Yasheng; Zhou, Haijun; Zhao, Shuang
2018-02-01
Target identification is one of the core problems of ballistic missile defense systems, and infrared imaging simulation is an important means of target detection and recognition. This paper first establishes a point-source imaging model of ballistic targets in the planet's atmosphere for space-based infrared sensors; it then simulates the infrared imaging of atmospheric ballistic targets from two aspects, the space-based sensor camera parameters and the target characteristics, and analyzes the imaging effects of camera line-of-sight jitter, camera system noise and different wavebands on the target.
Bowen, Spencer L.; Byars, Larry G.; Michel, Christian J.; Chonde, Daniel B.; Catana, Ciprian
2014-01-01
Kinetic parameters estimated from dynamic 18F-fluorodeoxyglucose PET acquisitions have been used frequently to assess brain function in humans. Neglecting partial volume correction (PVC) for a dynamic series has been shown to produce significant bias in model estimates. Accurate PVC requires a space-variant model describing the reconstructed image spatial point spread function (PSF) that accounts for resolution limitations, including non-uniformities across the field of view due to the parallax effect. For OSEM, image resolution convergence is local and influenced significantly by the number of iterations, the count density, and background-to-target ratio. As both count density and background-to-target values for a brain structure can change during a dynamic scan, the local image resolution may also concurrently vary. When PVC is applied post-reconstruction the kinetic parameter estimates may be biased when neglecting the frame-dependent resolution. We explored the influence of the PVC method and implementation on kinetic parameters estimated by fitting 18F-fluorodeoxyglucose dynamic data acquired on a dedicated brain PET scanner and reconstructed with and without PSF modelling in the OSEM algorithm. The performance of several PVC algorithms was quantified with a phantom experiment, an anthropomorphic Monte Carlo simulation, and a patient scan. Using the last frame reconstructed image only for regional spread function (RSF) generation, as opposed to computing RSFs for each frame independently, and applying perturbation GTM PVC with PSF based OSEM produced the lowest magnitude bias kinetic parameter estimates in most instances, although at the cost of increased noise compared to the PVC methods utilizing conventional OSEM. Use of the last frame RSFs for PVC with no PSF modelling in the OSEM algorithm produced the lowest bias in CMRGlc estimates, although by less than 5% in most cases compared to the other PVC methods. The results indicate that the PVC implementation and choice of PSF modelling in the reconstruction can significantly impact model parameters. PMID:24052021
Simulation of the Focal Spot of the Accelerator Bremsstrahlung Radiation
NASA Astrophysics Data System (ADS)
Sorokin, V.; Bespalov, V.
2016-06-01
Testing of thick-walled objects by bremsstrahlung radiation (BR) is primarily performed via high-energy quanta. The testing parameters are specified by the focal spot size of the high-energy bremsstrahlung radiation. In determining the focal spot size, the high-energy BR portion cannot be experimentally separated from the low-energy BR to use high-energy quanta only. The patterns of BR focal spot formation have been investigated via statistical modeling of the radiation transfer in the target material. The distributions of BR quanta emitted by the target for different energies and emission angles under normal distribution of the accelerated electrons bombarding the target have been obtained, and the ratio of the distribution parameters has been determined.
A guidance law for hypersonic descent to a point
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eisler, G.R.; Hull, D.G.
1992-05-01
A neighboring extremal control problem is formulated for a hypersonic glider to execute a maximum-terminal-velocity descent to a stationary target. The resulting two-part, feedback control scheme initially solves a nonlinear algebraic problem to generate a nominal trajectory to the target altitude. Secondly, a neighboring optimal path computation about the nominal provides the lift and side-force perturbations necessary to achieve the target downrange and crossrange. On-line feedback simulations of the proposed scheme and a form of proportional navigation are compared with an off-line parameter optimization method. The neighboring optimal terminal velocity compares very well with the parameter optimization solution and is far superior to proportional navigation. 8 refs.
Sherwood, Carly A; Eastham, Ashley; Lee, Lik Wee; Risler, Jenni; Mirzaei, Hamid; Falkner, Jayson A; Martin, Daniel B
2009-07-01
Multiple reaction monitoring (MRM) is a highly sensitive method of targeted mass spectrometry (MS) that can be used to selectively detect and quantify peptides based on the screening of specified precursor peptide-to-fragment ion transitions. MRM-MS sensitivity depends critically on the tuning of instrument parameters, such as collision energy and cone voltage, for the generation of maximal product ion signal. Although generalized equations and values exist for such instrument parameters, there is no clear indication that optimal signal can be reliably produced for all types of MRM transitions using such an algorithmic approach. To address this issue, we have devised a workflow functional on both Waters Quattro Premier and ABI 4000 QTRAP triple quadrupole instruments that allows rapid determination of the optimal value of any programmable instrument parameter for each MRM transition. Here, we demonstrate the strategy for the optimizations of collision energy and cone voltage, but the method could be applied to other instrument parameters, such as declustering potential, as well. The workflow makes use of the incremental adjustment of the precursor and product m/z values at the hundredth decimal place to create a series of MRM targets at different collision energies that can be cycled through in rapid succession within a single run, avoiding any run-to-run variability in execution or comparison. Results are easily visualized and quantified using the MRM software package Mr. M to determine the optimal instrument parameters for each transition.
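A short sketch of the transition-list trick is given below; the precursor/product m/z values and the collision-energy range are arbitrary examples, and the output would still need to be written in whatever import format the instrument software expects.

```python
# Each copy of an MRM transition gets a different collision energy, and the
# precursor/product m/z are nudged at the hundredth decimal place so the
# instrument treats the copies as distinct targets within a single run.

def build_ce_ramp(precursor_mz, product_mz, ce_values):
    rows = []
    for i, ce in enumerate(ce_values):
        offset = i * 0.01                     # hundredth-place increment
        rows.append({
            "precursor_mz": round(precursor_mz + offset, 2),
            "product_mz": round(product_mz + offset, 2),
            "collision_energy": ce,
        })
    return rows

ramp = build_ce_ramp(547.32, 704.38, ce_values=range(10, 41, 2))
for row in ramp[:5]:
    print(row)
```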
NASA Astrophysics Data System (ADS)
Tel, Eyyup; Sahan, Muhittin; Alkanli, Hasancan; Sahan, Halide; Yigit, Mustafa
2017-09-01
In this study, the (n,α) nuclear reaction cross section was calculated for 41K target nuclei using neutron and proton density parameters obtained with the SKa, SKb, SLy5, and SLy6 Skyrme forces. The theoretical cross section for the (n,α) nuclear reaction was obtained using a formula developed by Tel et al. (2008). Results are compared with experimental data from EXFOR. The calculated results from the formula were found to be in close agreement with the experimental data.
Critical Care Management Focused on Optimizing Brain Function After Cardiac Arrest.
Nakashima, Ryuta; Hifumi, Toru; Kawakita, Kenya; Okazaki, Tomoya; Egawa, Satoshi; Inoue, Akihiko; Seo, Ryutaro; Inagaki, Nobuhiro; Kuroda, Yasuhiro
2017-03-24
The discussion of neurocritical care management in post-cardiac arrest syndrome (PCAS) has generally focused on target values used for targeted temperature management (TTM). There has been less attention paid to target values for systemic and cerebral parameters to minimize secondary brain damage in PCAS. And the neurologic indications for TTM to produce a favorable neurologic outcome remain to be determined. Critical care management of PCAS patients is fundamental and essential for both cardiologists and general intensivists to improve neurologic outcome, because definitive therapy of PCAS includes both special management of the cause of cardiac arrest, such as coronary intervention to ischemic heart disease, and intensive management of the results of cardiac arrest, such as ventilation strategies to avoid brain ischemia. We reviewed the literature and the latest research about the following issues and propose practical care recommendations. Issues are (1) prediction of TTM candidate on admission, (2) cerebral blood flow and metabolism and target value of them, (3) seizure management using continuous electroencephalography, (4) target value of hemodynamic stabilization and its method, (5) management and analysis of respiration, (6) sedation and its monitoring, (7) shivering control and its monitoring, and (8) glucose management. We hope to establish standards of neurocritical care to optimize brain function and produce a favorable neurologic outcome.
Improved targeted immunization strategies based on two rounds of selection
NASA Astrophysics Data System (ADS)
Xia, Ling-Ling; Song, Yu-Rong; Li, Chan-Chan; Jiang, Guo-Ping
2018-04-01
In high-degree targeted immunization, where the number of vaccine doses is limited, when more than one node with the same degree meets the requirement of high degree centrality, how can we choose a certain number of nodes from those nodes so that the number of immunized nodes does not exceed the limit? In this paper, we introduce a new idea derived from the selection process of a second-round exam to solve this problem and then propose three improved targeted immunization strategies. In these proposed strategies, the immunized nodes are selected through two rounds of selection: we enlarge the quota of the first-round selection according to the evaluation criterion of degree centrality and then consider another characteristic parameter of each node, such as its clustering coefficient, betweenness or closeness, to help choose the targeted nodes in the second-round selection. To validate the effectiveness of the proposed strategies, we compare them with the degree immunizations, including the high-degree targeted and the high-degree adaptive immunizations, using two metrics: the size of the largest connected component of the immunized network and the number of infected nodes. Simulation results demonstrate that the proposed strategies based on two rounds of sorting are effective for heterogeneous networks and that their immunization effects are better than those of the degree immunizations.
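The two-round idea can be sketched with networkx as below, here using degree for the first round and the clustering coefficient for the second; the enlarged first-round quota and the choice of ranking direction for the secondary metric are illustrative assumptions rather than the exact rules of the paper.

```python
import networkx as nx

def two_round_targets(G, k, expand=2):
    """First round: take expand*k highest-degree nodes; second round: keep the k
    of those with the highest clustering coefficient. Betweenness or closeness
    could be substituted as the second-round metric."""
    first = sorted(G.nodes, key=G.degree, reverse=True)[: expand * k]
    cc = nx.clustering(G)
    return sorted(first, key=lambda n: cc[n], reverse=True)[:k]

G = nx.barabasi_albert_graph(1000, 3, seed=42)   # heterogeneous test network
immunized = two_round_targets(G, k=50)
G.remove_nodes_from(immunized)
largest = max(nx.connected_components(G), key=len)
print("Size of largest connected component after immunization:", len(largest))
```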
Lee, Kyoungyeul; Lee, Minho; Kim, Dongsup
2017-12-28
The identification of target molecules is important for understanding the mechanism of "target deconvolution" in phenotypic screening and "polypharmacology" of drugs. Because conventional methods of identifying targets require time and cost, in-silico target identification has been considered an alternative solution. One of the well-known in-silico methods of identifying targets involves structure activity relationships (SARs). SARs have advantages such as low computational cost and high feasibility; however, the data dependency in the SAR approach causes imbalance of active data and ambiguity of inactive data throughout targets. We developed a ligand-based virtual screening model comprising 1121 target SAR models built using a random forest algorithm. The performance of each target model was tested by employing the ROC curve and the mean score using an internal five-fold cross validation. Moreover, recall rates for top-k targets were calculated to assess the performance of target ranking. A benchmark model using an optimized sampling method and parameters was examined via external validation set. The result shows recall rates of 67.6% and 73.9% for top-11 (1% of the total targets) and top-33, respectively. We provide a website for users to search the top-k targets for query ligands available publicly at http://rfqsar.kaist.ac.kr . The target models that we built can be used for both predicting the activity of ligands toward each target and ranking candidate targets for a query ligand using a unified scoring scheme. The scores are additionally fitted to the probability so that users can estimate how likely a ligand-target interaction is active. The user interface of our web site is user friendly and intuitive, offering useful information and cross references.
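The core scoring-and-ranking step can be sketched as follows with scikit-learn, using synthetic fingerprints and a handful of targets in place of the 1121 curated SAR models; the unified score is simply the random forest's predicted probability of activity.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for per-target SAR data: one binary activity model per
# target, trained on fingerprint-like feature vectors. Everything is illustrative.
n_targets, n_bits = 5, 128
models = {}
for t in range(n_targets):
    X = rng.integers(0, 2, size=(300, n_bits))
    y = (X[:, t] & X[:, t + 1]).astype(int)          # arbitrary synthetic "activity"
    models[t] = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def rank_targets(fingerprint):
    """Score a query ligand against every target model and rank targets by the
    predicted probability of activity (a unified scoring scheme)."""
    scores = {t: m.predict_proba(fingerprint.reshape(1, -1))[0, 1]
              for t, m in models.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

query = rng.integers(0, 2, size=n_bits)
print("Top-3 predicted targets:", rank_targets(query)[:3])
```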
Thermonuclear targets for direct-drive ignition by a megajoule laser pulse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bel’kov, S. A.; Bondarenko, S. V.; Vergunova, G. A.
2015-10-15
Central ignition of a thin two-layer-shell fusion target that is directly driven by a 2-MJ profiled pulse of Nd laser second-harmonic radiation has been studied. The parameters of the target were selected so as to provide effective acceleration of the shell toward the center, which was sufficient for the onset of ignition under conditions of increased hydrodynamic stability of the ablator acceleration and compression. The aspect ratio of the inner deuterium-tritium layer of the shell does not exceed 15, provided that a major part (above 75%) of the outer layer (plastic ablator) is evaporated by the instant of maximum compression. The investigation is based on two series of numerical calculations that were performed using one-dimensional (1D) hydrodynamic codes. The first 1D code was used to calculate the absorption of the profiled laser-radiation pulse (including calculation of the total absorption coefficient with allowance for the inverse bremsstrahlung and resonance mechanisms) and the spatial distribution of target heating for a real geometry of irradiation using 192 laser beams in a scheme of focusing with a cubo-octahedral symmetry. The second 1D code was used for simulating the total cycle of target evolution under the action of absorbed laser radiation and for determining the thermonuclear gain that was achieved with a given target.
Rejman, Marek; Bilewski, Marek; Szczepan, Stefan; Klarowicz, Andrzej; Rudnik, Daria; Maćkała, Krzysztof
2017-01-01
The aim of this study was to analyse changes taking place within selected kinematic parameters of the swimming start after completing a six-week plyometric training programme, assuming that take-off power training improves start effectiveness. The experiment included nine male swimmers. In the pre-test the swimmers performed three starts, focusing on the best performance. Next, a plyometric training programme, adapted from sprint running, was introduced in order to increase the power of the lower extremities. The programme entailed 75-minute sessions conducted twice a week. Afterwards, a post-test was performed, analogous to the pre-test. Spatio-temporal data on the structure of the swimming start were gathered from video recordings of the swimmer above and under water. The impulses triggered by the plyometric training contributed to a shorter start time (the main measure of start effectiveness) and glide time, as well as to increased average take-off, flight and glide velocities, including instantaneous take-off, entry and glide velocities. The glide angle decreased. The changes in selected parameters of the swimming start, together with their confirmed diagnostic value, identified the areas susceptible to plyometric training and suggested that the applied plyometric training programme aimed at increasing take-off power enhances the effectiveness of the swimming start.
NASA Astrophysics Data System (ADS)
Woelfl, A. C.; Jencks, J.; Johnston, G.; Varner, J. D.; Devey, C. W.
2017-12-01
Human activities are rapidly expanding into the oceans, yet detailed bathymetric maps that would permit governments to formulate sensible usage rules do not exist for most of the seafloor. Changing this situation will require an enormous international mapping effort. To ensure that this effort is directed towards the regions most in need of mapping, we need to know which areas have already been mapped and which areas are potentially most interesting. Despite various mapping efforts in recent years, large parts of the Atlantic still lack detailed bathymetric information. To successfully plan future mapping efforts to fill these gaps, knowledge of current data coverage is imperative to avoid duplication of effort. While certain datasets are publicly available online (e.g. NOAA's NCEI, EMODnet, IHO-DCDB, LDEO's GMRT), many are not. However, with the limited information we do have at hand, the question remains: where should we map next? And what criteria should we take into account? In 2016, a study was taken on as part of the efforts of the Atlantic Seabed Mapping International Working Group (ASMIWG). The ASMIWG, established by the Tri-Partite Galway Statement Implementation Committee, was tasked to develop a cohesive seabed mapping strategy for the Atlantic Ocean. The aim of our study was to develop a reproducible process for identifying and evaluating potential target areas within the North Atlantic that represent suitable sites for future bathymetric surveys. The sites were selected by applying a GIS-based suitability analysis that included specific user-group-based parameters of the marine environment. Furthermore, information regarding current data coverage was gathered and taken into account in the selection process. The results reveal the suitability of sites within the North Atlantic based on the selected criteria. The three potential target sites should be seen as flexible suggestions for future mapping initiatives rather than a rigid, defined set of areas. This methodology can be adjusted to other areas of interest and can include a variety of parameters based on stakeholder interest. Further, this work only included accessible and displayable information about multibeam data coverage and would certainly benefit from more easily available and discoverable data sets, or at least from location information.
Sudarmadji, Novella; Chua, Chee Kai; Leong, Kah Fai
2012-01-01
Computer-aided system for tissue scaffolds (CASTS) is an in-house parametric library of polyhedral units that can be assembled into customized tissue scaffolds. Thirteen polyhedral configurations are available to select, depending on the biological and mechanical requirements of the target tissue/organ. Input parameters include the individual polyhedral units and overall scaffold block as well as the scaffold strut diameter. Taking advantage of its repeatability and reproducibility, the scaffold file is then converted into .STL file and fabricated using selective laser sintering, a rapid prototyping system. CASTS seeks to fulfill anatomical, biological, and mechanical requirements of the target tissue/organ. Customized anatomical scaffold shape is achieved through a Boolean operation between the scaffold block and the tissue defect image. Biological requirements, such as scaffold pore size and porosity, are unique for different type of cells. Matching mechanical properties, such as stiffness and strength, between the scaffold and target organ is very important, particularly in the regeneration of load-bearing organ, i.e., bone. This includes mimicking the compressive stiffness variation across the bone to prevent stress shielding and ensuring that the scaffold can withstand the load normally borne by the bone. The stiffness variation is tailored by adjusting the scaffold porosity based on the porosity-stiffness relationship of the CASTS scaffolds. Two types of functional gradients based on the gradient direction include radial and axial/linear gradient. Radial gradient is useful in the case of regenerating a section of long bones while the gradient in linear direction can be used in short or irregular bones. Stiffness gradient in the radial direction is achieved by using cylindrical unit cells arranged in a concentric manner, in which the porosity decreases from the center of the structure toward the outside radius, making the scaffold stiffer at the outer radius and more porous at the center of the scaffold. On the other hand, the linear gradient is accomplished by varying the strut diameter along the gradient direction. The parameters to vary in both gradient types are the strut diameter, the unit cell dimension, and the boundaries between two scaffold regions with different stiffness.
NASA Astrophysics Data System (ADS)
Batac, Rene C.; Paguirigan, Antonino A., Jr.; Tarun, Anjali B.; Longjas, Anthony G.
2017-04-01
We propose a cellular automata model for earthquake occurrences patterned after the sandpile model of self-organized criticality (SOC). By incorporating a single parameter describing the probability to target the most susceptible site, the model successfully reproduces the statistical signatures of seismicity. The energy distributions closely follow power-law probability density functions (PDFs) with a scaling exponent of around -1.6, consistent with the expectations of the Gutenberg-Richter (GR) law, for a wide range of the targeted triggering probability values. Additionally, for targeted triggering probabilities within the range 0.004-0.007, we observe spatiotemporal distributions that show bimodal behavior, which was not observed previously for the original sandpile. For this critical range of probability values, model statistics show remarkable agreement with long-period empirical data from earthquakes from different seismogenic regions. The proposed model has key advantages, the foremost of which is the fact that it simultaneously captures the energy, space, and time statistics of earthquakes by introducing just a single parameter, while adding minimal parameters to the simple rules of the sandpile. We believe that the critical targeting probability parameterizes the memory that is inherently present in earthquake-generating regions.
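A minimal version of a targeted-triggering sandpile is sketched below: with probability p the added grain goes to the currently most loaded ("most susceptible") site, otherwise to a random site. The toppling rule here is the standard BTW one and the avalanche size stands in for released energy, so details may differ from the model actually used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
L, Z_CRIT = 30, 4            # lattice size and toppling threshold (BTW-style)
P_TARGET = 0.005             # probability of targeting the most susceptible site
grid = rng.integers(0, Z_CRIT, size=(L, L))

def drive_and_relax(grid):
    """Add one grain, relax the lattice, and return the avalanche size."""
    if rng.random() < P_TARGET:
        i, j = np.unravel_index(np.argmax(grid), grid.shape)   # most loaded site
    else:
        i, j = rng.integers(0, L, size=2)                      # random site
    grid[i, j] += 1
    size = 0
    while True:
        unstable = np.argwhere(grid >= Z_CRIT)
        if unstable.size == 0:
            return size
        for i, j in unstable:
            grid[i, j] -= 4
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L:     # open boundaries dissipate
                    grid[ni, nj] += 1

sizes = [drive_and_relax(grid) for _ in range(5000)]
events = [s for s in sizes if s > 0]
print("Events:", len(events), "max avalanche size:", max(events))
```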
Computing Information Value from RDF Graph Properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
al-Saffar, Sinan; Heileman, Gregory
2010-11-08
Information value has been implicitly utilized and mostly non-subjectively computed in information retrieval (IR) systems. We explicitly define and compute the value of an information piece as a function of two parameters, the first is the potential semantic impact the target information can subjectively have on its recipient's world-knowledge, and the second parameter is trust in the information source. We model these two parameters as properties of RDF graphs. Two graphs are constructed, a target graph representing the semantics of the target body of information and a context graph representing the context of the consumer of that information. We compute information value subjectively as a function of both potential change to the context graph (impact) and the overlap between the two graphs (trust). Graph change is computed as a graph edit distance measuring the dissimilarity between the context graph before and after the learning of the target graph. A particular application of this subjective information valuation is in the construction of a personalized ranking component in Web search engines. Based on our method, we construct a Web re-ranking system that personalizes the information experience for the information-consumer.
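A toy version of the two ingredients, with triples modelled as plain (subject, predicate, object) tuples, is sketched below; set overlap and set difference are used as rough stand-ins for the graph overlap and the graph edit distance, and the way the two factors are combined into a single value is an assumption.

```python
# Toy illustration only: the real system works on RDF graphs and computes a
# graph edit distance; here set operations approximate both quantities.

context = {                               # consumer's world-knowledge (assumed)
    ("sensor", "type", "lidar"),
    ("lidar", "emits", "laser_pulse"),
}
target = {                                # incoming body of information (assumed)
    ("lidar", "emits", "laser_pulse"),
    ("lidar", "measures", "backscatter"),
    ("backscatter", "dependsOn", "polarization"),
}

overlap = len(context & target) / len(target)   # proxy for trust
impact = len(target - context)                  # proxy for context-graph change
value = impact * overlap                        # one possible combination, assumed
print(f"trust ~ {overlap:.2f}, impact = {impact}, information value ~ {value:.2f}")
```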
2010-02-01
calculated the target strength of the most intense partial wave, a quantity termed the "effective target strength" by Kaduchak and Loeffler (1998).
Enhanced technologies for unattended ground sensor systems
NASA Astrophysics Data System (ADS)
Hartup, David C.
2010-04-01
Progress in several technical areas is being leveraged to advantage in Unattended Ground Sensor (UGS) systems. This paper discusses advanced technologies that are appropriate for use in UGS systems. While some technologies provide evolutionary improvements, other technologies result in revolutionary performance advancements for UGS systems. Some specific technologies discussed include wireless cameras and viewers, commercial PDA-based system programmers and monitors, new materials and techniques for packaging improvements, low power cueing sensor radios, advanced long-haul terrestrial and SATCOM radios, and networked communications. Other technologies covered include advanced target detection algorithms, high pixel count cameras for license plate and facial recognition, small cameras that provide large stand-off distances, video transmissions of target activity instead of still images, sensor fusion algorithms, and control center hardware. The impact of each technology on the overall UGS system architecture is discussed, along with the advantages provided to UGS system users. Areas of analysis include required camera parameters as a function of stand-off distance for license plate and facial recognition applications, power consumption for wireless cameras and viewers, sensor fusion communication requirements, and requirements to practically implement video transmission through UGS systems. Examples of devices that have already been fielded using technology from several of these areas are given.
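As an example of the camera-parameter analysis mentioned above, the sketch below estimates how many pixels a camera places across a licence plate at several stand-off distances using the pinhole relation GSD = pixel pitch x range / focal length; all numbers are illustrative, not requirements of any fielded UGS system.

```python
# Back-of-the-envelope camera sizing for licence plate recognition at stand-off.
PIXEL_PITCH = 3.45e-6      # m, assumed sensor pixel pitch
FOCAL_LENGTH = 0.10        # m, assumed lens focal length
PLATE_WIDTH = 0.52         # m, typical licence plate width
PIXELS_NEEDED = 130        # assumed pixels across the plate for reliable reading

for standoff in (25, 50, 100, 200):                    # metres
    gsd = PIXEL_PITCH * standoff / FOCAL_LENGTH        # metres per pixel on target
    pixels_on_plate = PLATE_WIDTH / gsd
    print(f"{standoff:4d} m: {gsd * 1000:5.2f} mm/pixel, "
          f"{pixels_on_plate:6.0f} px across plate "
          f"({'OK' if pixels_on_plate >= PIXELS_NEEDED else 'too few'})")
```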
Abdelaziz, Hadeer M; Gaber, Mohamed; Abd-Elwakil, Mahmoud M; Mabrouk, Moustafa T; Elgohary, Mayada M; Kamel, Nayra M; Kabary, Dalia M; Freag, May S; Samaha, Magda W; Mortada, Sana M; Elkhodairy, Kadria A; Fang, Jia-You; Elzoghby, Ahmed O
2018-01-10
There is a progressive evolution in the use of inhalable drug delivery systems (DDSs) for lung cancer therapy. The inhalation route offers many advantages, being a non-invasive method of drug administration that also provides localized delivery of anti-cancer drugs to tumor tissue. This article reviews various inhalable colloidal systems studied for tumor-targeted drug delivery, including polymeric, lipid, hybrid and inorganic nanocarriers. The active targeting approaches for enhanced delivery of nanocarriers to lung cancer cells are illustrated. This article also reviews the recent advances in inhalable microparticle-based drug delivery systems for lung cancer therapy, including bioresponsive, large porous, solid lipid and drug-complex microparticles. The possible strategies to improve the aerosolization behavior and maintain the critical physicochemical parameters for efficient delivery of drugs deep into the lungs are also discussed. Therefore, a strong emphasis is placed on approaches that combine the merits of both nanocarriers and microparticles, including inhalable nanocomposites and nanoaggregates, and on the optimization of such formulations using the proper techniques and carriers. Finally, the toxicological behavior and market potential of inhalable anti-cancer drug delivery systems are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
Wang, Hao; Jiang, Jie; Zhang, Guangjun
2017-04-21
The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot simultaneously image the target celestial body and stars well-exposed because their irradiance difference is generally large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve the said problem. This study analyzes and demonstrates the feasibility of simultaneously imaging the target celestial body and stars well-exposed within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established when the WCA scheme is applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed model. Optimal exposure parameters are also derived by conducting Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratorial and night sky experiments are performed to validate the correctness of the proposed model and optimal exposure parameters.
Impact-parameter dependence of the energy loss of fast molecular clusters in hydrogen
NASA Astrophysics Data System (ADS)
Fadanelli, R. C.; Grande, P. L.; Schiwietz, G.
2008-03-01
The electronic energy loss of molecular clusters as a function of impact parameter is far less understood than atomic energy losses. For instance, there are no analytical expressions for the energy loss as a function of impact parameter for cluster ions. In this work, we describe two procedures to evaluate the combined energy loss of molecules: ab initio calculations within the semiclassical approximation and the coupled-channels method using atomic orbitals; and simplified models for the electronic cluster energy loss as a function of the impact parameter, namely the molecular perturbative convolution approximation (MPCA, an extension of the corresponding atomic model PCA) and the molecular unitary convolution approximation (MUCA, a molecular extension of the previous unitary convolution approximation UCA). In this work, an improved ansatz for MPCA is proposed, extending its validity to very compact clusters. For the simplified models, the physical inputs are the oscillator strengths of the target atoms and the target-electron density. The results from these models applied to an atomic hydrogen target yield remarkable agreement with their corresponding ab initio counterparts for different angles between the cluster axis and the velocity direction at specific energies of 150 and 300 keV/u.
Wu, Ling; Liu, Xiang-Nan; Zhou, Bo-Tian; Liu, Chuan-Hao; Li, Lu-Feng
2012-12-01
This study analyzed the sensitivities of three vegetation biochemical parameters [chlorophyll content (Cab), leaf water content (Cw), and leaf area index (LAI)] to changes in canopy reflectance, considering the effects of each parameter on different wavelength regions of the canopy reflectance, and selected three vegetation indices as the optimization comparison targets of the cost function. Then, Cab, Cw, and LAI were estimated based on the particle swarm optimization algorithm and the PROSPECT + SAIL model. The results showed that retrieval with the vegetation indices as the optimization comparison targets of the cost function performed better than retrieval with all spectral reflectances. The correlation coefficients (R2) between the measured and estimated values of Cab, Cw, and LAI were 90.8%, 95.7%, and 99.7%, and the root mean square errors of Cab, Cw, and LAI were 4.73 microg x cm(-2), 0.001 g x cm(-2), and 0.08, respectively. It was suggested that adopting vegetation indices as the optimization comparison targets of the cost function could effectively improve the efficiency and precision of the retrieval of biochemical parameters based on the PROSPECT + SAIL model.
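The retrieval loop can be sketched as below, with a closed-form surrogate standing in for the PROSPECT + SAIL forward model and a basic particle swarm optimizer minimizing the squared difference between simulated and "observed" vegetation indices; the surrogate, index definitions and parameter bounds are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_forward(params):
    """Stand-in for the PROSPECT+SAIL forward model: maps (Cab, Cw, LAI) to three
    synthetic 'vegetation indices'. Illustration of the optimization loop only."""
    cab, cw, lai = params
    return np.array([1 - np.exp(-0.5 * lai),        # NDVI-like index
                     cab / (cab + 30.0),            # chlorophyll-sensitive index
                     1 - np.exp(-40.0 * cw)])       # water-sensitive index

true_params = np.array([45.0, 0.015, 3.0])          # Cab (ug/cm2), Cw (g/cm2), LAI
observed = toy_forward(true_params)

def cost(params):
    return np.sum((toy_forward(params) - observed) ** 2)

# Basic particle swarm optimization over the three biochemical parameters.
lower, upper = np.array([10.0, 0.001, 0.5]), np.array([80.0, 0.05, 7.0])
n_particles, n_iter, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
pos = rng.uniform(lower, upper, size=(n_particles, 3))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 3)), rng.random((n_particles, 3))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lower, upper)
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("Retrieved (Cab, Cw, LAI):", np.round(gbest, 4))
```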
Characterising exoplanet atmospheres with SPHERE: the HR 8799 system with Exo-REM and NEMESIS
NASA Astrophysics Data System (ADS)
Baudino, J.-L.; Bonnefoy, M.; Vigan, A.; Irwin, P. J.
2017-12-01
The characterisation of exoplanets has recently advanced thanks to the arrival of the second generation of direct-imaging instruments, especially SPHERE. The resolution and wavelength range currently available give access to increased accuracy and to a larger number of physical parameters that can be constrained. One of the first targets of SPHERE was the HR 8799 system. The four planets were characterised using four different forward models, including Exo-REM. We complement this work by using NEMESIS, a retrieval code.
Radar target classification studies: Software development and documentation
NASA Astrophysics Data System (ADS)
Kamis, A.; Garber, F.; Walton, E.
1985-09-01
Three computer programs were developed to process and analyze calibrated radar returns. The first program, called DATABASE, was developed to create and manage a random-access data base. The second program, called FTRAN DB, was developed to process horizontal- and vertical-polarization radar returns into different formats (i.e., time domain, circular polarizations and polarization parameters). The third program, called RSSE, was developed to simulate a variety of radar systems and to evaluate their ability to identify radar returns. Complete computer listings are included in the appendix volumes.
MIYAZAKI, AKIRA; MIYAKE, HIDEAKI; HARADA, KEN-ICHI; INOUE, TAKA-AKI; FUJISAWA, MASATO
2015-01-01
The aim of this study was to evaluate the oncological efficacy of tyrosine kinase inhibitors (TKIs) as first-line molecular-targeted therapy for Japanese patients with metastatic renal cell carcinoma (mRCC) in a routine clinical setting. This study included a total of 271 consecutive Japanese patients with TKI-naive mRCC, including 172 patients who received sorafenib and 99 who received sunitinib for ≥2 months as a first-line molecular-targeted agent. The prognostic outcomes of these patients were retrospectively assessed. During the observation period (median, 19 months), 126 patients (46.5%) succumbed to the disease and the median overall survival (OS) for the entire cohort was 33.1 months. The univariate analysis identified the Memorial Sloan-Kettering Cancer Center (MSKCC) classification, C-reactive protein (CRP) level, lymph node metastasis, bone metastasis, liver metastasis, histological subtype and sarcomatoid characteristics as significant predictors of OS. Of these factors, only the MSKCC classification, CRP level and liver metastasis were found to be independently associated with OS in the multivariate analysis. Furthermore, there were significant differences in OS according to the positivity for these 3 independent risk factors (i.e., negative for all factors vs. positive for a single factor vs. positive for 2 or 3 factors). These findings suggest that the introduction of TKIs as first-line molecular-targeted agents resulted in favorable cancer control outcomes in Japanese mRCC patients and that the prognosis of these patients may be stratified by 3 potential parameters, including the MSKCC classification, CRP level and liver metastasis. PMID:26137274
SAR Image Simulation of Ship Targets Based on Multi-Path Scattering
NASA Astrophysics Data System (ADS)
Guo, Y.; Wang, H.; Ma, H.; Li, K.; Xia, Z.; Hao, Y.; Guo, H.; Shi, H.; Liao, X.; Yue, H.
2018-04-01
Synthetic Aperture Radar (SAR) plays an important role in the classification and recognition of ship targets because of its all-weather capability and fine resolution. In SAR images, in addition to sea clutter, the sea surface also influences the radar echo through the so-called multipath effect. These multipath effects generate extra "pseudo images", which may distort the target image and affect the estimation of the characteristic parameters. In this paper, the multipath effect of a rough sea surface and its influence on the estimation of ship characteristic parameters are studied. The imaging of the first and second reflections from the sea surface is presented. The artifacts not only overlap with the image of the target itself, but may also appear in the sea near the target area. They are difficult to distinguish, and these artifacts affect the estimated length and width of the ship.
Precision ephemerides for gravitational-wave searches - III. Revised system parameters of Sco X-1
NASA Astrophysics Data System (ADS)
Wang, L.; Steeghs, D.; Galloway, D. K.; Marsh, T.; Casares, J.
2018-06-01
Neutron stars in low-mass X-ray binaries are considered promising candidate sources of continuous gravitational waves. These neutron stars are typically rotating many hundreds of times a second. The process of accretion can potentially generate and support non-axisymmetric distortions to the compact object, resulting in persistent emission of gravitational waves. We present a study of existing optical spectroscopic data for Sco X-1, a prime target for continuous gravitational-wave searches, with the aim of providing revised constraints on key orbital parameters required for a directed search with Advanced LIGO data. From a circular orbit fit to an improved radial velocity curve of the Bowen emission components, we derived an updated orbital period and ephemeris. Centre-of-symmetry measurements from the Bowen Doppler tomogram yield a centre of the disc component of 90 km s-1, which we interpret as a revised upper limit to the projected orbital velocity of the neutron star, K1. By implementing Monte Carlo binary parameter calculations, and imposing new limits on K1 and the rotational broadening, we obtained a complete set of dynamical system parameter constraints, including a new range for K1 of 40-90 km s-1. Finally, we discuss the implications of the updated orbital parameters for future continuous-wave searches.
NASA Astrophysics Data System (ADS)
Zhu, Gaofeng; Li, Xin; Ma, Jinzhu; Wang, Yunquan; Liu, Shaomin; Huang, Chunlin; Zhang, Kun; Hu, Xiaoli
2018-04-01
Sequential Monte Carlo (SMC) samplers have become increasingly popular for estimating posterior parameter distributions with the non-linear dependency structures and multiple modes often present in hydrological models. However, the explorative capability and efficiency of the sampler depend strongly on the efficiency of the move step of the SMC sampler. In this paper we present a new SMC sampler, the Particle Evolution Metropolis Sequential Monte Carlo (PEM-SMC) algorithm, which is well suited to handle unknown static parameters of hydrologic models. The PEM-SMC sampler is inspired by the work of Liang and Wong (2001) and operates by incorporating the strengths of the genetic algorithm, the differential evolution algorithm and the Metropolis-Hastings algorithm into the SMC framework. We also prove that the sampler admits the target distribution as a stationary distribution. Two case studies, including a multi-dimensional bimodal normal distribution and a conceptual rainfall-runoff hydrologic model considering first parameter uncertainty only and then parameter and input uncertainty simultaneously, show that the PEM-SMC sampler is generally superior to other popular SMC algorithms in handling high-dimensional problems. The study also indicates that it may be important to account for model structural uncertainty by using multiple different hydrological models within the SMC framework in future studies.
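The sketch below is a generic tempered SMC sampler (reweight, resample, Metropolis move) applied to a toy bimodal target, meant only to make the reweight/resample/move structure concrete; it is not the authors' PEM-SMC code, whose move step additionally uses genetic-algorithm and differential-evolution operators. The reference density, tempering schedule, and step size are illustrative assumptions.

```python
import numpy as np

def log_target(theta):
    """Toy bimodal density standing in for a hydrological posterior."""
    return np.logaddexp(-0.5 * np.sum((theta - 3.0) ** 2, axis=-1),
                        -0.5 * np.sum((theta + 3.0) ** 2, axis=-1))

def log_ref(theta):
    """Broad Gaussian reference density (easy to sample from)."""
    return -0.5 * np.sum(theta ** 2, axis=-1) / 25.0

def smc_sampler(n_particles=1000, n_stages=40, dim=2, step=0.7, seed=0):
    """Tempered SMC: reweight -> resample -> Metropolis move at each stage.
    PEM-SMC augments the move step with genetic-algorithm and differential-
    evolution style particle interactions; a plain random-walk move is used here."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(0.0, 1.0, n_stages + 1)
    theta = rng.normal(0.0, 5.0, size=(n_particles, dim))   # draws from the reference
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # 1) incremental weights between successive tempered densities
        logw = (b - b_prev) * (log_target(theta) - log_ref(theta))
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # 2) multinomial resampling
        theta = theta[rng.choice(n_particles, size=n_particles, p=w)]
        # 3) random-walk Metropolis move targeting pi_b = ref^(1-b) * target^b
        prop = theta + step * rng.normal(size=theta.shape)
        log_pi = (1 - b) * log_ref(theta) + b * log_target(theta)
        log_pi_prop = (1 - b) * log_ref(prop) + b * log_target(prop)
        accept = np.log(rng.random(n_particles)) < (log_pi_prop - log_pi)
        theta[accept] = prop[accept]
    return theta

samples = smc_sampler()
print(samples.mean(axis=0))   # near 0 if both modes (+3, -3) are populated
```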
The DINGO dataset: a comprehensive set of data for the SAMPL challenge
NASA Astrophysics Data System (ADS)
Newman, Janet; Dolezal, Olan; Fazio, Vincent; Caradoc-Davies, Tom; Peat, Thomas S.
2012-05-01
Part of the latest SAMPL challenge was to predict how a small fragment library of 500 commercially available compounds would bind to a protein target. In order to assess the modellers' work, a reasonably comprehensive set of data was collected using a number of techniques. These included surface plasmon resonance, isothermal titration calorimetry, protein crystallization and protein crystallography. Using these techniques we could determine the kinetics of fragment binding, the energy of binding, how this affects the ability of the target to crystallize, and when the fragment did bind, the pose or orientation of binding. Both the final data set and all of the raw images have been made available to the community for scrutiny and further work. This overview sets out to give the parameters of the experiments done and what might be done differently for future studies.
New method to design stellarator coils without the winding surface
Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; ...
2017-11-06
Finding an easy-to-build coil set has been a critical issue for stellarator design for decades. Conventional approaches assume a toroidal 'winding' surface, but a poorly chosen winding surface can unnecessarily constrain the coil optimization algorithm. This article presents a new method to design coils for stellarators. Each discrete coil is represented as an arbitrary, closed, one-dimensional curve embedded in three-dimensional space. A target function to be minimized that includes both physical requirements and engineering constraints is constructed. The derivatives of the target function with respect to the parameters describing the coil geometries and currents are calculated analytically. A numerical code, named FOCUS (Flexible Optimized Coils Using Space curves), has been developed. Furthermore, applications to a simple stellarator configuration, W7-X, and LHD vacuum fields are presented.
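As a hedged toy illustration of the idea of optimizing free space-curve coils against a target function, the sketch below parameterizes one closed curve by a few Fourier coefficients and minimizes a made-up objective combining a 'physics' term (stay on a target circle) and an 'engineering' term (coil length penalty). Unlike FOCUS, it uses scipy's numerical derivatives rather than analytic ones, and every quantity in it is an assumption for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

def coil_points(params, n_pts=64):
    """Closed 3-D curve from a truncated Fourier representation (2 harmonics
    per coordinate), evaluated at n_pts points -- the coil is a free space
    curve; no winding surface is assumed."""
    t = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
    c = params.reshape(3, 5)                     # per axis: a0, a1, b1, a2, b2
    basis = np.stack([np.ones_like(t), np.cos(t), np.sin(t),
                      np.cos(2 * t), np.sin(2 * t)])
    return (c @ basis).T                         # (n_pts, 3)

def target_function(params, r_target=2.0, w_len=0.1):
    """Toy objective: 'physics' term pulls the coil onto a circle of radius
    r_target in the z=0 plane; 'engineering' term penalises coil length."""
    pts = coil_points(params)
    radial_err = np.hypot(pts[:, 0], pts[:, 1]) - r_target
    physics = np.mean(radial_err ** 2 + pts[:, 2] ** 2)
    segs = np.diff(np.vstack([pts, pts[:1]]), axis=0)
    length = np.sum(np.linalg.norm(segs, axis=1))
    return physics + w_len * length

x0 = 0.1 * np.random.default_rng(0).standard_normal(15)
x0[1] = x0[7] = 1.0                              # start from a rough unit circle
res = minimize(target_function, x0, method="BFGS")
print(res.fun, np.round(res.x[:6], 3))
```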
Turner, Richard; Joseph, Adrian; Titchener-Hooker, Nigel; Bender, Jean
2017-08-04
Cell harvesting is the separation or retention of cells and cellular debris from the supernatant containing the target molecule. Selection of the harvest method strongly depends on the type of cells, mode of bioreactor operation, process scale, and characteristics of the product and cell culture fluid. Most traditional harvesting methods use some form of filtration, centrifugation, or a combination of both for cell separation and/or retention. Filtration methods include normal flow depth filtration and tangential flow microfiltration. The ability to predictably scale down the selected harvest method helps to ensure successful production and is critical for conducting small-scale characterization studies for confirming parameter targets and ranges. In this chapter we describe centrifugation and depth filtration harvesting methods, share strategies for harvest optimization, present recent developments in centrifugation scale-down models, and review alternative harvesting technologies.
NASA Astrophysics Data System (ADS)
Batzias, Dimitris F.; Pollalis, Yannis A.
2012-12-01
In this paper, optimal environmental policy for reclamation of land unearthed in lignite mines is defined as a strategic target. The tactics for achieving this target include estimation of the optimal time lag between the complete exploitation of each lignite site (a segment of the whole lignite field) and its reclamation. Subsidizing of reclamation has been determined as a function of this time lag, and the relevant implementation is presented for parameter values valid for the Greek economy. We show that the methodology we have developed gives reasonable quantitative results within the norms imposed by legislation. Moreover, the interconnection between strategy and tactics becomes evident, since the former determines the latter by deduction and the latter revises the former by induction over the time course of land reclamation.
Parametric entry corridors for lunar/Mars aerocapture missions
NASA Technical Reports Server (NTRS)
Ling, Lisa M.; Baseggio, Franco M.; Fuhry, Douglas P.
1991-01-01
Parametric atmospheric entry corridor data are presented for Earth and Mars aerocapture. Parameter ranges were dictated by the range of mission designs currently envisioned as possibilities for the Human Exploration Initiative (HEI). These data, while not providing a means for exhaustive evaluation of aerocapture performance, should prove to be a useful aid for preliminary mission design and evaluation. Entry corridors are expressed as ranges of allowable vacuum periapse altitude of the planetary approach hyperbolic orbit, with a chart provided for conversion to an approximate flight path angle corridor at entry interface (125 km altitude). The corridor boundaries are defined by open-loop aerocapture trajectories which satisfy boundary constraints while utilizing the full aerodynamic control capability of the vehicle (i.e., full lift-up or full lift-down). Parameters examined were limited to those of greatest importance from an aerocapture performance standpoint, including the approach orbit hyperbolic excess velocity, the vehicle lift-to-drag ratio, the maximum aerodynamic load factor limit, and the apoapse of the target orbit. The impact of atmospheric density bias uncertainties is also included. The corridor data are presented in graphical format, and examples of the utilization of these graphs for mission design and evaluation are included.
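The report's chart maps vacuum periapse altitude to an approximate entry flight-path angle; the snippet below reproduces that kind of conversion with standard conic relations (energy and angular-momentum conservation on the approach hyperbola). It is a generic sketch, not the report's chart: the Earth constants, the 3 km/s excess speed, and the sample altitudes are assumptions.

```python
import numpy as np

MU_EARTH = 3.986004e5      # km^3/s^2
R_EARTH = 6378.137         # km
H_ENTRY = 125.0            # km, entry-interface altitude used in the report

def entry_flight_path_angle(v_inf, h_periapsis, mu=MU_EARTH, r_eq=R_EARTH):
    """Flight-path angle (deg, negative = descending) at the entry interface
    for a hyperbolic approach with excess speed v_inf (km/s) and vacuum
    periapsis altitude h_periapsis (km)."""
    r_p = r_eq + h_periapsis
    r_ei = r_eq + H_ENTRY
    v_p = np.sqrt(v_inf**2 + 2.0 * mu / r_p)       # vis-viva at periapsis
    h = r_p * v_p                                  # specific angular momentum
    v_ei = np.sqrt(v_inf**2 + 2.0 * mu / r_ei)     # speed at entry interface
    return -np.degrees(np.arccos(np.clip(h / (r_ei * v_ei), -1.0, 1.0)))

# Example: map a periapsis-altitude corridor to entry flight-path angles.
for h_p in (40.0, 50.0, 60.0):
    print(h_p, round(entry_flight_path_angle(v_inf=3.0, h_periapsis=h_p), 3))
```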
Yonai, Shunsuke; Matsufuji, Naruhiro; Akahane, Keiichi
2018-04-23
The aim of this work was to estimate typical dose equivalents to out-of-field organs during carbon-ion radiotherapy (CIRT) with a passive beam for prostate cancer treatment. Additionally, sensitivity analyses of organ doses for various beam parameters and phantom sizes were performed. Because the CIRT out-of-field dose depends on the beam parameters, the typical values of those parameters were determined from statistical data on the target properties of patients who received CIRT at the Heavy-Ion Medical Accelerator in Chiba (HIMAC). Using these typical beam-parameter values, out-of-field organ dose equivalents during CIRT for typical prostate treatment were estimated by Monte Carlo simulations using the Particle and Heavy-Ion Transport Code System (PHITS) and the ICRP reference phantom. The results showed that the dose decreased with distance from the target, ranging from 116 mSv in the testes to 7 mSv in the brain. The organ dose equivalents per treatment dose were lower than those either in 6-MV intensity-modulated radiotherapy or in brachytherapy with an Ir-192 source for organs within 40 cm of the target. Sensitivity analyses established that the differences from typical values were within ∼30% for all organs, except the sigmoid colon. The typical out-of-field organ dose equivalents during passive-beam CIRT were shown. The low sensitivity of the dose equivalent in organs farther than 20 cm from the target indicated that individual dose assessments required for retrospective epidemiological studies may be limited to organs around the target in cases of passive-beam CIRT for prostate cancer. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Nanotube Activities at NASA-Johnson Space Center
NASA Technical Reports Server (NTRS)
Arepalli, Sivaram
2004-01-01
Nanotube activities at NASA-Johnson Space Center include production, purification, characterization as well as applications of single wall carbon nanotubes (SWCNTs). A parametric study of the pulsed laser ablation process was recently completed to monitor the effect of production parameters including temperature, buffer gas, flow rate, pressure, and laser fluence. Enhancement of production is achieved by rastering the graphite target and by increasing the target surface temperature with a cw laser. In-situ diagnostics during production included time resolved passive emission and laser induced fluorescence from the plume. The improvement of the purity by a variety of steps in the purification process is monitored by characterization techniques including SEM, TEM, Raman, UV-VIS-NIR and TGA. A recently established NASA-JSC protocol for SWCNT characterization is undergoing revision with feedback from the nanotube community. Efforts at JSC over the past five years in composites have centered on structural polymer/nanotube systems. Recent activities broadened this focus to multifunctional materials, supercapacitors, fuel cells, regenerable CO2 absorbers, electromagnetic shielding, radiation dosimetry and thermal management systems of interest for human space flight. Preliminary tests indicate improvement of performance in most of these applications because of the large surface area as well as the high electrical and thermal conductivity exhibited by SWCNTs. Comparison with existing technologies and possible future improvements in the SWCNT materials will be presented.
NASA Astrophysics Data System (ADS)
Dobnik, David; Štebih, Dejan; Blejec, Andrej; Morisset, Dany; Žel, Jana
2016-10-01
The advantages of digital PCR technology are already well documented. One way to achieve better cost efficiency of the technique is to use it in a multiplexing strategy. Droplet digital PCR platforms, which include two fluorescence filters, support at least duplex reactions, and with some development and optimization higher multiplexing is possible. The present study not only shows the development of multiplex assays in droplet digital PCR, but also presents the first thorough evaluation of several parameters in such multiplex digital PCR. Two 4-plex assays were developed for quantification of 8 different DNA targets (7 genetically modified maize events and a maize endogene). Per assay, two of the targets were labelled with one fluorophore and two with another. As current analysis software does not support analysis beyond duplex reactions, a new R- and Shiny-based web application analysis tool (http://bit.ly/ddPCRmulti) was developed that automates the analysis of 4-plex results. In conclusion, the two developed multiplex assays are suitable for quantification of GMO maize events, and the same approach can be used in any other field with a need for accurate and reliable quantification of multiple DNA targets.
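For orientation, the sketch below shows the basic droplet-digital-PCR arithmetic that any such analysis tool builds on: threshold-based classification of droplets in two fluorescence channels and Poisson conversion of positive-droplet counts to a concentration. It is not the ddPCRmulti code; higher-order multiplexing additionally separates targets by amplitude within a channel, and the 0.85 nL droplet volume and the thresholds are assumed illustrative values.

```python
import numpy as np

def ddpcr_concentration(n_positive, n_total, droplet_volume_nl=0.85):
    """Poisson estimate of target copies per microlitre of reaction from
    droplet counts (standard ddPCR relation: lambda = -ln(fraction negative))."""
    neg_fraction = 1.0 - n_positive / n_total
    lam = -np.log(neg_fraction)                 # mean copies per droplet
    return lam / (droplet_volume_nl * 1e-3)     # copies per microlitre

def classify_droplets(ch1, ch2, thr1, thr2):
    """Assign each droplet to one of four clusters (negative, ch1+, ch2+,
    double positive) from two fluorescence channels using amplitude thresholds."""
    pos1 = np.asarray(ch1) > thr1
    pos2 = np.asarray(ch2) > thr2
    return {
        "negative": int(np.sum(~pos1 & ~pos2)),
        "ch1_only": int(np.sum(pos1 & ~pos2)),
        "ch2_only": int(np.sum(~pos1 & pos2)),
        "double":   int(np.sum(pos1 & pos2)),
    }

# Toy example: 20000 droplets, 1800 positive for a given target.
print(round(ddpcr_concentration(1800, 20000), 1), "copies/uL")
```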
Loizeau, Vincent; Ciffroy, Philippe; Roustan, Yelva; Musson-Genon, Luc
2014-09-15
Semi-volatile organic compounds (SVOCs) are subject to long-range atmospheric transport because of successive transport-deposition-reemission processes. Several experimental data available in the literature suggest that soil is a non-negligible contributor of SVOCs to the atmosphere. Coupling soil and atmosphere in integrated models and simulating reemission processes can therefore be essential for estimating the atmospheric concentration of several pollutants. However, the sources of uncertainty and variability are multiple (soil properties, meteorological conditions, chemical-specific parameters) and can significantly influence the determination of reemissions. In order to identify the key parameters in reemission modeling and their effect on global modeling uncertainty, we conducted a sensitivity analysis targeted on the 'reemission' output variable. Different parameters were tested, including soil properties, partition coefficients and meteorological conditions. We performed EFAST sensitivity analysis for four chemicals (benzo-a-pyrene, hexachlorobenzene, PCB-28 and lindane) and different spatial scenarios (regional and continental scales). Partition coefficients between air, solid and water phases are influential, depending on the precision of the data and the global behavior of the chemical. Reemissions showed lower sensitivity to soil parameters (soil organic matter and water contents at field capacity and wilting point). A mapping of these parameters at a regional scale is sufficient to correctly estimate reemissions when compared to other sources of uncertainty. Copyright © 2014 Elsevier B.V. All rights reserved.
Wallops waveform analysis of SEASAT-1 radar altimeter data
NASA Technical Reports Server (NTRS)
Hayne, G. S.
1980-01-01
Fitting a six-parameter model waveform to over-ocean experimental data from the waveform samplers in the SEASAT-1 radar altimeter is described. The fitted parameters include the waveform risetime, skewness, and track point; from these can be obtained estimates of the ocean surface significant waveheight, the surface skewness, and a correction to the altimeter's on-board altitude measurement, respectively. Among the difficulties encountered are waveform sampler gains differing from calibration mode data, and incorporating the actual SEASAT-1 sampled point target response in the fitted waveform. There are problems in using the spacecraft-derived attitude angle estimates, and a different attitude estimator is developed. Points raised in this report have consequences for the SEASAT-1 radar altimeter's ocean surface measurements and for the design and calibration of radar altimeters in future oceanographic satellites.
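A minimal sketch of this kind of waveform fit is given below: a simplified Brown-like model (error-function leading edge, exponential plateau decay, noise floor) fitted by nonlinear least squares to synthetic gate samples. It omits the skewness term of the full six-parameter model, and all parameter values are illustrative assumptions rather than SEASAT-1 numbers.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def brown_like(t, amp, t0, sigma, decay, noise):
    """Simplified ocean-return waveform: error-function leading edge (rise
    time ~ significant wave height), exponential plateau decay, noise floor.
    The skewness term of the full six-parameter model is omitted here."""
    edge = 0.5 * (1.0 + erf((t - t0) / (np.sqrt(2.0) * sigma)))
    tail = np.exp(-decay * np.clip(t - t0, 0.0, None))
    return noise + amp * edge * tail

# Synthetic "waveform sampler" data and a least-squares fit.
t = np.arange(60.0)                       # gate index (arbitrary units)
truth = (1.0, 25.0, 3.0, 0.01, 0.05)
rng = np.random.default_rng(0)
y = brown_like(t, *truth) + 0.02 * rng.standard_normal(t.size)
popt, _ = curve_fit(brown_like, t, y, p0=(0.8, 20.0, 2.0, 0.02, 0.0))
print(np.round(popt, 3))                  # recovered amp, epoch, rise, decay, noise
```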
Ertl, P
1998-02-01
Easy to use, interactive, and platform-independent WWW-based tools are ideal for development of chemical applications. By using the newly emerging Web technologies such as Java applets and sophisticated scripting, it is possible to deliver powerful molecular processing capabilities directly to the desk of synthetic organic chemists. In Novartis Crop Protection in Basel, a Web-based molecular modelling system has been in use since 1995. In this article two new modules of this system are presented: a program for interactive calculation of important hydrophobic, electronic, and steric properties of organic substituents, and a module for substituent similarity searches enabling the identification of bioisosteric functional groups. Various possible applications of calculated substituent parameters are also discussed, including automatic design of molecules with the desired properties and creation of targeted virtual combinatorial libraries.
Shock-induced damage in rocks: Application to impact cratering
NASA Astrophysics Data System (ADS)
Ai, Huirong
Shock-induced damage beneath impact craters is studied in this work. Two representative terrestrial rocks, San Marcos granite and Bedford limestone, are chosen as test targets. Impacts into the rock targets with different combinations of projectile material, size, impact angle, and impact velocity are carried out at cm scale in the laboratory. Shock-induced damage and fracturing cause large-scale compressional wave velocity reduction in the recovered target beneath the impact crater. The shock-induced damage is measured by mapping the compressional wave velocity reduction in the recovered target. A cm-scale nondestructive tomography technique is developed for this purpose. This technique is shown to be effective in mapping the damage in San Marcos granite, and the inverted velocity profile is in very good agreement with results obtained by dicing and by cutting the target open directly. Both compressional velocity and attenuation are measured in three orthogonal directions on cubes prepared from one granite target impacted by a lead bullet at 1200 m/s. Anisotropy is observed in both results, but attenuation appears to be a more useful parameter than acoustic velocity in studying the orientation of cracks. Our experiments indicate that the shock-induced damage is a function of impact conditions, including projectile type and size, impact velocity, and target properties. Combined with other crater phenomena such as crater diameter, depth, ejecta, etc., shock-induced damage can be used as an important yet not well recognized constraint on impact history. The shock-induced damage is also calculated numerically and compared with the experiments for a few representative shots. The Johnson-Holmquist (JH) strength and failure model, initially developed for ceramics, is applied to geological materials. Strength is a complicated function of pressure, strain, strain rate, and damage. The JH model, coupled with a crack softening model, is used to describe both the inelastic response of rocks in the compressive field near the impact source and the tensile failure in the far field. The model parameters are determined either from direct static measurements or from indirect numerical adjustment. The agreement between the simulation and experiment is very encouraging.
Ren, Chong; McGrath, Colman; Jin, Lijian; Zhang, Chengfei; Yang, Yanqi
2016-09-01
This study aimed to systematically assess the parameter-specific effects of the diode low-level laser on human gingival fibroblasts (HGFs) and human periodontal ligament fibroblasts (HPDLFs). An extensive search was performed in major electronic databases including PubMed (1997), EMBASE (1947) and Web of Science (1956) and supplemented by hand search of reference lists and relevant laser journals for cell culture studies investigating the effect of diode low-level lasers on HGFs and HPDLFs published from January 1995 to December 2015. A total of 21 studies were included after screening 324 independent records, amongst which eight targeted HPDLFs and 13 focussed on HGFs. The diode low-level laser showed positive effects on promoting fibroblast proliferation and osteogenic differentiation and modulating cellular inflammation via changes in gene expression and the release of growth factors, bone-remodelling markers or inflammatory mediators in a parameter-dependent manner. Repeated irradiations with wavelengths in the red and near-infrared range and at an energy density below 16 J/cm(2) elicited favourable responses. However, considerable variations and weaknesses in the study designs and laser protocols limited the interstudy comparison and clinical transition. Current evidence showed that diode low-level lasers with adequate parameters stimulated the proliferation and modulated the inflammation of fibroblasts derived from human periodontal tissue. However, further in vitro studies with better designs and more appropriate study models and laser parameters are anticipated to provide sound evidence for clinical studies and practice.
Chiroli, Silvia; Mattin, Caroline; Belozeroff, Vasily; Perrault, Louise; Mitchell, Dominic; Gioni, Ioanna
2012-10-29
Secondary hyperparathyroidism (SHPT) is associated with mortality in patients with chronic kidney disease (CKD), but the economic consequences of SHPT have not been adequately studied in the European population. We assessed the relationship between SHPT parameters (intact parathyroid hormone [iPTH], calcium, and phosphate) and hospitalisations, medication use, and associated costs among CKD patients in Europe. The analysis of this retrospective cohort study used records of randomly selected patients who underwent haemodialysis between January 1, 2005 and December 31, 2006 at participating European Fresenius Medical Care facilities in 10 countries. Patients had ≥ 1 iPTH value recorded, and ≥ 1 month of follow-up after a 3-month baseline period during which SHPT parameters were assessed. Time at risk was post-baseline until death, successful renal transplantation, loss to follow-up, or the end of follow-up. Outcomes included cost per patient-month, rates of hospitalisations (cardiovascular disease [CVD], fractures, and parathyroidectomy [PTX]), and use of SHPT-, diabetes-, and CVD-related medications. National costs were applied to hospitalisations and medication use. Generalised linear models compared costs across strata of iPTH, total calcium, and phosphate, adjusting for baseline covariates. There were 6369 patients included in the analysis. Mean ± SD person-time at risk was 13.1 ± 6.4 months. Patients with iPTH > 600 pg/mL had a higher hospitalisation rate than those with lower iPTH. Hospitalisation rates varied little across calcium and phosphate levels. SHPT-related medication use varied with iPTH, calcium, and phosphate. After adjusting for demographic and clinical variables, patients with baseline iPTH > 600 pg/mL had 41% (95% CI: 25%, 59%) higher monthly total healthcare costs compared with those with iPTH in the K/DOQI target range (150-300 pg/mL). Patients with baseline phosphate and total calcium levels above target ranges (1.13-1.78 mmol/L and 2.10-2.37 mmol/L, respectively) had 38% (95% CI: 27%, 50%) and 8% (95% CI: 0%, 17%) higher adjusted monthly costs, respectively. Adjusted costs were 25% (95% CI: 18%, 32%) lower among patients with baseline phosphate levels below the target range. Results were consistent in sensitivity analyses. These data suggest that elevated SHPT parameters increase the economic burden of CKD in Europe.
The Gaia-ESO Survey: Calibration strategy
NASA Astrophysics Data System (ADS)
Pancino, E.; Lardo, C.; Altavilla, G.; Marinoni, S.; Ragaini, S.; Cocozza, G.; Bellazzini, M.; Sabbi, E.; Zoccali, M.; Donati, P.; Heiter, U.; Koposov, S. E.; Blomme, R.; Morel, T.; Símon-Díaz, S.; Lobel, A.; Soubiran, C.; Montalban, J.; Valentini, M.; Casey, A. R.; Blanco-Cuaresma, S.; Jofré, P.; Worley, C. C.; Magrini, L.; Hourihane, A.; François, P.; Feltzing, S.; Gilmore, G.; Randich, S.; Asplund, M.; Bonifacio, P.; Drew, J. E.; Jeffries, R. D.; Micela, G.; Vallenari, A.; Alfaro, E. J.; Allende Prieto, C.; Babusiaux, C.; Bensby, T.; Bragaglia, A.; Flaccomio, E.; Hambly, N.; Korn, A. J.; Lanzafame, A. C.; Smiljanic, R.; Van Eck, S.; Walton, N. A.; Bayo, A.; Carraro, G.; Costado, M. T.; Damiani, F.; Edvardsson, B.; Franciosini, E.; Frasca, A.; Lewis, J.; Monaco, L.; Morbidelli, L.; Prisinzano, L.; Sacco, G. G.; Sbordone, L.; Sousa, S. G.; Zaggia, S.; Koch, A.
2017-02-01
The Gaia-ESO survey (GES) is now in its fifth and last year of observations and has produced tens of thousands of high-quality spectra of stars in all Milky Way components. This paper presents the strategy behind the selection of astrophysical calibration targets, ensuring that all GES results on radial velocities, atmospheric parameters, and chemical abundance ratios will be both internally consistent and easily comparable with other literature results, especially from other large spectroscopic surveys and from Gaia. The calibration of GES is particularly delicate because of (I) the large space of parameters covered by its targets, ranging from dwarfs to giants, from O to M stars; these targets span a wide range of metallicities and also include fast rotators, emission line objects, and stars affected by veiling; (II) the variety of observing setups, with different wavelength ranges and resolution; and (III) the choice of analyzing the data with many different state-of-the-art methods, each stronger in a different region of the parameter space, which ensures a better understanding of systematic uncertainties. An overview of the GES calibration and homogenization strategy is also given, along with some examples of the usage and results of calibrators in GES iDR4, which is the fourth internal GES data release and will form the basis of the next GES public data release. The agreement between GES iDR4 recommended values and reference values for the calibrating objects is very satisfactory. The average offsets and spreads are generally compatible with the GES measurement errors, which in iDR4 data already meet the requirements set by the main GES scientific goals. Based on data products from observations made with ESO Telescopes at the La Silla Paranal Observatory under programme IDs 188.B-3002 and 193.B-0936. Full Table 2 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/598/A5
NASA Astrophysics Data System (ADS)
Watkins, Wendell R.; Bean, Brent L.; Munding, Peter D.
1994-06-01
Recent field tests have provided excellent opportunities to use a new characterization tool associated with the Mobile Imaging Spectroscopy Laboratory (MISL) of the Battlefield Environment Directorate, formerly the U.S. Army Atmospheric Sciences Laboratory. The MISL large-area (1.8 by 1.8 m), uniform-temperature thermal target was used for characterization and isolation of phenomena which impact target contrast. By viewing the target board from closeup and distant ranges simultaneously with the MISL thermal imagers, the inherent scene content could be calibrated and the degrading effects of atmospheric propagation could be isolated. The target board is equipped with several spatial frequency bar patterns, but only the largest 3.5-cycle full area bar pattern was used for the distant range of 1.6 km. The quantities measured with the target board include the inherent background change, the contrast transmission, and the atmospheric modulation transfer function. The MISL target board has a unique design which makes it lightweight with near perfect transition between the hot and cold portions of the bar pattern. The heated portion of the target is an elongated rectangular oven which is tilted back at a 30 deg angle to form a 1.8 by 1.8 m square when viewed from the front. The cold bars are positioned in front of the heated oven surface and can be oriented in either the vertical or horizontal direction. The oven is mounted on a lightweight trailer for one- or two-man positioning. An attached metal and canvas structure is used to shield the entire target from both solar loading and cooling winds. The target board has a thin aluminum sheet front surface which is insulated from the oven's heating structure.
Single-Isocenter Multiple-Target Stereotactic Radiosurgery: Risk of Compromised Coverage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roper, Justin, E-mail: justin.roper@emory.edu; Department of Biostatistics and Bioinformatics, Winship Cancer Institute of Emory University, Atlanta, Georgia; Chanyavanich, Vorakarn
2015-11-01
Purpose: To determine the dosimetric effects of rotational errors on target coverage using volumetric modulated arc therapy (VMAT) for multitarget stereotactic radiosurgery (SRS). Methods and Materials: This retrospective study included 50 SRS cases, each with 2 intracranial planning target volumes (PTVs). Both PTVs were planned for simultaneous treatment to 21 Gy using a single-isocenter, noncoplanar VMAT SRS technique. Rotational errors of 0.5°, 1.0°, and 2.0° were simulated about all axes. The dose to 95% of the PTV (D95) and the volume covered by 95% of the prescribed dose (V95) were evaluated using multivariate analysis to determine how PTV coverage was related to PTV volume, PTV separation, and rotational error. Results: At 0.5° rotational error, D95 values and V95 coverage rates were ≥95% in all cases. For rotational errors of 1.0°, 7% of targets had D95 and V95 values <95%. Coverage worsened substantially when the rotational error increased to 2.0°: D95 and V95 values were >95% for only 63% of the targets. Multivariate analysis showed that PTV volume and distance to isocenter were strong predictors of target coverage. Conclusions: The effects of rotational errors on target coverage were studied across a broad range of SRS cases. In general, the risk of compromised coverage increased with decreasing target volume, increasing rotational error and increasing distance between targets. Multivariate regression models from this study may be used to quantify the dosimetric effects of rotational errors on target coverage given patient-specific input parameters of PTV volume and distance to isocenter.
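The geometric reason distance to isocenter predicts coverage loss can be made concrete with a one-line relation: a rigid rotation by angle θ displaces a point at distance d from the isocenter by a chord of length 2 d sin(θ/2). The snippet below tabulates this for a few assumed distances and the rotational errors studied; it is illustrative geometry, not the study's regression model.

```python
import numpy as np

def displacement_mm(distance_mm, rot_deg):
    """Chord-length displacement of a target centroid located `distance_mm`
    from the isocenter after a rigid rotation of `rot_deg` about the isocenter."""
    return 2.0 * distance_mm * np.sin(np.radians(rot_deg) / 2.0)

for d in (20.0, 50.0, 100.0):                      # mm from isocenter (assumed)
    shifts = [round(displacement_mm(d, a), 2) for a in (0.5, 1.0, 2.0)]
    print(f"d = {d:5.1f} mm -> shifts for 0.5/1.0/2.0 deg: {shifts} mm")
```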
Chatonnet, A; Hotelier, T; Cousin, X
1999-05-14
Cholinesterases are targets for organophosphorus compounds which are used as insecticides, chemical warfare agents and drugs for the treatment of diseases such as glaucoma or parasitic infections. The widespread use of these chemicals explains the growth of this area of research and the ever increasing number of sequences, structures, and biochemical data available. Future advances will depend upon effective management of existing information as well as upon creation of new knowledge. The goal of the ESTHER database is to facilitate retrieval and comparison of data about the structure and function of proteins presenting the alpha/beta hydrolase fold. Protein engineering and in vitro production of enzymes allow direct comparison of biochemical parameters. Kinetic parameters of enzymatic reactions are now included in the database. These parameters can be searched and compared with a table construction tool. ESTHER can be reached through the internet (http://www.ensam.inra.fr/cholinesterase). The full database or the specialised X-window client-server system can be downloaded from our ftp server (ftp://ftp.toulouse.inra.fr./pub/esther). Forms can be used to send updates or corrections directly from the web.
Analysis of the methods for assessing socio-economic development level of urban areas
NASA Astrophysics Data System (ADS)
Popova, Olga; Bogacheva, Elena
2017-01-01
The present paper provides a targeted analysis of current approaches (ratings) to the assessment of socio-economic development of urban areas. The survey focuses on identifying standardized methodologies for forming area assessment techniques that will result in developing a system of intelligent monitoring, dispatching, building management, scheduling and effective management of an administrative-territorial unit. This system is characterized by a complex hierarchical structure, including tangible and intangible properties (parameters, attributes). Investigating the abovementioned methods should increase the administrative-territorial unit's attractiveness for investors and residents. The research aims at studying methods for evaluating the socio-economic development level of territories of the Russian Federation. Experimental and theoretical territory estimating methods were reviewed. A complex analysis of the characteristics of the areas was carried out and evaluation parameters were determined. Integral indicators (resulting rating criteria values) as well as the overall rankings (parameters, characteristics) were analyzed. An inventory of the most widely used partial indicators (parameters, characteristics) of urban areas was compiled. The homogeneity of the resulting rating criteria values was verified and confirmed by determining the root mean square deviation, i.e. the divergence of indices. The principal shortcomings of the assessment methodologies were revealed. Assessment methods with enhanced effectiveness and homogeneity were proposed.
Cinque, Kathy; Jayasuriya, Niranjali
2010-12-01
To ensure the protection of drinking water, an understanding of the catchment processes which can affect water quality is important, as it enables targeted catchment management actions to be implemented. In this study, factor analysis (FA) and the comparison of event mean concentrations (EMCs) with baseline values were the techniques used to assess the relationships between water quality parameters and to link those parameters to processes within an agricultural drinking water catchment. FA found that 55% of the variance in the water quality data could be explained by the first factor, which was dominated by parameters usually associated with erosion. Inclusion of pathogenic indicators in an additional FA showed that Enterococcus and Clostridium perfringens (C. perfringens) were also related to the erosion factor. Analysis of the EMCs found that most parameters were significantly higher during periods of rainfall runoff. This study shows that the most dominant processes in an agricultural catchment are surface runoff and erosion. It also shows that it is these processes which mobilise pathogenic indicators and are therefore most likely to influence the transport of pathogens. Catchment management efforts need to focus on reducing the effect of these processes on water quality.
Electrochemical reduction of CerMet fuels for transmutation using surrogate CeO2-Mo pellets
NASA Astrophysics Data System (ADS)
Claux, B.; Souček, P.; Malmbeck, R.; Rodrigues, A.; Glatz, J.-P.
2017-08-01
One of the concepts chosen for the transmutation of minor actinides in Accelerator Driven Systems or fast reactors proposes the use of fuels and targets containing minor actinide oxides embedded in an inert matrix composed either of molybdenum metal (CerMet fuel) or of ceramic magnesium oxide (CerCer fuel). Since sufficient transmutation cannot be achieved in a single step, multi-recycling of the fuel is required, including recovery of the non-transmuted minor actinides. In the present work, a pyrochemical process for treatment of Mo metal inert matrix based CerMet fuels is studied, particularly electroreduction in molten chloride salt as a head-end step required prior to the main separation process. At the initial stage, different inactive pellets simulating the fuel and containing CeO2 as a minor actinide surrogate were examined. The main parameters studied for the process efficiency were the porosity and composition of the pellets and process parameters such as current density and passed charge. The results indicated the feasibility of the process, gave insight into its limiting parameters and defined the parameters for a future experiment on minor actinide containing material.
Measurement of the main and critical parameters for optimal laser treatment of heart disease
NASA Astrophysics Data System (ADS)
Kabeya, FB; Abrahamse, H.; Karsten, AE
2017-10-01
Laser light is frequently used in the diagnosis and treatment of patients. As with traditional treatments such as medication, bypass surgery, and minimally invasive procedures, laser treatment can also fail and present serious side effects. The true reason for laser treatment failure, or the side effects thereof, remains unknown. From the literature review conducted and the experimental results generated, we conclude that an optimal laser treatment for coronary artery disease (referred to here as heart disease) can be obtained if certain critical parameters are correctly measured and understood. These parameters include the laser power, the laser beam profile, the fluence rate, the treatment time, and the absorption and scattering coefficients of the target treatment tissue. Therefore, this paper proposes different, accurate methods for the measurement of these critical parameters to determine the optimal laser treatment of heart disease with a minimal risk of side effects. The results from the measurement of absorption and scattering properties can be used in a computer simulation package to predict the fluence rate. The computing technique is a program based on random numbers (Monte Carlo) and probability statistics to track the propagation of photons through biological tissue.
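As a hedged illustration of the Monte Carlo photon-transport idea mentioned above, the sketch below launches photon packets into a semi-infinite medium, samples exponential step lengths from the total attenuation coefficient, deposits the absorbed fraction of the packet weight at each interaction, and scatters isotropically. Real tissue codes use forward-peaked (e.g. Henyey-Greenstein) scattering and careful boundary handling; the optical coefficients here are illustrative assumptions.

```python
import numpy as np

def mc_absorption_profile(mu_a, mu_s, n_photons=5000, n_bins=50, dz=0.02, seed=0):
    """Tiny Monte Carlo sketch: photons launched along +z into a semi-infinite
    tissue take exponentially distributed steps, deposit a fraction mu_a/mu_t of
    their weight at each interaction and scatter isotropically.  Returns the
    absorbed energy per depth bin (arbitrary units)."""
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s
    absorbed = np.zeros(n_bins)
    for _ in range(n_photons):
        pos = np.zeros(3)
        direction = np.array([0.0, 0.0, 1.0])
        weight = 1.0
        while weight > 1e-4:
            step = -np.log(rng.random()) / mu_t       # free path length
            pos = pos + step * direction
            if pos[2] < 0.0:                          # escaped back out of the tissue
                break
            depth_bin = int(pos[2] / dz)
            if depth_bin < n_bins:
                absorbed[depth_bin] += weight * mu_a / mu_t
            weight *= mu_s / mu_t                     # surviving (scattered) fraction
            cos_t = 2.0 * rng.random() - 1.0          # isotropic scattering direction
            phi = 2.0 * np.pi * rng.random()
            sin_t = np.sqrt(1.0 - cos_t ** 2)
            direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    return absorbed

profile = mc_absorption_profile(mu_a=0.5, mu_s=10.0)  # cm^-1, illustrative values
print(profile[:5])
```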
Metrological reliability of optical coherence tomography in biomedical applications
NASA Astrophysics Data System (ADS)
Goloni, C. M.; Temporão, G. P.; Monteiro, E. C.
2013-09-01
Optical coherence tomography (OCT) has proven to be an efficient technique for imaging in vivo tissues, an optical biopsy with important perspectives as a diagnostic tool for quantitative characterization of tissue structures. Despite its established clinical use, there is no international standard that addresses the specific requirements for basic safety and essential performance of OCT devices for biomedical imaging. The present work studies the parameters necessary for conformity assessment of optoelectronic equipment used in biomedical applications, such as lasers, Intense Pulsed Light (IPL), and OCT, aiming to identify the potential requirements to be considered in the case of a future development of a particular standard for OCT equipment. In addition to some of the particular requirements standards for laser and IPL, also applicable to the metrological reliability analysis of OCT equipment, specific parameters for the evaluation of OCT have been identified, considering its biomedical application. For each parameter identified, the reporting of its information in the accompanying documents and/or its measurement has been recommended. Among the parameters for which the measurement requirement was recommended, including the uncertainty evaluation, the following are highlighted: optical radiation output, axial and transverse resolution, pulse duration and interval, and beam divergence.
Manipulating perceptual parameters in a continuous performance task.
Shalev, Nir; Humphreys, Glyn; Demeyere, Nele
2018-02-01
Sustained attention (SA) is among the most studied faculties of human cognition and is thought to be crucial for many aspects of behavior. Measuring SA often relies on performance in a continuous, low-demand task. Such continuous performance tasks (CPTs) have many variations, and sustained attention is typically estimated from variability in reaction times. While relying on reaction times may be useful in some cases, it can pose a challenge when working with clinical populations. To increase interpersonal variability in task parameters that do not rely on speed, researchers have increased demands on memory and response inhibition. These approaches, however, may be confounded when used to assess populations that suffer from multiple cognitive deficits. In the current study, we propose a new approach for increasing task variability by increasing the attentional demands. To do so, we created a new variation of a CPT - a masked version, in which inattention is more likely to cause a target to be misidentified. After establishing that masking indeed decreases target detection, we further investigated which task parameters may influence response biases. To do so, we contrasted two versions of the CPT with different target/distractor ratios. We then established how perceptual parameters can be controlled independently in a CPT. Following the experimental manipulations, we tested the MCCPT with aging controls and chronic stroke patients to ensure the task can be used with target populations. The results confirm the MCCPT as a task that provides high sensitivity without relying on reaction speed and is feasible for patients.
NASA Astrophysics Data System (ADS)
Thajudeen, Christopher
Through-the-wall imaging (TWI) is a topic of current interest due to its wide range of public safety, law enforcement, and defense applications. Among the various available technologies such as, acoustic, thermal, and optical imaging, which can be employed to sense and image targets of interest, electromagnetic (EM) imaging, in the microwave frequency bands, is the most widely utilized technology and has been at the forefront of research in recent years. The primary objectives for any Through-the-Wall Radar Imaging (TWRI) system are to obtain a layout of the building and/or inner rooms, detect if there are targets of interest including humans or weapons, determine if there are countermeasures being employed to further obscure the contents of a building or room of interest, and finally to classify the detected targets. Unlike conventional radar scenarios, the presence of walls, made of common construction materials such as brick, drywall, plywood, cinder block, and solid concrete, adversely affects the ability of any conventional imaging technique to properly image targets enclosed within building structures as the propagation through the wall can induce shadowing effects on targets of interest which may result in image degradation, errors in target localization, and even complete target masking. For many applications of TWR systems, the wall ringing signals are strong enough to mask the returns from targets not located a sufficient distance behind the wall, beyond the distance of the wall ringing, and thus without proper wall mitigation, target detection becomes extremely difficult. The results presented in this thesis focus on the development of wall parameter estimation, and intra-wall and wall-type characterization techniques for use in both the time and frequency domains as well as analysis of these techniques under various real world scenarios such as reduced system bandwidth scenarios, various wall backing scenarios, the case of inhomogeneous walls, presence of ground reflections, and situations where they may be applied to the estimation of the parameters associated with an interior wall. It is demonstrated through extensive computer simulations and laboratory experiments that, by proper exploitation of the electromagnetic characteristics of walls, one can efficiently extract the constitutive parameters associated with unknown wall(s) as well as to characterize and image the intra-wall region. Additionally, it is possible, to a large extent, to remove the negative wall effects, such as shadowing and incorrect target localization, as well as to enhance the imaging and classification of targets behind walls. In addition to the discussion of post processing the radar data to account for wall effects, the design of antenna elements used for transmit (Tx) and receive (Rx) operations in TWR radars is also discussed but limited to antennas for mobile, handheld, or UAV TWR systems which impose design requirements such as low profiles, wide operational bands, and in most cases lend themselves to fabrication using surface printing techniques. A new class of wideband antennas, formed though the use of printed metallic paths in the form of Peano and Hilbert space-filling curves (SFC) to provide top-loading properties that miniaturize monopole antenna elements, has been developed for applications in conformal and/or low profile antennas systems, such as mobile platforms for TWRI and communication systems. 
Additionally, boresight gain enhancements of a stair-like antenna geometry, through the addition of parasitic self-similar patches and gate-like ground plane structures, are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Remec, Igor; Ronningen, Reginald Martin
The research studied one-step and two-step Isotope Separation On Line (ISOL) targets for future radioactive beam facilities with high driver-beam power through advanced computer simulations. Uranium carbide in the form of foils was used as the target material because of the increasing demand for actinide targets in rare-isotope beam facilities and because such material was under development in ISAC at TRIUMF when this project started. Simulations of effusion were performed for one-step and two-step targets, and the effects of target dimensions and foil matrix were studied. Diffusion simulations were limited by the availability of diffusion parameters for UCx material at reduced density; however, the viability of the combined diffusion-effusion simulation methodology was demonstrated, and it could be used to extract physical parameters such as diffusion coefficients and effusion delay times from experimental isotope release curves. Dissipation of the heat from the isotope-producing targets is the limiting factor for high-power beam operation both for the direct and two-step targets. Detailed target models were used to simulate proton beam interactions with the targets to obtain the fission rates and power deposition distributions, which were then applied in the heat transfer calculations to study the performance of the targets. Results indicate that a direct target, with specifications matching the ISAC TRIUMF target, could operate in a 500-MeV proton beam at beam powers up to ~40 kW, producing ~8 × 10^13 fissions/s with a maximum temperature in UCx below 2200 °C. Targets with a larger radius allow higher beam powers and fission rates. For target radii in the range of 9 mm to 30 mm, the achievable fission rate increases almost linearly with target radius; however, the effusion delay time also increases linearly with target radius.
Scanned carbon beam irradiation of moving films: comparison of measured and calculated response
2012-01-01
Background Treatment of moving target volumes with scanned particle beams benefits from treatment planning that includes the time domain (4D). Part of 4D treatment planning is calculation of the expected result. These calculation codes should be verified against suitable measurements. We performed simulations and measurements to validate calculation of the film response in the presence of target motion. Methods All calculations were performed with GSI's treatment planning system TRiP. Interplay patterns between scanned particle beams and moving film detectors are very sensitive to slight deviations of the assumed motion parameters and therefore ideally suited to validate 4D calculations. In total, 14 film motion parameter combinations with lateral motion amplitudes of 8, 15, and 20 mm and 4 combinations for lateral motion including range changes were used. Experimental and calculated film responses were compared by relative difference, mean deviation in two regions-of-interest, as well as line profiles. Results Irradiations of stationary films resulted in a mean relative difference of -1.52% ± 2.06% of measured and calculated responses. In comparison to this reference result, measurements with translational film motion resulted in a mean difference of -0.92% ± 1.30%. In case of irradiations incorporating range changes with a stack of 5 films as detector the deviations increased to -6.4 ± 2.6% (-10.3 ± 9.0% if film in distal fall-off is included) in comparison to -3.6% ± 2.5% (-13.5% ± 19.9% including the distal film) for the stationary irradiation. Furthermore, the comparison of line profiles of 4D calculations and experimental data showed only slight deviations at the borders of the irradiated area. The comparisons of pure lateral motion were used to determine the number of motion states that are required for 4D calculations depending on the motion amplitude. 6 motion states per 10 mm motion amplitude are sufficient to calculate the film response in the presence of motion. Conclusions By comparison to experimental data, the 4D extension of GSI's treatment planning system TRiP has been successfully validated for film response calculations in the presence of target motion within the accuracy limitation given by film-based dosimetry. PMID:22462523
Optimization of microphysics in the Unified Model, using the Micro-genetic algorithm.
NASA Astrophysics Data System (ADS)
Jang, J.; Lee, Y.; Lee, H.; Lee, J.; Joo, S.
2016-12-01
This study focuses on parameter optimization of the microphysics in the Unified Model (UM) using the Micro-genetic algorithm (Micro-GA). Optimization of the UM microphysics is needed because microphysics in a Numerical Weather Prediction (NWP) model is important for Quantitative Precipitation Forecasting (QPF). The Micro-GA searches for optimal parameters on the basis of a fitness function. Five parameters are chosen. The target parameters include x1 and x2, related to the raindrop size distribution, the cloud-rain correlation coefficient, the surface droplet number, and the droplet taper height. The fitness function is based on skill scores, namely BIAS and the Critical Success Index (CSI). An interface between the UM and Micro-GA is developed and applied to three precipitation cases in Korea. The cases are (ⅰ) heavy rainfall in the southern area because of typhoon NAKRI, (ⅱ) heavy rainfall in the Youngdong area, and (ⅲ) heavy rainfall in the Seoul metropolitan area. When the optimized result is compared to the control result (using the UM default values, CNTL), the optimized result leads to improvements in the precipitation forecast, especially for heavy rainfall at late forecast times. We also analyze the skill scores of the precipitation forecasts for various thresholds for CNTL, the optimized result, and experiments in which each of the five parameters is optimized individually. Generally, the improvement is maximized when the five optimized parameters are used simultaneously. Therefore, this study demonstrates the ability to improve Korean precipitation forecasts by optimizing the microphysics in the UM.
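The two skill scores named in the fitness function are simple contingency-table quantities; the snippet below computes them from hit, miss, and false-alarm counts. How the study combines BIAS and CSI into a single fitness value is not specified in the abstract, so the example stops at the scores themselves.

```python
def skill_scores(hits, misses, false_alarms):
    """Contingency-table precipitation skill scores used in the fitness
    function: frequency BIAS and Critical Success Index (CSI)."""
    bias = (hits + false_alarms) / (hits + misses)
    csi = hits / (hits + misses + false_alarms)
    return bias, csi

# Example: forecast vs. observation counts above some rain threshold.
print(skill_scores(hits=42, misses=18, false_alarms=10))   # (~0.87, 0.60)
```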
Analysis of the effect on optical equipment caused by solar position in target flight measure
NASA Astrophysics Data System (ADS)
Zhu, Shun-hua; Hu, Hai-bin
2012-11-01
Optical equipment is widely used to measure flight parameters in target flight performance tests, but the equipment is sensitive to the sun's rays. In order to prevent the sun's rays from shining directly into the optical equipment's camera lens when measuring target flight parameters, the angle between the observation direction and the line connecting the camera lens and the sun should be kept large. This article introduces a method for calculating the solar azimuth and altitude relative to the optical equipment at any time and at any place on the Earth, a model of the equipment's observation direction, and a model for calculating the angle between the observation direction and the line connecting the camera lens and the sun. A simulation of the effect of solar position on the optical equipment at different times, dates, months, and target flight directions is also given.
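A minimal version of the solar azimuth/altitude calculation referred to above can be written with the standard declination and hour-angle formulas, as sketched below (no refraction or equation-of-time correction; the day, hour, and latitude are illustrative). The angle to the camera's observation direction would then follow from ordinary vector geometry.

```python
import numpy as np

def solar_position(day_of_year, solar_hour, lat_deg):
    """Approximate solar elevation and azimuth (degrees) from day of year,
    local solar time (hours) and latitude, using the standard declination /
    hour-angle formulas (no refraction or equation-of-time correction)."""
    decl = 23.44 * np.sin(np.radians(360.0 / 365.0 * (284 + day_of_year)))
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, dec, ha = map(np.radians, (lat_deg, decl, hour_angle))
    sin_el = np.sin(lat) * np.sin(dec) + np.cos(lat) * np.cos(dec) * np.cos(ha)
    el = np.arcsin(sin_el)
    cos_az = (np.sin(dec) - sin_el * np.sin(lat)) / (np.cos(el) * np.cos(lat))
    az = np.degrees(np.arccos(np.clip(cos_az, -1.0, 1.0)))
    if hour_angle > 0:                 # afternoon: azimuth measured west of north
        az = 360.0 - az
    return np.degrees(el), az

# Example: solar noon on day 172 (~summer solstice) at 40 deg N latitude.
print(solar_position(day_of_year=172, solar_hour=12.0, lat_deg=40.0))
```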
Rayleigh scattering of twisted light by hydrogenlike ions
NASA Astrophysics Data System (ADS)
Peshkov, A. A.; Volotka, A. V.; Surzhykov, A.; Fritzsche, S.
2018-02-01
The elastic Rayleigh scattering of twisted light and, in particular, the polarization (transfer) of the scattered photons have been analyzed within the framework of second-order perturbation theory and Dirac's relativistic equation. Special attention was paid to the scattering on three different atomic targets: single atoms, a mesoscopic (small) target, and a macroscopic (large) target, which are all centered with regard to the beam axis. Detailed calculations of the polarization Stokes parameters were performed for C5+ ions and for twisted Bessel beams. It is shown that the polarization of the scattered photons is sensitive to the size of the atomic target and to the helicity, the opening angle, and the projection of the total angular momentum of the incident Bessel beam. These computations moreover indicate that the Stokes parameters of the (Rayleigh) scattered twisted light may significantly differ from their behavior for incident plane-wave radiation.
Compressed Sensing in On-Grid MIMO Radar.
Minner, Michael F
2015-01-01
The accurate detection of targets is a significant problem in multiple-input multiple-output (MIMO) radar. Recent advances in Compressive Sensing offer a means of efficiently accomplishing this task. The sparsity constraints needed to apply the techniques of Compressive Sensing to problems in radar systems have led to discretizations of the target scene in various domains, such as azimuth, time delay, and Doppler. Building upon recent work, we investigate the feasibility of on-grid Compressive Sensing-based MIMO radar via a threefold azimuth-delay-Doppler discretization for target detection and parameter estimation. We utilize a colocated random sensor array and transmit distinct linear chirps to a small scene with few, slowly moving targets. Relying upon standard far-field and narrowband assumptions, we analyze the efficacy of various recovery algorithms in determining the parameters of the scene through numerical simulations, with particular focus on the ℓ1-squared Nonnegative Regularization method.
Bacterial cells enhance laser driven ion acceleration
Dalui, Malay; Kundu, M.; Trivikram, T. Madhu; Rajeev, R.; Ray, Krishanu; Krishnamurthy, M.
2014-01-01
Intense laser-produced plasmas generate hot electrons, which in turn lead to ion acceleration. The ability to generate faster ions or hotter electrons with the same laser parameters is one of the main outstanding challenges in intense laser-plasma physics. Here, we present a simple, albeit unconventional, target that succeeds in generating 700 keV carbon ions where conventional targets for the same laser parameters generate at most 40 keV. A few layers of micron-sized bacteria coated on a polished surface increase the laser energy coupling and generate a hotter plasma, which is more effective for ion acceleration compared to conventional polished targets. Particle-in-cell simulations show that micro-particle-coated targets are much more effective in ion acceleration, as seen in the experiment. We envisage that the accelerated, high-energy carbon ions can be used as a source for multiple applications. PMID:25102948
A Track Initiation Method for the Underwater Target Tracking Environment
NASA Astrophysics Data System (ADS)
Li, Dong-dong; Lin, Yang; Zhang, Yao
2018-04-01
A novel, efficient track initiation method is proposed for the harsh underwater target tracking environment (heavy clutter and large measurement errors): the track splitting, evaluating, pruning and merging method (TSEPM). Track initiation demands that the method determine the existence and initial state of a target quickly and correctly. Heavy clutter and large measurement errors pose additional difficulties and challenges, which deteriorate and complicate track initiation in the harsh underwater target tracking environment. Current track initiation methods have three primary shortcomings when initializing a target: (a) they cannot effectively eliminate the disturbances of clutter; (b) they may exhibit a high false alarm probability and low detection probability of a track; and (c) they cannot correctly estimate the initial state for a newly confirmed track. Based on the multiple hypotheses tracking principle and a modified logic-based track initiation method, track splitting creates a large number of tracks, including the true track originated from the target, in order to increase the detection probability of a track. In order to decrease the false alarm probability, track pruning and track merging, based on an evaluation mechanism, are proposed to reduce the false tracks. The TSEPM method can deal with the track initiation problems arising from heavy clutter and large measurement errors, determine the target's existence, and estimate its initial state with the least squares method (see the sketch below). What's more, the method is fully automatic and does not require any kind of manual input for initializing or tuning any parameter. Simulation results indicate that the new method significantly improves the performance of track initiation in the harsh underwater target tracking environment.
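As an illustration of the final step, the sketch below estimates a confirmed track's initial position and velocity by least squares under an assumed constant-velocity model; the names and the 2-D setup are illustrative rather than taken from TSEPM.

```python
# Minimal sketch: least-squares initial-state estimate for a confirmed track from
# a short sequence of noisy 2-D position measurements (constant-velocity model).
import numpy as np

def initial_state_ls(times, positions):
    """times: (k,), positions: (k, 2). Returns [x0, y0, vx, vy] at times[0]."""
    t = np.asarray(times, dtype=float) - times[0]
    A = np.column_stack([np.ones_like(t), t])        # position = x0 + v * t
    coeff, *_ = np.linalg.lstsq(A, np.asarray(positions, dtype=float), rcond=None)
    x0, v = coeff[0], coeff[1]
    return np.concatenate([x0, v])

state = initial_state_ls([0.0, 1.0, 2.0, 3.0],
                         [[10.2, 5.1], [12.1, 6.0], [13.8, 7.2], [16.1, 8.0]])
print(state)   # approximately [x0, y0, vx, vy]
```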
Stock assessment of fishery target species in Lake Koka, Ethiopia.
Tesfaye, Gashaw; Wolff, Matthias
2015-09-01
Effective management is essential for small-scale fisheries to continue providing food and livelihoods for households, particularly in developing countries where other options are often limited. Studies on the population dynamics and stock assessment of fishery target species are thus imperative to sustain their fisheries and the benefits for society. In Lake Koka (Ethiopia), very little is known about the vital population parameters and exploitation status of the fishery target species: tilapia Oreochromis niloticus, common carp Cyprinus carpio and catfish Clarias gariepinus. Our study, therefore, aimed at determining the vital population parameters and assessing the status of these target species in Lake Koka using length frequency data collected quarterly from commercial catches from 2007-2012. A total of 20,097 fish specimens (7,933 tilapia, 6,025 catfish and 6,139 common carp) were measured for the analysis. Von Bertalanffy growth parameters and their confidence intervals were determined from modal progression analysis using ELEFAN I and applying the jackknife technique. Mortality parameters were determined from length-converted catch curves and empirical models. The exploitation status of these target species was then assessed by computing exploitation rates (E) from mortality parameters as well as from size indicators, i.e., assessing the size distribution of fish catches relative to the size at maturity (Lm), the size that provides maximum cohort biomass (Lopt) and the abundance of mega-spawners. The mean values of the growth parameters L∞, K and the growth performance index ø' were 44.5 cm, 0.41/year and 2.90 for O. niloticus, 74.1 cm, 0.28/year and 3.19 for C. carpio and 121.9 cm, 0.16/year and 3.36 for C. gariepinus, respectively. The 95% confidence intervals of the estimates were also computed. Total mortality (Z) estimates were 1.47, 0.83 and 0.72/year for O. niloticus, C. carpio and C. gariepinus, respectively. Our study suggests that O. niloticus is in a healthy state, while C. gariepinus shows signs of growth overfishing (when both the exploitation rate (E) and the size indicators are considered). In the case of C. carpio, the low exploitation rate encountered would point to underfishing, while the size indicators of the catches would suggest that too small fish are harvested, leading to growth overfishing. We conclude that fisheries production in Lake Koka could be enhanced by increasing E toward the optimum level of exploitation (Eopt) for the underexploited C. carpio and by increasing the size at first capture (Lc) toward the Lopt range for all target species.
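For reference, the standard relations behind the reported quantities are, in the usual notation (this recapitulation is not reproduced from the paper):

```latex
% von Bertalanffy growth, growth performance index, and exploitation rate.
L(t) = L_{\infty}\bigl(1 - e^{-K\,(t - t_0)}\bigr), \qquad
\phi' = \log_{10} K + 2\,\log_{10} L_{\infty}, \qquad
E = \frac{F}{Z} = \frac{Z - M}{Z}
```

With these definitions, the reported L∞ = 44.5 cm and K = 0.41/year for O. niloticus give ø' = log10(0.41) + 2 log10(44.5) ≈ 2.9, matching the quoted value; E ≈ 0.5 is the commonly used benchmark for optimal exploitation.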
NASA Astrophysics Data System (ADS)
Underhill, P. R.; Uemura, C.; Krause, T. W.
2018-04-01
Fatigue cracks are prone to develop around fasteners found in multi-layer aluminum structures on aging aircraft. Bolt hole eddy current (BHEC) is used for detection of cracks from within bolt holes after fastener removal. In support of qualification towards a target a90/95 (detect 90% of cracks of depth a, 95% of the time) of 0.76 mm (0.030"), a preliminary probability of detection (POD) study was performed to identify those parameters whose variation may keep a bolt hole inspection from attaining its goal. Parameters that were examined included variability in lift-off due to probe type, out-of-round holes, holes with diameters too large to permit surface contact of the probe, and mechanical damage to the holes, including burrs. The study examined the POD for BHEC of corner cracks in unfinished fastener holes extracted from service material. Sixty-eight EDM notches were introduced into two specimens of a horizontal stabilizer from a CC-130 Hercules aircraft. The fastener holes were inspected in the unfinished state, simulating potential inspection conditions, by seven certified inspectors using a manual BHEC setup with an impedance plane display, with one additional inspection conducted using an automated BHEC C-scan apparatus. While the standard detection limit of 1.27 mm (0.050") was achieved, the a90/95 obtained was 0.97 mm (0.039"), so the target of 0.76 mm (0.030") was not achieved. The work highlighted a number of areas where there was insufficient information to complete the qualification. Consequently, a number of recommendations were made. These included: development of a specification for minimum probe requirements; criteria for the condition of the hole to be inspected, including out-of-roundness and the presence of corrosion pits; a statement of the range of hole sizes; and inspection frequency and data display for analysis.
Environmental and Occupational Pesticide Exposure and Human Sperm Parameters: A Systematic Review
Martenies, Sheena E.; Perry, Melissa J.
2013-01-01
Of continuing concern are the associations between environmental or occupational exposures to pesticides and semen quality parameters. Prior research has indicated that there may be associations between exposure to pesticides of a variety of classes and decreased sperm health. The intent of this review was to summarize the most recent evidence related to pesticide exposures and commonly used semen quality parameters, including concentration, motility and morphology. The recent literature was searched for studies published between January, 2007 and August, 2012 that focused on environmental or occupational pesticide exposures. Included in the review are 17 studies, 15 of which reported significant associations between exposure to pesticides and semen quality indicators. Two studies also investigated the roles genetic polymorphisms may play in the strength or directions of these associations. Specific pesticides targeted for study included dichlorodiphenyltrichloroethane (DDT), hexachlorocyclohexane (HCH), and abamectin. Pyrethroids and organophosphates were analyzed as classes of pesticides rather than as individual compounds, primarily due to the limitations of exposure assessment techniques. Overall, a majority of the studies reported significant associations between pesticide exposure and sperm parameters. A decrease in sperm concentration was the most commonly reported finding among all of the pesticide classes investigated. Decreased motility was also associated with exposures to each of the pesticide classes, although these findings were less frequent across studies. An association between pesticide exposure and sperm morphology was less clear, with only two studies reporting an association. The evidence presented in this review continues to support the hypothesis that exposures to pesticides at environmentally or occupationally relevant levels may be associated with decreased sperm health. Future work in this area should focus on associations between specific pesticides or metabolic products and sperm quality parameters. Analysis of effects of varying genetic characteristics, especially in genes related to pesticide metabolism, also needs further attention. PMID:23438386
Wearable sensors objectively measure gait parameters in Parkinson’s disease
Marxreiter, Franz; Gossler, Julia; Kohl, Zacharias; Reinfelder, Samuel; Gassner, Heiko; Aminian, Kamiar; Eskofier, Bjoern M.; Winkler, Jürgen; Klucken, Jochen
2017-01-01
Distinct gait characteristics like short steps and shuffling gait are prototypical signs commonly observed in Parkinson's disease. Routinely assessed by clinicians through observation, gait is rated as part of categorical clinical scores. There is an increasing need for quantitative measurements of gait, e.g. to provide detailed information about disease progression. Recently, we developed a wearable sensor-based gait analysis system as a diagnostic tool that objectively assesses gait parameters in Parkinson's disease without the need for a specialized gait laboratory. This system consists of inertial sensor units attached laterally to both shoes. The computed measures are spatiotemporal gait parameters, including stride length and time, stance phase time, heel-strike and toe-off angle, toe clearance, and inter-stride variation from gait sequences. To translate this prototype into medical care, we conducted a cross-sectional study including 190 Parkinson's disease patients and 101 age-matched controls and measured gait characteristics during a 4x10 meter walk at the subjects' preferred speed. To determine intraindividual changes in gait, we monitored the gait characteristics of 63 patients longitudinally. Cross-sectional analysis revealed distinct spatiotemporal gait parameter differences reflecting typical Parkinson's disease gait characteristics, including short steps, shuffling gait, and postural instability, specific for different disease stages and levels of motor impairment. The longitudinal analysis revealed that the gait parameters were sensitive to change, mirroring the progressive nature of Parkinson's disease, and corresponded to physician ratings. Taken together, we show that wearable sensor-based gait analysis reaches clinical applicability, providing high biomechanical resolution for gait impairment in Parkinson's disease. These data demonstrate the feasibility and applicability of objective wearable sensor-based gait measurement in Parkinson's disease, reaching high technological readiness levels for both large-scale clinical studies and individual patient care. PMID:29020012
Gerdes, Lars; Iwobi, Azuka; Busch, Ulrich; Pecoraro, Sven
2016-01-01
Digital PCR in droplets (ddPCR) is an emerging method for more and more applications in DNA (and RNA) analysis. Special requirements when establishing ddPCR for analysis of genetically modified organisms (GMO) in a laboratory include the choice between validated official qPCR methods and the optimization of these assays for a ddPCR format. Differentiation between droplets with positive reaction and negative droplets, that is setting of an appropriate threshold, can be crucial for a correct measurement. This holds true in particular when independent transgene and plant-specific reference gene copy numbers have to be combined to determine the content of GM material in a sample. Droplets which show fluorescent units ranging between those of explicit positive and negative droplets are called ‘rain’. Signals of such droplets can hinder analysis and the correct setting of a threshold. In this manuscript, a computer-based algorithm has been carefully designed to evaluate assay performance and facilitate objective criteria for assay optimization. Optimized assays in return minimize the impact of rain on ddPCR analysis. We developed an Excel based ‘experience matrix’ that reflects the assay parameters of GMO ddPCR tests performed in our laboratory. Parameters considered include singleplex/duplex ddPCR, assay volume, thermal cycler, probe manufacturer, oligonucleotide concentration, annealing/elongation temperature, and a droplet separation evaluation. We additionally propose an objective droplet separation value which is based on both absolute fluorescence signal distance of positive and negative droplet populations and the variation within these droplet populations. The proposed performance classification in the experience matrix can be used for a rating of different assays for the same GMO target, thus enabling employment of the best suited assay parameters. Main optimization parameters include annealing/extension temperature and oligonucleotide concentrations. The droplet separation value allows for easy and reproducible assay performance evaluation. The combination of separation value with the experience matrix simplifies the choice of adequate assay parameters for a given GMO event. PMID:27077048
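One plausible way to formalize the proposed separation value, combining the absolute distance between the positive and negative droplet populations with their internal variation, is sketched below; the exact formula and threshold handling used by the authors may differ.

```python
# Hedged sketch of a droplet separation metric: population gap divided by the
# combined spread of the positive and negative droplet populations.
import numpy as np

def separation_value(fluorescence, threshold):
    f = np.asarray(fluorescence, dtype=float)
    pos, neg = f[f > threshold], f[f <= threshold]
    gap = pos.mean() - neg.mean()                  # distance between populations
    spread = pos.std(ddof=1) + neg.std(ddof=1)     # variation within populations
    return gap / spread

rng = np.random.default_rng(0)
droplets = np.concatenate([rng.normal(2000, 150, 5000),   # negative droplets
                           rng.normal(9000, 300, 1500)])  # positive droplets
print(round(separation_value(droplets, threshold=5000), 1))
```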
Gerdes, Lars; Iwobi, Azuka; Busch, Ulrich; Pecoraro, Sven
2016-03-01
Digital PCR in droplets (ddPCR) is an emerging method for more and more applications in DNA (and RNA) analysis. Special requirements when establishing ddPCR for analysis of genetically modified organisms (GMO) in a laboratory include the choice between validated official qPCR methods and the optimization of these assays for a ddPCR format. Differentiation between droplets with positive reaction and negative droplets, that is setting of an appropriate threshold, can be crucial for a correct measurement. This holds true in particular when independent transgene and plant-specific reference gene copy numbers have to be combined to determine the content of GM material in a sample. Droplets which show fluorescent units ranging between those of explicit positive and negative droplets are called 'rain'. Signals of such droplets can hinder analysis and the correct setting of a threshold. In this manuscript, a computer-based algorithm has been carefully designed to evaluate assay performance and facilitate objective criteria for assay optimization. Optimized assays in return minimize the impact of rain on ddPCR analysis. We developed an Excel based 'experience matrix' that reflects the assay parameters of GMO ddPCR tests performed in our laboratory. Parameters considered include singleplex/duplex ddPCR, assay volume, thermal cycler, probe manufacturer, oligonucleotide concentration, annealing/elongation temperature, and a droplet separation evaluation. We additionally propose an objective droplet separation value which is based on both absolute fluorescence signal distance of positive and negative droplet populations and the variation within these droplet populations. The proposed performance classification in the experience matrix can be used for a rating of different assays for the same GMO target, thus enabling employment of the best suited assay parameters. Main optimization parameters include annealing/extension temperature and oligonucleotide concentrations. The droplet separation value allows for easy and reproducible assay performance evaluation. The combination of separation value with the experience matrix simplifies the choice of adequate assay parameters for a given GMO event.
Locally Bayesian Learning with Applications to Retrospective Revaluation and Highlighting
ERIC Educational Resources Information Center
Kruschke, John K.
2006-01-01
A scheme is described for locally Bayesian parameter updating in models structured as successions of component functions. The essential idea is to back-propagate the target data to interior modules, such that an interior component's target is the input to the next component that maximizes the probability of the next component's target. Each layer…
Abedi, Ebrahim; Ebrahimkhani, Marzieh; Davari, Amin; Mirvakili, Seyed Mohammad; Tabasi, Mohsen; Maragheh, Mohammad Ghannadi
2016-12-01
Efficient and safe production of the molybdenum-99 (99Mo) radiopharmaceutical at the Tehran Research Reactor (TRR) via fission of LEU targets is studied. Neutronic calculations are performed to evaluate the produced 99Mo activity, the core neutronic safety parameters and the power deposition in the target plates during a 7-day irradiation interval. A thermal-hydraulic analysis has also been carried out to obtain the thermal behavior of these plates. From the thermal-hydraulic analysis, it can be concluded that the safety parameters are satisfied in the current study. Consequently, the present neutronic and thermal-hydraulic calculations show that efficient 99Mo production at significant activity values is achievable in the current TRR core configuration. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Torrisi, Lorenzo
2018-01-01
Measurements of ion acceleration in plasmas produced by fs lasers at intensities of the order of 10¹⁸ W/cm² have been performed in different European laboratories. The forward emission in the target-normal-sheath-acceleration (TNSA) regime indicates that the maximum energy is a function of the laser parameters, the irradiation conditions and the target properties. In particular, the laser intensity and contrast play an important role in maximizing the ion acceleration and enhancing the conversion efficiency. The use of suitable prepulses, focal distances and polarized laser light also plays an important role. Finally, the target composition, surface, geometry and multilayered structure permit enhancement of the electric field driving the forward ion acceleration. Experimental measurements are reported and discussed.
Ultrasound-guided three-dimensional needle steering in biological tissue with curved surfaces
Abayazid, Momen; Moreira, Pedro; Shahriari, Navid; Patil, Sachin; Alterovitz, Ron; Misra, Sarthak
2015-01-01
In this paper, we present a system capable of automatically steering a bevel-tipped flexible needle under ultrasound guidance toward a physical target while avoiding a physical obstacle embedded in gelatin phantoms and biological tissue with curved surfaces. An ultrasound pre-operative scan is performed for three-dimensional (3D) target localization and shape reconstruction. A controller based on implicit force control is developed to align the transducer with curved surfaces to ensure maximum contact area, and thus obtain an image of sufficient quality. We experimentally investigate the effect of needle insertion system parameters such as insertion speed, needle diameter and bevel angle on target motion, in order to select parameter values that minimize target motion during insertion. A fast sampling-based path planner is used to compute and periodically update a feasible path to the target that avoids obstacles. We present experimental results for target reconstruction and needle insertion procedures in gelatin-based phantoms and biological tissue. Mean targeting errors of 1.46 ± 0.37 mm, 1.29 ± 0.29 mm and 1.82 ± 0.58 mm are obtained for phantoms with inclined, curved and combined (inclined and curved) surfaces, respectively, for insertion distances of 86–103 mm. The achieved targeting errors suggest that our approach is sufficient for targeting lesions of 3 mm radius that can be detected using clinical ultrasound imaging systems. PMID:25455165
NASA Astrophysics Data System (ADS)
Bishop, Steven S.; Moore, Timothy R.; Gugino, Peter; Smith, Brett; Kirkwood, Kathryn P.; Korman, Murray S.
2018-04-01
High Bandwidth Acoustic Detection System (HBADS) is an emerging active acoustic sensor technology undergoing study by the US Army's Night Vision and Electronic Sensors Directorate. Mounted on a commercial all-terrain type vehicle, it uses a single source pulse chirp while moving and a new array (two rows each containing eight microphones) mounted horizontally and oriented in a side-scan mode. Experiments are performed with this synthetic aperture air acoustic (SAA) array to image canonical ground targets in clutter or foliage. A commercial audio speaker transmits a linear FM chirp having an effective frequency range of 2 kHz to 15 kHz. The system includes an inertial navigation system using two differential GPS antennas, an inertial measurement unit and a wheel encoder. A web camera is mounted midway between the two horizontal microphone arrays, and a meteorological unit acquires ambient temperature, pressure and humidity information. A data acquisition system, controlled by a laptop computer, is central to the system's operation. Recent experiments include imaging canonical targets located on the ground in a grassy field and similar targets camouflaged by natural vegetation along the side of a road. A recent modification involves implementing SAA stripmap mode interferometry for computing the reflectance of targets placed along the ground. Typical stripmap SAA parameters are chirp pulse = 10 or 40 ms, slant range resolution c/(2*BW) = 0.013 m, microphone diameter D = 0.022 m, azimuthal resolution (D/2) = 0.01, air sound speed c ≈ 340 m/s and maximum vehicle speed ≈ 2 m/s (see the quick check below).
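The quoted resolution figures follow directly from the chirp bandwidth and the microphone aperture; a quick, illustrative check:

```python
# Illustrative check of the stripmap resolution figures quoted above.
c = 340.0                 # m/s, air sound speed
bandwidth = 15e3 - 2e3    # Hz, chirp bandwidth (2-15 kHz)
D = 0.022                 # m, microphone diameter
print(c / (2 * bandwidth))   # slant-range resolution ~= 0.013 m
print(D / 2)                 # stripmap azimuthal resolution ~= 0.011 m
```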
A system study for the application of microcomputers to research flight test techniques
NASA Technical Reports Server (NTRS)
Smyth, R. K.
1983-01-01
The onboard simulator is a three-degree-of-freedom aircraft behavior simulator which provides parameters used by the interception procedure. These parameters can be used for verifying closed-loop performance before flight. The air-to-air intercept mode is a software package integrated in the simulation process that generates a target motion and performs a tracking procedure that predicts the most likely next target position for a defined time step. This procedure also updates relative position parameters and gives appropriate fire commands. A microcomputer-based aircraft spin warning system periodically samples the asymmetric thrust and yaw rate of an airplane and then issues voice-synthesized warnings and/or suggests to the pilot how to respond to the situation.
Relativistic effects in electron impact ionization from the p-orbital
NASA Astrophysics Data System (ADS)
Haque, A. K. F.; Uddin, M. A.; Basak, A. K.; Karim, K. R.; Saha, B. C.; Malik, F. B.
2006-06-01
The parameters of our recent modification of BELI formula (MBELL) [A.K.F. Haque, M.A. Uddin, A.K. Basak, K.R. Karim, B.C. Saha, Phys. Rev. A 73 (2006) 012708] are generalized in terms of the orbital quantum numbers nl to evaluate the electron impact ionization (EII) cross sections of a wide range of isoelectronic targets (H to Ne series) and incident energies. For both the open and closed p-shell targets, the present MBELL results with a single parameter set, agree nicely with the experimental cross sections. The relativistic effect of ionization in the 2p subshell of U82+ for incident energies up to 250 MeV is well accounted for by the prescribed parameters of the model.
Stretchable Materials for Robust Soft Actuators towards Assistive Wearable Devices
NASA Astrophysics Data System (ADS)
Agarwal, Gunjan; Besuchet, Nicolas; Audergon, Basile; Paik, Jamie
2016-09-01
Soft actuators made from elastomeric active materials can find widespread potential implementation in a variety of applications ranging from assistive wearable technologies targeted at biomedical rehabilitation or assistance with activities of daily living, bioinspired and biomimetic systems, to gripping and manipulating fragile objects, and adaptable locomotion. In this manuscript, we propose a novel two-component soft actuator design and design tool that produces actuators targeted towards these applications with enhanced mechanical performance and manufacturability. Our numerical models developed using the finite element method can predict the actuator behavior at large mechanical strains to allow efficient design iterations for system optimization. Based on two distinctive actuator prototypes’ (linear and bending actuators) experimental results that include free displacement and blocked-forces, we have validated the efficacy of the numerical models. The presented extensive investigation of mechanical performance for soft actuators with varying geometric parameters demonstrates the practical application of the design tool, and the robustness of the actuator hardware design, towards diverse soft robotic systems for a wide set of assistive wearable technologies, including replicating the motion of several parts of the human body.
The next generation in aircraft protection against advanced MANPADS
NASA Astrophysics Data System (ADS)
Chapman, Stuart
2014-10-01
This paper discusses the advanced and novel technologies and underlying systems capabilities that Selex ES has applied during the development, test and evaluation of the twin head Miysis DIRCM System in order to ensure that it provides the requisite levels of protection against the latest, sophisticated all-aspect IR MANPADS. The importance of key performance parameters, including the fundamental need for "spherical" coverage, rapid time to energy-on-target, laser tracking performance and radiant intensity on seeker dome is covered. It also addresses the approach necessary to ensure that the equipment is suited to all air platforms from the smallest helicopters to large transports, while also ensuring that it achieves an inherent high reliability and an ease of manufacture and repair such that a step change in through-life cost in comparison to previous generation systems can be achieved. The benefits and issues associated with open architecture design are also considered. Finally, the need for extensive test and evaluation at every stage, including simulation, laboratory testing, platform and target dynamic testing in a System Integration Laboratory (SIL), flight trial, missile live-fire, environmental testing and reliability testing is also described.
Stretchable Materials for Robust Soft Actuators towards Assistive Wearable Devices
Agarwal, Gunjan; Besuchet, Nicolas; Audergon, Basile; Paik, Jamie
2016-01-01
Soft actuators made from elastomeric active materials can find widespread potential implementation in a variety of applications ranging from assistive wearable technologies targeted at biomedical rehabilitation or assistance with activities of daily living, bioinspired and biomimetic systems, to gripping and manipulating fragile objects, and adaptable locomotion. In this manuscript, we propose a novel two-component soft actuator design and design tool that produces actuators targeted towards these applications with enhanced mechanical performance and manufacturability. Our numerical models developed using the finite element method can predict the actuator behavior at large mechanical strains to allow efficient design iterations for system optimization. Based on two distinctive actuator prototypes’ (linear and bending actuators) experimental results that include free displacement and blocked-forces, we have validated the efficacy of the numerical models. The presented extensive investigation of mechanical performance for soft actuators with varying geometric parameters demonstrates the practical application of the design tool, and the robustness of the actuator hardware design, towards diverse soft robotic systems for a wide set of assistive wearable technologies, including replicating the motion of several parts of the human body. PMID:27670953
A Parametric Study on Using Active Debris Removal for LEO Environment Remediation
NASA Technical Reports Server (NTRS)
2010-01-01
Recent analyses on the instability of the orbital debris population in the low Earth orbit (LEO) region and the collision between Iridium 33 and Cosmos 2251 have reignited the interest in using active debris removal (ADR) to remediate the environment. There are, however, monumental technical, resource, operational, legal, and political challenges in making economically viable ADR a reality. Before a consensus on the need for ADR can be reached, a careful analysis of its effectiveness must be conducted. The goal is to demonstrate the need and feasibility of using ADR to better preserve the future environment and to guide its implementation to maximize the benefit-to-cost ratio. This paper describes a new sensitivity study on using ADR to stabilize the future LEO debris environment. The NASA long-term orbital debris evolutionary model, LEGEND, is used to quantify the effects of several key parameters, including target selection criteria/constraints and the starting epoch of ADR implementation. Additional analyses on potential ADR targets among the currently existing satellites and the benefits of collision avoidance maneuvers are also included.
Stretchable Materials for Robust Soft Actuators towards Assistive Wearable Devices.
Agarwal, Gunjan; Besuchet, Nicolas; Audergon, Basile; Paik, Jamie
2016-09-27
Soft actuators made from elastomeric active materials can find widespread potential implementation in a variety of applications ranging from assistive wearable technologies targeted at biomedical rehabilitation or assistance with activities of daily living, bioinspired and biomimetic systems, to gripping and manipulating fragile objects, and adaptable locomotion. In this manuscript, we propose a novel two-component soft actuator design and design tool that produces actuators targeted towards these applications with enhanced mechanical performance and manufacturability. Our numerical models developed using the finite element method can predict the actuator behavior at large mechanical strains to allow efficient design iterations for system optimization. Based on two distinctive actuator prototypes' (linear and bending actuators) experimental results that include free displacement and blocked-forces, we have validated the efficacy of the numerical models. The presented extensive investigation of mechanical performance for soft actuators with varying geometric parameters demonstrates the practical application of the design tool, and the robustness of the actuator hardware design, towards diverse soft robotic systems for a wide set of assistive wearable technologies, including replicating the motion of several parts of the human body.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirkpatrick, R. C.
Nuclear fusion was discovered experimentally in 1933-34 and other charged particle nuclear reactions were documented shortly thereafter. Work in earnest on the fusion ignition problem began with Edward Teller's group at Los Alamos during the war years. His group quantified all the important basic atomic and nuclear processes and summarized their interactions. A few years later, the success of the early theory developed at Los Alamos led to very successful thermonuclear weapons, but also to decades of unsuccessful attempts to harness fusion as an energy source of the future. The reasons for this history are many, but it seems appropriate to review some of the basics with the objective of identifying what is essential for success and what is not. This tutorial discusses only the conditions required for ignition in small fusion targets and how the target design impacts driver requirements. Generally speaking, the driver must meet the energy, power and power density requirements needed by the fusion target. The most relevant parameters for ignition of the fusion fuel are the minimum temperature and areal density (ρR), but these parameters set secondary conditions that must be achieved, namely an implosion velocity, target size and pressure, which are interrelated. Despite the apparent simplicity of inertial fusion targets, there is not a single mode of fusion ignition, and the necessary combination of minimum temperature and areal density depends on the mode of ignition. However, by providing a magnetic field of sufficient strength, the conditions needed for fusion ignition can be drastically altered. Magnetized target fusion potentially opens up a vast parameter space between the extremes of magnetic and inertial fusion.
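For orientation, commonly quoted textbook hot-spot ignition conditions for DT fuel are of the order shown below; these numbers are not taken from the tutorial itself, and magnetizing the fuel (as in magnetized target fusion) relaxes the areal-density requirement.

```latex
% Approximate textbook hot-spot ignition conditions for DT fuel (assumed values).
\rho R \;\gtrsim\; 0.3\ \mathrm{g\,cm^{-2}}, \qquad T \;\gtrsim\; 5\text{--}10\ \mathrm{keV}
```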
A review of international pharmacy-based minor ailment services and proposed service design model.
Aly, Mariyam; García-Cárdenas, Victoria; Williams, Kylie; Benrimoj, Shalom I
2018-01-05
The need to consider sustainable healthcare solutions is essential. An innovative strategy used to promote minor ailment care is the utilisation of community pharmacists to deliver minor ailment services (MASs). Promoting higher levels of self-care can potentially reduce the strain on existing resources. To explore the features of international MASs, including their similarities and differences, and consider the essential elements to design a MAS model. A grey literature search strategy was completed in June 2017 to comply with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses standard. This included (1) Google/Yahoo! search engines, (2) targeted websites, and (3) contact with commissioning organisations. Executive summaries, table of contents and title pages of documents were reviewed. Key characteristics of MASs were extracted and a MAS model was developed. A total of 147 publications were included in the review. Key service elements identified included eligibility, accessibility, staff involvement, reimbursement systems. Several factors need to be considered when designing a MAS model; including contextualisation of MAS to the market. Stakeholder engagement, service planning, governance, implementation and review have emerged as key aspects involved with a design model. MASs differ in their structural parameters. Consideration of these parameters is necessary when devising MAS aims and assessing outcomes to promote sustainability and success of the service. Copyright © 2018 Elsevier Inc. All rights reserved.
The Grid[Way] Job Template Manager, a tool for parameter sweeping
NASA Astrophysics Data System (ADS)
Lorca, Alejandro; Huedo, Eduardo; Llorente, Ignacio M.
2011-04-01
Parameter sweeping is a widely used algorithmic technique in computational science. It is especially suited for high-throughput computing, since the jobs evaluating the parameter space are loosely coupled or independent. A tool that integrates the modeling of a parameter study with the control of jobs in a distributed architecture is presented. The main task is to facilitate the creation and deletion of job templates, which are the elements describing the jobs to be run. Extra functionality relies upon the GridWay Metascheduler, acting as the middleware layer for job submission and control. It supports interesting features like multi-dimensional sweeping space, wildcarding of parameters, functional evaluation of ranges, value-skipping and automatic indexation of job templates. The use of this tool increases the reliability of the parameter sweep study thanks to the systematic bookkeeping of job templates and their respective job statuses. Furthermore, it simplifies the porting of the target application to the grid, reducing the required amount of time and effort.
Program summary:
Program title: Grid[Way] Job Template Manager (version 1.0)
Catalogue identifier: AEIE_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIE_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Apache license 2.0
No. of lines in distributed program, including test data, etc.: 3545
No. of bytes in distributed program, including test data, etc.: 126 879
Distribution format: tar.gz
Programming language: Perl 5.8.5 and above
Computer: Any (tested on PC x86 and x86_64)
Operating system: Unix, GNU/Linux (tested on Ubuntu 9.04, Scientific Linux 4.7, CentOS 5.4), Mac OS X (tested on Snow Leopard 10.6)
RAM: 10 MB
Classification: 6.5
External routines: The GridWay Metascheduler [1].
Nature of problem: To parameterize and manage an application running on a grid or cluster.
Solution method: Generation of job templates as a cross product of the input parameter sets. Also management of the job template files, including job submission to the grid, control and information retrieval.
Restrictions: The parameter sweep is limited by disk space during generation of the job templates. The wild-carding of parameters cannot be done in decreasing order. Job submission, control and information is delegated to the GridWay Metascheduler.
Running time: From half a second in the simplest operation to a few minutes for thousands of exponential sampling parameters.
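The core of the solution method, generating one job template per point in the cross product of the parameter sets, can be sketched as follows; this is an illustrative re-expression in Python rather than the Perl tool itself, and the template keywords and file names are assumptions.

```python
# Sketch of cross-product job template generation (illustrative, not the tool's code).
from itertools import product

def generate_templates(executable, **param_sets):
    """Yield (filename, contents) pairs, one template per parameter combination."""
    names, value_lists = zip(*param_sets.items())
    for i, values in enumerate(product(*value_lists)):
        args = " ".join(f"--{n}={v}" for n, v in zip(names, values))
        contents = f"EXECUTABLE = {executable}\nARGUMENTS = {args}\n"
        yield f"job_{i:04d}.jt", contents

for fname, contents in generate_templates("simulate", beta=[0.1, 0.2], seed=[1, 2, 3]):
    print(fname)   # job_0000.jt ... job_0005.jt (2 x 3 = 6 templates)
```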
NASA Technical Reports Server (NTRS)
Park, Brooke Anderson; Wright, Henry
2012-01-01
PatCon code was developed to help mission designers run trade studies on launch and arrival times for any given planet. Initially developed in Fortran, the required inputs included launch date, arrival date, and other orbital parameters of the launch planet and arrival planets at the given dates. These parameters include the position of the planets, the eccentricity, semi-major axes, argument of periapsis, ascending node, and inclination of the planets. With these inputs, a patched conic approximation is used to determine the trajectory. The patched conic approximation divides the planetary mission into three parts: (1) the departure phase, in which the two relevant bodies are Earth and the spacecraft, and where the trajectory is a departure hyperbola with Earth at the focus; (2) the cruise phase, in which the two bodies are the Sun and the spacecraft, and where the trajectory is a transfer ellipse with the Sun at the focus; and (3) the arrival phase, in which the two bodies are the target planet and the spacecraft, where the trajectory is an arrival hyperbola with the planet as the focus.
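The cruise phase of the patched-conic approximation reduces, for coplanar circular planetary orbits, to a Hohmann-type transfer; the sketch below (with illustrative constants, not PatCon code) computes the departure and arrival hyperbolic excess speeds and the time of flight.

```python
# Patched-conic cruise phase sketch: Hohmann-type Earth-to-Mars transfer assuming
# coplanar circular planetary orbits (illustrative values, not PatCon itself).
import math

MU_SUN = 1.32712440018e11   # km^3/s^2
R_EARTH = 1.496e8           # km (1 AU)
R_MARS = 2.279e8            # km

def hohmann_patched_conic(r1, r2, mu=MU_SUN):
    """Return (departure v_infinity, arrival v_infinity, transfer time in days)."""
    a_t = 0.5 * (r1 + r2)                           # transfer-ellipse semi-major axis
    v1_circ, v2_circ = math.sqrt(mu / r1), math.sqrt(mu / r2)
    v1_t = math.sqrt(mu * (2.0 / r1 - 1.0 / a_t))   # vis-viva on the ellipse
    v2_t = math.sqrt(mu * (2.0 / r2 - 1.0 / a_t))
    tof_days = math.pi * math.sqrt(a_t**3 / mu) / 86400.0
    return v1_t - v1_circ, v2_circ - v2_t, tof_days

print(hohmann_patched_conic(R_EARTH, R_MARS))  # ~ (2.9 km/s, 2.6 km/s, ~259 days)
```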
Implantable biomedical devices on bioresorbable substrates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, John A.; Kim, Dae-Hyeong; Omenetto, Fiorenzo
Provided herein are implantable biomedical devices and methods of administering implantable biomedical devices, making implantable biomedical devices, and using implantable biomedical devices to actuate a target tissue or sense a parameter associated with the target tissue in a biological environment.
Kehrl, W; Sonnemann, U
2000-03-01
The aim of this study was to examine the efficacy and tolerability of the new combination of Xylometazoline with Dexpanthenol (Nasic) versus Xylometazoline alone, in a randomized, verum-controlled, parallel-group comparison of two weeks' treatment with a nasal spray. 61 inpatients with the diagnosis of rhinitis following nasal operation were included in this study, with approximately 30 patients treated in each of the two groups. The assessment of nasal breathing resistance according to scores was defined as the target parameter. Confirmatory statistical analysis was carried out using the two-sided Wilcoxon-Mann-Whitney U test at alpha ≤ 0.05. The superiority of the Xylometazoline-Dexpanthenol nasal spray over the Xylometazoline nasal spray was shown for the target parameter to be clinically relevant and statistically significant. The clinically proven efficacy is supported by the good tolerability of both treatments. Due to the easy handling of the nasal spray, good compliance was confirmed. The distinct improvement of symptoms in patients following nasal operations underlines the efficacy of both medications. With respect to tolerability, therapy with the combination is more beneficial than the alternative therapy. The result of this controlled clinical study confirms that the combination Xylometazoline-Dexpanthenol represents an extension and improvement of the effective medicinal treatment of rhinitis following nasal operation compared to therapy with Xylometazoline alone.
NASA Technical Reports Server (NTRS)
Tilley, roger; Dowla, Farid; Nekoogar, Faranak; Sadjadpour, Hamid
2012-01-01
Conventional use of Ground Penetrating Radar (GPR) is hampered by variations in background environmental conditions, such as the water content in soil, resulting in poor repeatability of results over long periods of time when the radar pulse characteristics are kept the same. Target object types might include voids, tunnels, unexploded ordnance, etc. The long-term objective of this work is to develop methods that would extend the use of GPR under various environmental and soil conditions, provided an optimal set of radar parameters (such as frequency, bandwidth, and sensor configuration) is adaptively employed based on the ground conditions. Towards that objective, developing Finite Difference Time Domain (FDTD) GPR models, verified by experimental results, would allow us to develop analytical and experimental techniques to control radar parameters to obtain consistent GPR images under changing ground conditions. Reported here is an attempt at developing 2D and 3D FDTD models of buried targets verified by two different radar systems capable of operating over different soil conditions. The experimental radar data employed were from a custom-designed high-frequency (200 MHz) multi-static sensor platform capable of producing 3-D images, and a longer-wavelength (25 MHz) COTS radar (Pulse EKKO 100) capable of producing 2-D images. Our results indicate that different types of radar can produce consistent images.
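To make the modeling approach concrete, a minimal one-dimensional FDTD update loop of the kind that underlies GPR forward models is sketched below (free space, normalized "magic" time step); it is an illustration of the method, not the models developed in this work.

```python
# Minimal 1D FDTD sketch on a staggered (Yee) grid in free space, normalized units.
import numpy as np

nz, nt = 400, 800
ez = np.zeros(nz)          # electric field
hy = np.zeros(nz - 1)      # magnetic field, staggered half a cell
imp0 = 377.0               # free-space impedance
src = nz // 4

for t in range(nt):
    hy += np.diff(ez) / imp0                   # update H from the curl of E
    ez[1:-1] += np.diff(hy) * imp0             # update E from the curl of H
    ez[src] += np.exp(-((t - 60) / 15.0)**2)   # additive Gaussian pulse source
```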
NASA Astrophysics Data System (ADS)
Giesbrecht, K. E.; Miller, L. A.; Davelaar, M.; Zimmermann, S.; Carmack, E.; Johnson, W. K.; Macdonald, R. W.; McLaughlin, F.; Mucci, A.; Williams, W. J.; Wong, C. S.; Yamamoto-Kawai, M.
2014-03-01
We have assembled and conducted primary quality control on previously publicly unavailable water column measurements of the dissolved inorganic carbon system and associated biogeochemical parameters (oxygen, nutrients, etc.) made on 26 cruises in the subarctic and Arctic regions dating back to 1974. The measurements are primarily from the western side of the Canadian Arctic, but also include data that cover an area ranging from the North Pacific to the Gulf of St. Lawrence. The data were subjected to primary quality control (QC) to identify outliers and obvious errors. This data set incorporates over four thousand individual measurements of total inorganic carbon (TIC), alkalinity, and pH from the Canadian Arctic over a period of more than 30 years and provides an opportunity to increase our understanding of temporal changes in the inorganic carbon system in northern waters and the Arctic Ocean. The data set is available for download on the CDIAC (Carbon Dioxide Information Analysis Center) website: http://cdiac.ornl.gov/ftp/oceans/IOS_Arctic_Database/ (doi:10.3334/CDIAC/OTG.IOS_ARCT_CARBN).
Third COS FUV Lifetime Position: FUV Target Acquisition Parameter Update {LENA3}
NASA Astrophysics Data System (ADS)
Penton, Steven
2013-10-01
Verify the ability of the Cycle 22 COS FSW to place an isolated point source at the center of the PSA, using FUV dispersed-light target acquisition (TA) of the object and all three FUV gratings at the Third Lifetime Position (LP3). This program is modeled on the activity summary of LENA3. This program should be executed after the LP3 HV, XD spectral positions, aperture mechanism position, and focus are determined and updated. In addition, initial estimates of the LIFETIME=ALTERNATE TA FSW parameters and subarrays should be updated prior to execution of this program. After Visit 01, the subarrays will be updated. After Visit 2, the FUV WCA-to-PSA offsets will be updated. Prior to Visit 6, LV56 will be installed, which will include new values for the LP3 FUV plate scales. Visit 6 exposures use the default lifetime position (LP3). NUV imaging TAs have previously been used to determine the correct locations for FUV spectra. We follow the same procedure here. Note that the ETC runs here were made using ETC 22.2 and are therefore valid for March 2014. Some TDS drop will likely have occurred before these visits execute, but we have plenty of counts to do what we need to do in this program.
One-dimensional MHD simulations of MTF systems with compact toroid targets and spherical liners
NASA Astrophysics Data System (ADS)
Khalzov, Ivan; Zindler, Ryan; Barsky, Sandra; Delage, Michael; Laberge, Michel
2017-10-01
One-dimensional (1D) MHD code is developed in General Fusion (GF) for coupled plasma-liner simulations in magnetized target fusion (MTF) systems. The main goal of these simulations is to search for optimal parameters of MTF reactor, in which spherical liquid metal liner compresses compact toroid plasma. The code uses Lagrangian description for both liner and plasma. The liner is represented as a set of spherical shells with fixed masses while plasma is discretized as a set of nested tori with circular cross sections and fixed number of particles between them. All physical fields are 1D functions of either spherical (liner) or small toroidal (plasma) radius. Motion of liner and plasma shells is calculated self-consistently based on applied forces and equations of state. Magnetic field is determined by 1D profiles of poloidal and toroidal fluxes - they are advected with shells and diffuse according to local resistivity, this also accounts for flux leakage into the liner. Different plasma transport models are implemented, this allows for comparison with ongoing GF experiments. Fusion power calculation is included into the code. We performed a series of parameter scans in order to establish the underlying dependencies of the MTF system and find the optimal reactor design point.
Estimation of laser beam pointing parameters in the presence of atmospheric turbulence.
Borah, Deva K; Voelz, David G
2007-08-10
The problem of estimating mechanical boresight and jitter performance of a laser pointing system in the presence of atmospheric turbulence is considered. A novel estimator based on maximizing an average probability density function (pdf) of the received signal is presented. The proposed estimator uses a Gaussian far-field mean irradiance profile, and the irradiance pdf is assumed to be lognormal. The estimates are obtained using a sequence of return signal values from the intended target. Alternatively, one can think of the estimates being made by a cooperative target using the received signal samples directly. The estimator does not require sample-to-sample atmospheric turbulence parameter information. The approach is evaluated using wave optics simulation for both weak and strong turbulence conditions. Our results show that very good boresight and jitter estimation performance can be obtained under the weak turbulence regime. We also propose a novel technique to include the effect of very low received intensity values that cannot be measured well by the receiving device. The proposed technique provides significant improvement over a conventional approach where such samples are simply ignored. Since our method is derived from the lognormal irradiance pdf, the performance under strong turbulence is degraded. However, the ideas can be extended with appropriate pdf models to obtain more accurate results under strong turbulence conditions.
Hierarchical modeling of bycatch rates of sea turtles in the western North Atlantic
Gardner, B.; Sullivan, P.J.; Epperly, S.; Morreale, S.J.
2008-01-01
Previous studies indicate that the locations of the endangered loggerhead Caretta caretta and critically endangered leatherback Dermochelys coriacea sea turtles are influenced by water temperatures, and that incidental catch rates in the pelagic longline fishery vary by region. We present a Bayesian hierarchical model to examine the effects of environmental variables, including water temperature, on the number of sea turtles captured in the US pelagic longline fishery in the western North Atlantic. The modeling structure is highly flexible, utilizes a Bayesian model selection technique, and is fully implemented in the software program WinBUGS. The number of sea turtles captured is modeled as a zero-inflated Poisson distribution and the model incorporates fixed effects to examine region-specific differences in the parameter estimates. Results indicate that water temperature, region, bottom depth, and target species are all significant predictors of the number of loggerhead sea turtles captured. For leatherback sea turtles, the model with only target species had the most posterior model weight, though a re-parameterization of the model indicates that temperature influences the zero-inflation parameter. The relationship between the number of sea turtles captured and the variables of interest all varied by region. This suggests that management decisions aimed at reducing sea turtle bycatch may be more effective if they are spatially explicit. ?? Inter-Research 2008.
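The observation model can be written as a zero-inflated Poisson likelihood; the sketch below evaluates that likelihood for a vector of catch counts, with pi and lam as the zero-inflation probability and Poisson mean (covariates such as water temperature would enter through these parameters in the full hierarchical model; the names are illustrative).

```python
# Zero-inflated Poisson log-likelihood sketch (illustrative, not the WinBUGS model).
import numpy as np
from scipy.special import gammaln

def zip_loglik(counts, pi, lam):
    y = np.asarray(counts, dtype=float)
    log_pois = -lam + y * np.log(lam) - gammaln(y + 1)     # Poisson log-pmf
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))         # structural or Poisson zero
    ll_pos = np.log(1 - pi) + log_pois                     # strictly positive counts
    return np.sum(np.where(y == 0, ll_zero, ll_pos))

print(zip_loglik([0, 0, 1, 0, 2, 0], pi=0.4, lam=0.8))
```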
Anatomical Parameters of tDCS to Modulate the Motor System after Stroke: A Review
Lefebvre, Stephanie; Liew, Sook-Lei
2017-01-01
Transcranial direct current stimulation (tDCS) is a non-invasive brain stimulation method to modulate the local field potential in neural tissue and consequently, cortical excitability. As tDCS is relatively portable, affordable, and accessible, the applications of tDCS to probe brain–behavior connections have rapidly increased in the last 10 years. One of the most promising applications is the use of tDCS to modulate excitability in the motor cortex after stroke and promote motor recovery. However, the results of clinical studies implementing tDCS to modulate motor excitability have been highly variable, with some studies demonstrating that as many as 50% or more of patients fail to show a response to stimulation. Much effort has therefore been dedicated to understand the sources of variability affecting tDCS efficacy. Possible suspects include the placement of the electrodes, task parameters during stimulation, dosing (current amplitude, duration of stimulation, frequency of stimulation), individual states (e.g., anxiety, motivation, attention), and more. In this review, we first briefly review potential sources of variability specific to stroke motor recovery following tDCS. We then examine how the anatomical variability in tDCS placement [e.g., neural target(s) and montages employed] may alter the neuromodulatory effects that tDCS exerts on the post-stroke motor system. PMID:28232816
Sweep Width Determination for HU-25B Airborne Radars: Life Raft and Recreational Boat Targets
1989-09-01
[Fragmentary excerpt from the report; recoverable content: detection performance was attributed to a single parameter rather than to multiple parameters of interest, so the effects of other parameters cannot be identified or quantified; some seas were rougher than those represented in the main body of data, averaging 9 to 10 feet (see section 1.3.6); Table 2-1, Number of Searcher/Target Interactions, 1987 experiment; Section 2.2, Detection Performance, Sections 2.2.1 through 2.2.3 present results for the AN/APS-127 FLAR, AN/APS-131 SLAR, and combined FLAR.]
NASA Astrophysics Data System (ADS)
Kouhpeima, A.; Feiznia, S.; Ahmadi, H.; Hashemi, S. A.; Zareiee, A. R.
2010-09-01
The targeting of sediment management strategies is a key requirement in developing countries, including Iran, because of the limited resources available. This targeting is, however, hampered by the lack of reliable information on catchment sediment sources. This paper reports the results of using a quantitative composite fingerprinting technique to estimate the relative importance of the primary potential sources within the Amrovan and Royan catchments in Semnan Province, Iran. Fifteen tracers were first selected and samples were analyzed in the laboratory for these parameters. Statistical methods were applied to the data, including the nonparametric Kruskal-Wallis test and discriminant function analysis (DFA). For the Amrovan catchment, three parameters (N, Cr and Co) were found not to be significant in making the discrimination. The optimum fingerprint, comprising OC, pH, kaolinite and K, was able to distinguish correctly 100% of the source material samples. For the Royan catchment, all 15 properties were able to distinguish between the six source types, and the optimum fingerprint provided by stepwise DFA (chlorite, XFD, N and C) correctly classifies 92.9% of the source material samples. The mean contributions from each sediment source, obtained with a multivariate mixing model (a sketch of such a model is given below), varied between the two catchments. For the Amrovan catchment, the Upper Red Formation is the main sediment source, supplying approximately 36% of the reservoir sediment, whereas the dominant sediment source for the Royan catchment is the Karaj Formation, which supplies 33% of the reservoir sediments. The results indicate that the source fingerprinting approach works well in the study catchments and generates reliable results.
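A multivariate mixing model of the kind referred to above can be posed as a constrained least-squares problem; the sketch below (with made-up tracer values) finds non-negative source proportions summing to one that best reproduce the mixture's tracer signature in a relative-error sense. The exact objective and weighting used in the study may differ.

```python
# Hedged sketch of a sediment-source unmixing model (illustrative data).
import numpy as np
from scipy.optimize import minimize

def unmix(source_means, mixture):
    """source_means: (n_sources, n_tracers); mixture: (n_tracers,). Returns proportions."""
    n = source_means.shape[0]
    def objective(p):
        pred = p @ source_means
        return np.sum(((mixture - pred) / mixture) ** 2)     # relative tracer errors
    cons = {"type": "eq", "fun": lambda p: p.sum() - 1.0}    # proportions sum to 1
    res = minimize(objective, np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n, constraints=cons)
    return res.x

sources = np.array([[12.0, 3.1, 40.0], [25.0, 1.2, 55.0], [8.0, 5.0, 20.0]])
print(unmix(sources, np.array([15.0, 3.0, 38.0])))   # estimated source proportions
```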
Inami, Takumi; Kataoka, Masaharu; Shimura, Nobuhiko; Ishiguro, Haruhisa; Yanagisawa, Ryoji; Taguchi, Hiroki; Fukuda, Keiichi; Yoshino, Hideaki; Satoh, Toru
2013-07-01
This study sought to identify useful predictors for hemodynamic improvement and risk of reperfusion pulmonary edema (RPE), a major complication of this procedure. Percutaneous transluminal pulmonary angioplasty (PTPA) has been reported to be effective for the treatment of chronic thromboembolic pulmonary hypertension (CTEPH). PTPA has not been widespread because RPE has not been well predicted. We included 140 consecutive procedures in 54 patients with CTEPH. The flow appearance of the target vessels was graded into 4 groups (Pulmonary Flow Grade), and we proposed PEPSI (Pulmonary Edema Predictive Scoring Index) = (sum total change of Pulmonary Flow Grade scores) × (baseline pulmonary vascular resistance). Correlations between occurrence of RPE and 11 variables, including hemodynamic parameters, number of target vessels, and PEPSI, were analyzed. Hemodynamic parameters significantly improved after median observation period of 6.4 months, and the sum total changes in Pulmonary Flow Grade scores were significantly correlated with the improvement in hemodynamics. Multivariate analysis revealed that PEPSI was the strongest factor correlated with the occurrence of RPE (p < 0.0001). Receiver-operating characteristic curve analysis demonstrated PEPSI to be a useful marker of the risk of RPE (cutoff value 35.4, negative predictive value 92.3%). Pulmonary Flow Grade score is useful in determining therapeutic efficacy, and PEPSI is highly supportive to reduce the risk of RPE after PTPA. Using these 2 indexes, PTPA could become a safe and common therapeutic strategy for CTEPH. Copyright © 2013 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
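The index is a direct product of two measured quantities, so it can be computed as below; the variable names, units and example numbers are illustrative, with the reported cutoff of 35.4 shown only for context.

```python
# PEPSI = (sum total change of Pulmonary Flow Grade scores) x (baseline PVR),
# as defined in the abstract; names, units and example values are illustrative.
def pepsi(flow_grades_before, flow_grades_after, baseline_pvr):
    delta = sum(after - before
                for before, after in zip(flow_grades_before, flow_grades_after))
    return delta * baseline_pvr

# Three target vessels each improving by one grade, baseline PVR of 10 (assumed units)
score = pepsi([1, 1, 2], [2, 2, 3], baseline_pvr=10.0)
print(score, score < 35.4)   # 30.0, below the reported cutoff of 35.4
```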
Ismail, Md.; Hossain, Md. Faruk; Tanu, Arifur Rahman
2015-01-01
Background and Objective. Oxidative stress is intimately associated with many diseases, including chronic obstructive pulmonary disease (COPD). The study objectives were to compare oxidative stress, antioxidant status, and lipid profile between COPD patients and controls, and to evaluate the effect of spirulina intervention on oxidative stress, antioxidant status, and lipid profile of COPD patients. Methods. 30 patients with COPD and 20 controls with no respiratory problems were selected. Global Initiative for Chronic Obstructive Lung Disease criteria served as the basis of COPD diagnosis. The serum content of malondialdehyde (MDA), lipid hydroperoxide, glutathione (GSH), vitamin C, cholesterol, triglyceride (TG), and high density lipoprotein (HDL) was measured. The activity of superoxide dismutase (SOD), catalase (CAT), and glutathione-S-transferase (GST) was also measured. Two different doses, (500 × 2) mg and (500 × 4) mg spirulina, were given to two groups, each comprising 15 COPD patients. Results. All targeted blood parameters differed significantly (P = 0.000) between COPD patients and controls except triglyceride (TG). Spirulina intake for 30 and 60 days at the (500 × 2) mg dose significantly reduced the serum content of MDA, lipid hydroperoxide, and cholesterol (P = 0.000) while increasing GSH and vitamin C levels (P = 0.000) and the activity of SOD (P = 0.000) and GST (P = 0.038). At the same time, spirulina intake for 30 and 60 days at the (500 × 4) mg dose had a favorable significant effect (P = 0.000) on all targeted blood parameters except HDL (P = 0.163). PMID:25685791
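The abstract does not state which statistical test produced the P values above, so the following is only a generic sketch of the kind of two-group comparison involved, here an unpaired t-test on made-up serum MDA values; neither the test choice nor the numbers come from the study.

```python
# Generic sketch of a two-group comparison of one blood parameter (e.g. serum MDA)
# between COPD patients and controls. The values and the choice of an unpaired
# Welch t-test are illustrative assumptions, not taken from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mda_copd = rng.normal(loc=5.2, scale=0.8, size=30)      # hypothetical COPD patients (n = 30)
mda_control = rng.normal(loc=3.1, scale=0.6, size=20)   # hypothetical controls (n = 20)

t_stat, p_value = stats.ttest_ind(mda_copd, mda_control, equal_var=False)
print(f"t = {t_stat:.2f}, P = {p_value:.3g}")
```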
The GALAH Survey: Second Data Release
NASA Astrophysics Data System (ADS)
Buder, Sven; Asplund, Martin; Duong, Ly; Kos, Janez; Lind, Karin; Ness, Melissa K.; Sharma, Sanjib; Bland-Hawthorn, Joss; Casey, Andrew R.; De Silva, Gayandhi M.; D'Orazi, Valentina; Freeman, Ken C.; Lewis, Geraint F.; Lin, Jane; Martell, Sarah L.; Schlesinger, Katharine J.; Simpson, Jeffrey D.; Zucker, Daniel B.; Zwitter, Tomaž; Amarsi, Anish M.; Anguiano, Borja; Carollo, Daniela; Casagrande, Luca; Čotar, Klemen; Cottrell, Peter L.; Da Costa, Gary; Gao, Xudong D.; Hayden, Michael R.; Horner, Jonathan; Ireland, Michael J.; Kafle, Prajwal R.; Munari, Ulisse; Nataf, David M.; Nordlander, Thomas; Stello, Dennis; Ting, Yuan-Sen; Traven, Gregor; Watson, Fred; Wittenmyer, Robert A.; Wyse, Rosemary F. G.; Yong, David; Zinn, Joel C.; Žerjal, Maruša
2018-05-01
The Galactic Archaeology with HERMES (GALAH) survey is a large-scale stellar spectroscopic survey of the Milky Way, designed to deliver complementary chemical information to a large number of stars covered by the Gaia mission. We present the GALAH second public data release (GALAH DR2) containing 342,682 stars. For these stars, the GALAH collaboration provides stellar parameters and abundances for up to 23 elements to the community. Here we present the target selection, observation, data reduction and detailed explanation of how the spectra were analysed to estimate stellar parameters and element abundances. For the stellar analysis, we have used a multi-step approach. We use the physics-driven spectrum synthesis of Spectroscopy Made Easy (SME) to derive stellar labels (Teff, log g, [Fe/H], [X/Fe], vmic, vsin i, A_{K_S}) for a representative training set of stars. This information is then propagated to the whole sample with the data-driven method of The Cannon. Special care has been exercised in the spectral synthesis to only consider spectral lines that have reliable atomic input data and are little affected by blending lines. Departures from local thermodynamic equilibrium (LTE) are considered for several key elements, including Li, O, Na, Mg, Al, Si, and Fe, using 1D MARCS stellar atmosphere models. Validation tests including repeat observations, Gaia benchmark stars, open and globular clusters, and K2 asteroseismic targets lend confidence to our methods and results. Combining the GALAH DR2 catalogue with the kinematic information from Gaia will enable a wide range of Galactic Archaeology studies, with unprecedented detail, dimensionality, and scope.
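The data-driven step described here, in the spirit of The Cannon, models the flux at each spectral pixel as a smooth (typically quadratic) function of the stellar labels, fits those per-pixel coefficients on the SME-labelled training set, and then inverts the model to infer labels for the remaining stars. The following is a minimal sketch of that idea using a quadratic polynomial per pixel and ordinary least squares; it is not the GALAH pipeline, and the array sizes and noise-free synthetic fluxes are invented for illustration.

```python
# Minimal sketch of a Cannon-style data-driven label transfer:
# each pixel's flux is modelled as a quadratic function of the stellar labels,
# coefficients are fit on a labelled training set, and the labels of a new star
# are then recovered by least-squares inversion. Synthetic data, not GALAH spectra.
import numpy as np
from itertools import combinations_with_replacement
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
n_train, n_pix, labels = 200, 50, ["Teff", "logg", "[Fe/H]"]   # scaled, dimensionless labels

def design_row(lbl):
    """Constant, linear and quadratic label terms for one star."""
    quad = [lbl[i] * lbl[j] for i, j in combinations_with_replacement(range(len(lbl)), 2)]
    return np.concatenate(([1.0], lbl, quad))

# Synthetic "training set": random labels and fluxes generated from a hidden quadratic model.
train_labels = rng.normal(size=(n_train, len(labels)))
true_coeffs = rng.normal(scale=0.05, size=(design_row(train_labels[0]).size, n_pix))
train_flux = np.array([design_row(l) for l in train_labels]) @ true_coeffs

# Training step: fit the per-pixel coefficients by ordinary least squares.
design = np.array([design_row(l) for l in train_labels])
coeffs, *_ = np.linalg.lstsq(design, train_flux, rcond=None)

# Test step: recover the labels of a new star from its spectrum alone.
new_labels = rng.normal(size=len(labels))
new_flux = design_row(new_labels) @ coeffs
fit = least_squares(lambda l: design_row(l) @ coeffs - new_flux, x0=np.zeros(len(labels)))
print("true labels:", np.round(new_labels, 3), " recovered:", np.round(fit.x, 3))
```

The actual survey analysis additionally carries flux uncertainties, quality flags and many more labels; this sketch only illustrates the train-then-invert structure of the method.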
NASA Astrophysics Data System (ADS)
Masternak, Tadeusz J.
This research determines temperature-constrained optimal trajectories for a scramjet-based hypersonic reconnaissance vehicle by developing an optimal control formulation and solving it using a variable-order Gauss-Radau quadrature collocation method with a Non-Linear Programming (NLP) solver. The vehicle is assumed to be an air-breathing reconnaissance aircraft that has specified takeoff/landing locations, airborne refueling constraints, specified no-fly zones, and specified targets for sensor data collection. A three-degree-of-freedom scramjet aircraft model is adapted from previous work and includes flight dynamics, aerodynamics, and thermal constraints. Vehicle control is accomplished by controlling angle of attack, roll angle, and propellant mass flow rate. This model is incorporated into an optimal control formulation that includes constraints on both the vehicle and mission parameters, such as avoidance of no-fly zones and coverage of high-value targets. To solve the optimal control formulation, a MATLAB-based package called General Pseudospectral Optimal Control Software (GPOPS-II) is used, which transcribes continuous-time optimal control problems into an NLP problem. In addition, since a mission profile can have varying vehicle dynamics and en-route constraints, the optimal control problem formulation can be broken into several "phases" with differing dynamics and/or varying initial/final constraints. Optimal trajectories are developed using several different performance costs in the optimal control formulation: minimum time, minimum time with control penalties, and maximum range. The resulting analysis demonstrates that optimal trajectories meeting the specified mission parameters and constraints can be determined quickly and used for larger-scale operational and campaign planning and execution.
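GPOPS-II itself is a MATLAB package and the abstract gives no problem data, so the sketch below only illustrates the underlying idea of transcribing an optimal control problem into an NLP: a toy minimum-time double-integrator problem is discretised with trapezoidal collocation (a simpler stand-in for the Gauss-Radau pseudospectral scheme used in the research) and handed to a generic NLP solver. None of the dynamics, bounds or costs below come from the scramjet model.

```python
# Toy illustration of direct collocation: minimum-time transfer of a double integrator
# (x'' = u, |u| <= 1) from rest at x = 0 to rest at x = 1. Trapezoidal defect constraints
# stand in for the Gauss-Radau pseudospectral transcription used with GPOPS-II;
# the analytic optimum is t_f = 2 with bang-bang control.
import numpy as np
from scipy.optimize import minimize

N = 25  # number of collocation nodes

def unpack(z):
    x, v, u, tf = z[:N], z[N:2 * N], z[2 * N:3 * N], z[-1]
    return x, v, u, tf

def defects(z):
    """Trapezoidal collocation defects enforcing x' = v and v' = u between nodes."""
    x, v, u, tf = unpack(z)
    h = tf / (N - 1)
    dx = x[1:] - x[:-1] - 0.5 * h * (v[1:] + v[:-1])
    dv = v[1:] - v[:-1] - 0.5 * h * (u[1:] + u[:-1])
    return np.concatenate([dx, dv])

# Decision vector: states x, v and controls u at the nodes, plus the final time tf.
z0 = np.concatenate([np.linspace(0.0, 1.0, N),   # straight-line position guess
                     0.5 * np.ones(N),           # rough velocity guess
                     np.zeros(N),                # zero-control guess
                     [3.0]])                     # final-time guess
bounds = ([(0.0, 0.0)] + [(None, None)] * (N - 2) + [(1.0, 1.0)]    # x(0) = 0, x(tf) = 1
          + [(0.0, 0.0)] + [(None, None)] * (N - 2) + [(0.0, 0.0)]  # v(0) = 0, v(tf) = 0
          + [(-1.0, 1.0)] * N                                       # |u| <= 1
          + [(0.1, 10.0)])                                          # tf > 0

res = minimize(lambda z: z[-1], z0, method="SLSQP",
               bounds=bounds,
               constraints=[{"type": "eq", "fun": defects}],
               options={"maxiter": 500})
print("minimum time found:", round(res.x[-1], 3), "(analytic optimum: 2.0)")
```

A multi-phase mission like the one described above would repeat this transcription per phase and add linkage constraints joining the phase boundary states and times, which GPOPS-II handles automatically.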
Sonar Detection and Classification of Underwater UXO and Environmental Parameters
2012-07-09
For these targets, representations like those in Figs. 10 and 11 may be more useful because they focus on properties of the isolated target signal … using time-frequency phenomena extracted from strong ROIs in target scattering data. In general, backscattered signals contain specular as well as … database of sonar target signals useful for developing and evaluating C/ID algorithms that separate UXO from bottom clutter and to look for and …