Positron-emitting radionuclides have long been employed for diagnostic imaging, with PET scans using fluorine-18 (18F)-labelled fluorodeoxyglucose (FDG) playing an essential role in cancer diagnosis. But positrons could also be used to destroy cancer cells. Perhaps because of their established role in diagnostics, this therapeutic potential has to date been largely overlooked. A research team in Australia aims to address this oversight.
The researchers, from the University of Sydney, Royal North Shore Hospital and the Sydney Vital Translational Cancer Research Centre, demonstrated the first in vitro evidence of the therapeutic potential of positrons on prostate cancer cells. They also derived the radiobiological parameters for 18F positron emission, reporting their findings in Scientific Reports.
“We refer to it as positron emission radionuclide therapy, or PERT,” says senior author Dale Bailey.
When the radionuclide 18F undergoes decay, it emits a positron (a beta-plus particle emitted from a proton-rich nucleus). The positron will ultimately annihilate with an electron, leading to the emission of two 0.511 MeV photons. And it is these photons that are detected to create PET images.
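The 0.511 MeV figure is simply the rest-mass energy of the electron (and positron) converted into light. As a quick sanity check, a few lines of Python recover it from fundamental constants:

```python
# Quick sanity check: each annihilation photon carries the rest-mass
# energy of an electron, E = m_e * c^2, expressed here in MeV.
m_e = 9.1093837015e-31      # electron rest mass (kg), CODATA value
c = 2.99792458e8            # speed of light (m/s)
J_PER_EV = 1.602176634e-19  # joules per electronvolt

E_MeV = m_e * c**2 / J_PER_EV / 1e6
print(f"Energy per annihilation photon: {E_MeV:.3f} MeV")  # 0.511 MeV
```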
But before this final annihilation process, the positrons lose kinetic energy in discrete quantities (roughly 100 eV) via multiple interactions along their track, creating positron “spurs” – nano-sized spheres of electron/positive-ion pairs – and a terminal positron “blob”. These spurs and blobs are all sources of highly reactive species and deliver a relatively large radiation dose when they interact with biomolecules such as DNA.
To investigate the potential of positrons in cancer medicine, the researchers examined the survival of prostate cancer cells exposed to sodium fluoride (18F-NaF) solution for 18 h. They found that a dose of 20 Gy 18F positrons killed over 90% of the cells, while a 10 Gy dose caused 70% cell kill.
To quantify the relative biological effectiveness (RBE) of 18F positrons, the researchers compared their results with high-dose rate X-ray irradiation. They assessed cell survival at various absorbed doses for positrons and for X-rays from a small-animal radiation research platform (SARRP). By comparing the mean absorbed doses required for 50% cell survival, they calculated a mean RBE of 0.42 for 18F positrons relative to SARRP irradiation. This is three times higher than the RBE of radionuclides that emit beta-minus particles (electrons emitted from a neutron-rich nucleus), such as 90Y and 177Lu.
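The RBE calculation itself is straightforward once the two survival curves are in hand: it is the ratio of the reference dose to the test dose that produces the same biological effect, here 50% cell survival. A minimal sketch, assuming linear-quadratic survival curves with purely illustrative parameters (not the values fitted in the paper); an RBE below 1 means the test radiation needs more absorbed dose than the X-rays for the same cell kill:

```python
import math

def d50(alpha, beta):
    """Dose (Gy) giving 50% survival under the linear-quadratic model,
    S(D) = exp(-alpha*D - beta*D^2): solve alpha*D + beta*D^2 = ln 2."""
    if beta == 0:
        return math.log(2) / alpha
    return (-alpha + math.sqrt(alpha**2 + 4 * beta * math.log(2))) / (2 * beta)

# Illustrative LQ parameters (assumptions, not the paper's fitted values)
# for the reference X-rays and the test radiation.
d50_xray = d50(alpha=0.20, beta=0.05)   # reference (SARRP-like X-rays)
d50_test = d50(alpha=0.08, beta=0.02)   # test radiation
rbe = d50_xray / d50_test               # RBE at the 50% survival level
print(f"D50 ref: {d50_xray:.2f} Gy, D50 test: {d50_test:.2f} Gy, RBE: {rbe:.2f}")
```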
“Clinically speaking, the dose rate and linear energy transfer (LET) of positron emitters is expected to be higher than that of most beta-minus emitters that are currently used in radionuclide therapy, predominantly due to the relatively shorter half-life and more ionizing radiation of many positron emitters,” explains first author Takanori Hioki. “Furthermore, radionuclide therapy targets metastatic lesions, while external-beam radiotherapy is generally used for larger sites or primary lesions.”
Hioki and colleagues also performed a Monte Carlo simulation of a linear DNA model to determine the frequency of DNA single strand breaks (SSBs) and double strand breaks (DSBs) caused by positron or beta-minus irradiation at kinetic energies from 250 eV to 1.5 keV. They observed that the lower energies produced larger numbers of SSBs and DSBs.
The simulation revealed that positron tracks induce 1.5- and 2.2-fold more SSBs and DSBs, respectively, than beta-minus tracks. The greatest difference occurred at 400 eV, where positrons caused a 55% increase in SSBs and a 117% increase in DSBs compared with beta-minus particles.
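A toy version of this kind of strand-break scoring can be sketched in a few lines: scatter break events along a two-strand DNA segment and count a double strand break whenever breaks on opposite strands fall close together. All parameters below are illustrative placeholders, not those of the study's Monte Carlo simulation:

```python
import random

random.seed(1)

# Toy strand-break scoring: scatter break events on a two-strand linear
# DNA segment and count a double strand break (DSB) when breaks on
# opposite strands fall within a small window. Unpaired breaks are
# single strand breaks (SSBs). All parameters are illustrative.
N_BP = 10_000          # length of the DNA segment (base pairs)
N_TRACKS = 1_000       # number of simulated particle tracks
BREAKS_PER_TRACK = 4   # assumed strand breaks per track
DSB_WINDOW_BP = 10     # opposite-strand breaks this close form a DSB

ssb_total = dsb_total = 0
for _ in range(N_TRACKS):
    breaks = [(random.randrange(N_BP), random.randrange(2))
              for _ in range(BREAKS_PER_TRACK)]
    paired = set()
    for i, (pos_i, strand_i) in enumerate(breaks):
        for j in range(i + 1, len(breaks)):
            pos_j, strand_j = breaks[j]
            if (i not in paired and j not in paired
                    and strand_i != strand_j
                    and abs(pos_i - pos_j) <= DSB_WINDOW_BP):
                paired.update((i, j))
    dsb = len(paired) // 2
    dsb_total += dsb
    ssb_total += BREAKS_PER_TRACK - 2 * dsb  # leftover breaks are SSBs

print(f"SSBs: {ssb_total}, DSBs: {dsb_total}")
```

Because the breaks are scattered sparsely, SSBs dominate, mirroring the general pattern that DSBs are the rarer (and more lethal) lesion.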
These results imply that the direct interaction of a single positron with DNA should create more lethal damage than that caused by a single beta-minus particle. “As each positron that is emitted loses energy as it interacts, an accumulation of the simulated interactions causes the total damage that we observe in the in vitro experiment,” Hioki notes.
Plotting the LET (a measure of how much energy an ionizing particle deposits per unit path length) revealed that maximum SSB and DSB production should occur at 250 eV (the kinetic energy of a particle near the end of its track) for both positrons and electrons. At this energy, a positron has roughly 7% higher LET than a beta-minus.
The spur model suggests that beta-minuses and positrons initially have similar radiation tracks, but behave differently at the lowest energies. For a sub-keV beta-minus, the mean separation between spurs is 20 times larger than the diameter of the DNA helix, while a sub-keV positron continuously forms spurs and builds up a blob along and at the end of its track. Thus, at sub-keV energies, a positron has a higher LET than a beta-minus. Additional ionization at the terminal annihilation event further increases the total number of ionizations per positron track.
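The spacing argument can be made concrete using the ~100 eV-per-spur figure quoted earlier: the mean distance between spurs is roughly the energy deposited per spur divided by the LET. Spurs spaced many DNA diameters apart mostly miss the helix, while densely packed spurs are far more likely to hit it. The LET values in this sketch are illustrative assumptions chosen only to show the geometry, not figures from the paper:

```python
# Back-of-envelope spur spacing: if a particle sheds ~100 eV per spur
# (the discrete energy loss quoted above), the mean distance between
# spurs is roughly (energy per spur) / LET.
ENERGY_PER_SPUR_EV = 100.0
DNA_DIAMETER_NM = 2.0  # approximate diameter of the DNA double helix

def spur_spacing_nm(let_ev_per_nm):
    return ENERGY_PER_SPUR_EV / let_ev_per_nm

# Illustrative LET values (assumptions, not figures from the paper):
for let in (2.5, 50.0):  # eV/nm
    s = spur_spacing_nm(let)
    print(f"LET {let:5.1f} eV/nm -> one spur every {s:5.1f} nm "
          f"(~{s / DNA_DIAMETER_NM:.0f}x the DNA diameter)")
```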
“The biggest contribution to the higher DNA damage from positrons in comparison to beta-minuses is the higher LET of the particles at sub-keV energies, as well as contributions from the difference in charge,” says Hioki.
The researchers point out that, in addition to an untapped therapeutic potential, positron-emitting radionuclides could also play a role in emerging theranostic strategies, for use as a combined therapeutic and diagnostic agent. For clinical use, however, the highly penetrating, low-ionizing nature of the emitted annihilation photons will require careful safety considerations when administering therapeutic doses of the radioactive compound.
“As this study demonstrated the therapeutic potential of positrons, we are currently working on the next step – to optimize the administered activities to maximize treatment efficacy,” Hioki tells Physics World. “We are also performing biological assays to demonstrate the impact that positrons have on the cellular mechanisms that lead to the results we observed in the cell survival assays.”