This week the American Astronomical Society is meeting in Washington, DC. At the conference it was announced yesterday that a citizen-science project called Exoplanet Explorers had used data from the Kepler mission to detect a new five-planet system.
The 27 authors include the astronomer and broadcaster Chris Lintott and the particle physicist and broadcaster Brian Cox. Exoplanet Explorers was featured prominently on the Australian TV show Stargazing Live in April, and another author on the paper is the Australian TV presenter Julia Zemiro, who is affiliated with the Australian Broadcasting Corporation.
Still on space…after just three weeks on the International Space Station, the Japanese astronaut Norishige Kanai took to Twitter to note that he had grown by a massive 9 cm. It is usual for astronauts to grow slightly in space as the spine elongates in the microgravity environment, but it is usually only between 2 and 5 cm.
His fellow astronauts seemed a bit dubious and a quick measurement showed that he actually grew 2 cm instead. “I must apologize for this terrible fake news,” noted Kanai.
And finally, you might remember the viral internet craze of 2016 in which people repeatedly tried flipping a partially filled bottle of water in the air to see if they could get it to land perfectly upright. At first sight it appears quite improbable that the bottle would land in this way, but now physicists in the Netherlands have come to the rescue and worked out the physics behind the phenomenon.
By conducting experiments and devising a suitable model, they show that the redistribution of the water in the bottle as it rotates through the air increases its moment of inertia. By conservation of angular momentum, this increase reduces the rotational velocity of the bottle, enabling a nearly vertical descent in most cases.
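The angular-momentum argument is easy to sketch numerically (the values below are illustrative, not taken from the paper):

```python
# Conservation of angular momentum: L = I * omega stays constant in flight
# (air resistance neglected). If water redistribution raises the moment of
# inertia I, the rotation rate omega must drop in proportion.

def spun_down_rate(omega0, I0, I1):
    """Rotation rate after the moment of inertia changes from I0 to I1."""
    L = I0 * omega0          # angular momentum at launch
    return L / I1            # same L, larger I -> slower spin

# Illustrative values: a launch spin of 10 rad/s, with the moment of inertia
# tripled as the water spreads toward the ends of the bottle.
omega1 = spun_down_rate(10.0, 1.0, 3.0)
print(omega1)  # about 3.3 rad/s: the bottle spins much more slowly on descent
```

The slower the residual spin, the better the chance the bottle is still near-upright when it reaches the table.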
The researchers say that the effect can be a useful classroom aid, and to avoid water being sprayed everywhere, they recommend instead demonstrating it with two tennis balls inside the bottle.
Researchers in the US have developed nanosensors that can be directly inserted into a cell’s lipid membrane and be used to measure membrane potential. The devices, which are based on inorganic semiconductor nanoparticles, could potentially record action potentials from multiple neurons as well as electrical signals on the nanoscale – for example, across just one synapse.
Thanks to recent advances in inorganic colloidal synthesis, researchers can now make functional semiconductor nanoparticles whose size, shape and composition can be precisely controlled. Such nanoparticles can be used in applications as diverse as optoelectronics, biological imaging, sensing, catalysis and energy harvesting.
These nanomaterials can also be combined with biological cells to make highly sophisticated hybrid nanomaterials that outperform their purely biological counterparts. Until now, however, incorporating these particles into cell membranes has proved difficult. This is because they are often too big and have surface properties that can lead to non-specific binding on cell membranes. What is more, inserting nanoparticles into membrane bilayers is further complicated by the fact that their surfaces need to be functionalized so that the particles are inserted in the correct orientation.
Imparting membrane protein-like properties to nanoparticles
A team led by Shimon Weiss of the University of California, Los Angeles, says that imparting membrane protein-like properties to the nanoparticles might make it easier for them to target and insert into the lipid bilayer. This approach could be used to make membrane-embedded hybrid nanomaterials with useful functions. The researchers have now developed a way to do just this using rod-shaped nanoparticles and have also shown that the particles can be used to measure membrane potentials for the first time.
Weiss and colleagues employed a peptide-coating technique that they developed to make sure that the nanorods inserted themselves into the membranes in the correct direction – that is, perpendicular to the membrane surface. “This is important because rods inserted parallel to the membrane surface cannot detect membrane potentials across the membrane,” explains Weiss. “The coating technique itself involves adsorption of amphiphilic peptides with hydrophilic sequence segments aligning with the tips, and hydrophobic sequence segments aligning with the sides of the nanorods.”
Nanorods can sense membrane potential
The researchers, reporting their work in Science Advances 2018;4:e1601453, confirmed that the nanorods inserted into cell membranes in the right direction by imaging them with transmission electron microscopy.
Once inserted, these nanorods can sense membrane potential thanks to the quantum-confined Stark effect with single-particle sensitivity. “With further improvements, these nanosensors could potentially be used for simultaneous recording of action potentials from multiple neurons in a large field of view over a long duration and for recording electrical signals on the nanoscale, such as across one synapse,” say the researchers.
They add that they will now be improving the peptide coating and membrane-insertion process.
The first ever “optical” diodes made from the 2D material Ti3C2 and fullerene (or carbon-60) retain their saturable absorption (increased light transmission at higher laser powers) even after being exposed to air or low-energy plasma irradiation. This means that their optical properties are stable under these conditions, making them useful for optical isolation of high-power lasers and as saturable absorbers in Q-switching laser components, which allow short pulses to be generated.
“In electronic circuits, devices such as electronic diodes allow the flow of electrons in just one direction (forward bias) but not in the other (reverse bias),” explains team leader Ramakrishna Podila of Clemson University in the US. “To achieve a similar diode action for photons is very challenging, however, because we know that light travels in both directions (if I see you then you see me). In our work, we have combined two materials (2D Ti3C2 MXene and fullerene, or C60) that have contrasting nonlinear optical properties to make a device that does allow for one-way transmission of light. In other words, we have made an optical diode.”
MXenes are a new class of 2D transition-metal carbides, nitrides and carbonitrides that interact with light in a unique way. Researchers recently found that 2D Ti3C2Tx (where Tx can be functional groups such as –OH and –F) has nonlinear saturable absorption that could be useful for mode locking in femtosecond lasers.
To better understand the origin of this nonlinear response, Podila and colleagues decided to make Ti3C2Tx thin films of different thicknesses and systematically study their nonlinear optical properties.
Ti3C2Tx can withstand higher laser powers
“We made our Ti3C2Tx films using an ‘interfacial film’ formation technique in which we formed a thin layer of Ti3C2Tx at the interface of two immiscible liquids (toluene and water) and then transferred this film to a quartz substrate. We characterized its nonlinear optical properties using a pulsed nanosecond laser operating at 1064 nm using a method known as the Z-scan.”
The researchers say that Ti3C2Tx is more electrically conducting and more mechanically robust than other 2D materials, such as graphene. It is thus able to withstand higher laser powers. It also retains its saturable absorption even after being exposed to air or low-energy plasma irradiation, so the nonlinear properties of the material are stable under these conditions.
The team then made photonic diodes by juxtaposing Ti3C2Tx MXene with C60 thin films. “We had already made a similar device in the past by directly coating C60 on graphene, so we knew that this approach works,” Podila tells nanotechweb.org. “The most important feature of the new MXene-based photonic diode is that it is ‘passive’ and does not require active magnetic fields to operate (unlike existing Faraday rotors).”
Improving the non-reciprocity factor
One key application for the device would be optical isolation for high-power lasers, given that Ti3C2Tx can withstand high laser powers, he says. It might also be used as a saturable absorber material for Q-switching laser components.
The team, reporting its work in Advanced Materials, says that it still has some way to go in its research. “For one, the non-reciprocity factor (that is, the degree of optical isolation, which is defined by the ratio of light transmission in forward versus reverse directions) for this device is only a modest 4 dB,” explains Podila. “Our goal is to explore other MXene materials that may allow us to improve this factor by an order of magnitude. Devices made from these materials may then be able to compete with existing Faraday rotors.”
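For readers unfamiliar with the decibel figure, the quoted non-reciprocity factor converts to a plain transmission ratio as follows (a quick back-of-envelope conversion assuming the usual 10·log10 power convention; the 40 dB target below assumes “an order of magnitude” refers to the decibel figure):

```python
import math

def db_to_ratio(db):
    """Convert an isolation figure in dB to a forward/reverse transmission ratio."""
    return 10 ** (db / 10)

def ratio_to_db(ratio):
    """Convert a forward/reverse transmission ratio back to dB."""
    return 10 * math.log10(ratio)

# The current device: 4 dB of isolation
print(round(db_to_ratio(4), 2))  # 2.51 -> forward transmission ~2.5x the reverse

# An order-of-magnitude improvement in the dB figure (to 40 dB) would mean
# a 10,000:1 forward-to-reverse ratio, closer to Faraday-isolator territory.
print(round(db_to_ratio(40)))    # 10000
```

This makes clear why 4 dB is described as modest: a ratio of roughly 2.5:1 still lets a substantial fraction of light through in the reverse direction.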
This work was performed in collaboration with Clemson Nanomaterials Institute (Apparao M Rao, Sriparna Bhattacharya, and Yongchang Dong), Drexel University (the Yury Gogotsi group) and Missouri University of Science and Technology (Vadym Mochalin’s group).
In clinical practice today, proton therapy is planned assuming a constant value of 1.1 for the relative biological effectiveness (RBE) of protons. RBE, however, is a complex variable that depends on many factors and – despite extensive research – there is no accepted variable RBE model available for use in clinical treatment planning. As such, the biological uncertainties of protons remain a major challenge for realizing the full potential of proton therapy.
One factor that affects RBE is the linear energy transfer (LET), with higher LET correlating with increased RBE. Thus it may be possible to modify the delivered RBE by varying the LET instead. With this aim, researchers from the MD Anderson Cancer Center have studied the impact of incorporating LET criteria directly into intensity-modulated proton therapy (IMPT) planning. Their goal: to increase LET in target volumes while reducing LET in critical structures compared with a conventional plan (Phys. Med. Biol. 63 015013).
“Currently, IMPT treatment planning does not evaluate LET distributions specifically; however, LET is an important contributing factor to protons’ biological effectiveness besides physical dose,” explained first author Wenhua Cao. “A conventionally optimized IMPT plan could result in placing protons with high LETs in normal tissues instead of target volumes. This can be improved by LET optimization.”
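The flavour of such an LET-incorporated objective can be conveyed with a toy calculation (a conceptual sketch only: the function, weights and numbers below are invented for illustration and do not represent the MD Anderson optimizer):

```python
import numpy as np

# Toy composite objective: standard quadratic dose terms plus an LET term that
# rewards high LET in the target and penalizes it in an organ at risk (OAR).
def plan_objective(dose_target, dose_oar, let_target, let_oar,
                   d_presc=60.0, w_dose=1.0, w_let=0.1):
    dose_term = w_dose * (np.mean((dose_target - d_presc) ** 2)
                          + np.mean(dose_oar ** 2))
    # LET term: subtract mean target LET (want it high), add OAR LET (want it low)
    let_term = w_let * (np.mean(let_oar) - np.mean(let_target))
    return dose_term + let_term

# Two candidate plans with essentially identical dose distributions...
dose_t = np.full(50, 60.0)   # target voxels at prescription
dose_o = np.full(20, 10.0)   # OAR voxels at a low dose
# ...but plan B shifts high-LET proton track ends out of the OAR, into the target.
let_t_A, let_o_A = np.full(50, 3.0), np.full(20, 6.0)
let_t_B, let_o_B = np.full(50, 4.0), np.full(20, 4.5)

jA = plan_objective(dose_t, dose_o, let_t_A, let_o_A)
jB = plan_objective(dose_t, dose_o, let_t_B, let_o_B)
print(jA > jB)  # True: same dose, but plan B scores better on the LET term
```

A dose-only objective would score the two plans identically; only the added LET term distinguishes them, which is the point of the approach.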
Replanning results
Cao and colleagues examined five brain tumour patients who had been treated with proton therapy. All patients had one or more critical structures adjacent to or overlapping with target volumes. They created two IMPT plans for each patient, one using conventional dose-based optimization (DoseOpt) and the other using the proposed LET-incorporated optimization (LETOpt). All plans had the same target prescriptions and field arrangements as in the clinical treatments, and were tailored to produce similar dose distributions to those delivered clinically.
For each plan, the researchers determined a series of key indices for dose and dose-averaged LET (hereafter referred to as LET): dose and LET for 1% and 99% of the gross tumour volume (GTV); the maximum dose and LET for the brainstem; dose and LET exceeded in 0.1 cc of the brainstem; and maximum and mean dose and LET for the optic chiasm.
For those selected dose indices, they found only minor differences between the DoseOpt and LETOpt plans. The LET indices, however, showed pronounced differences. Maximum LET and LET to 0.1 cc of the brainstem were reduced by an average of 19.4% and 23.7%, respectively, from DoseOpt to LETOpt. The maximum and mean LETs for the optic chiasm, meanwhile, were reduced by 21.1% and 21.9%, respectively, while LETs for 1% and 99% of the GTV were increased by 27.2% and 18.4%.
Looking at an example glioblastoma case, dose distributions and dose–volume histograms confirmed that the doses generated by the two plans were comparable. LET distributions and LET–volume histograms verified that sparing of the brainstem and optic chiasm was significantly improved in the LETOpt plans. The researchers note that the LET increase in the GTV was less pronounced than the LET decrease in the organs-at-risk.
In another patient (an ependymoma case), the two plans again delivered similar doses, although the DoseOpt plan produced worse brainstem sparing in the low-dose region than the LETOpt plan. LETOpt greatly reduced LET hotspots in normal tissues and the brainstem, and produced plans with a larger high-LET area in the target than the DoseOpt plans.
For this patient, the researchers also examined how the ratio of optimization priority factors of the dose and LET objectives impacted the final plan. They created two LETOpt plans with this ratio set as one (plan 1) or 10 (plan 2). For plan 1, although the brainstem was not well spared at low doses by LETOpt, its exposure to high LETs was greatly reduced. In plan 2, dose sparing of the brainstem was similar for LETOpt and DoseOpt, but the benefit of LET sparing could not be achieved as it was in plan 1.
The trade-off between dose and LET objectives
While both LETOpt plans increased LET in target volumes, the size of this increase was smaller for plan 2 because higher optimization priority was given to dose. The authors suggest that, in the clinic, the choice between plans 1 and 2 should be determined by the physician’s preference among the different metrics (such as dose to the brainstem or boost in target dose).
The results demonstrated that LET-incorporated IMPT optimization can increase dose-averaged LET in target volumes and reduce it in critical normal tissues, while maintaining satisfactory dose distributions. The team point out that clinical application of this method requires no changes to the current treatment protocols, and could be implemented immediately to benefit patients.
“We have treated only one patient so far with LET-optimized IMPT at our centre, but plan to treat an increasing number,” Cao told medicalphysicsweb. “We are working to set up a protocol in order to use LET optimization systematically for IMPT. We believe that LET-optimized IMPT will benefit patients with a high risk of radiation-induced toxicity in critical structures close to tumour targets.”
To improve the accuracy of proton therapy delivery, and therefore its effectiveness, the ability to verify proton range in the patient is highly beneficial. In vivo range verification would give confidence in reducing margins and be used as a trigger to detect any need for adaptation.
Prompt-gamma imaging (PGI) is one method being developed for in vivo range verification. An interdisciplinary team of researchers at OncoRay, the National Center for Radiation Research in Oncology in Dresden, successfully used a prompt-gamma camera developed by IBA in a proof-of-principle study of clinical treatments using double-scattered protons (see: Prompt gamma imaging goes clinical). A team at the University of Pennsylvania used a second such unit to record the first clinical treatments in pencil-beam scanning (PBS) mode (see: First clinical prompt gamma imaging of PBS protons).
In a next step, Lena Nenoff from the translational research team at OncoRay has conducted a study to evaluate the camera’s sensitivity for detecting different types of range shift, across beam delivery techniques and evaluation methods, in close-to-clinical scenarios. The researchers systematically assessed the performance of the slit camera in well-defined error scenarios using realistic treatment deliveries to an anthropomorphic head phantom (Radiother. Oncol. 125 534).
Detecting deviations
Range shift deviations occur for a variety of reasons. They can be caused by changes in patient anatomy, intra- and inter-fractional motion, and/or setup errors during each fraction. Range shifts also occur due to uncertainties in the nominal range prediction, mostly resulting from uncertainty in the conversion of CT numbers into stopping behaviour of particles (stopping power ratio), and/or from uncertainties in the beam model.
The PGI slit camera projects the prompt-gamma distribution through a knife-edge slit collimator onto a segmented detector, a geometry optimized in collaboration with Université Libre de Bruxelles. This results in a one-dimensional spatially resolved prompt-gamma distribution measured by extremely fast electronics (designed by partners at Politecnico di Milano and XGLab).
The PGI slit camera.
For the study, the authors defined a clinical target volume (142 cm3) representative of a brain tumour in the temporal lobe in a head phantom. They developed three treatment plans: double-scattering; a single-field uniform dose; and intensity-modulated proton therapy. The three plans consisted of two equally-weighted fields, only one of which was monitored by the slit camera.
In experiments performed at OncoRay’s clinical proton facility, the researchers positioned the slit camera next to the phantom, with the collimator opening parallel to the beam. The field-of-view was approximately 10 cm along the beam axis and focused on the distal part of the target volume. They introduced various local and global range shifts of known magnitude.
The accuracy of the PGI simulation was better than 1 mm. Deviations from the treatment plans were detected with an accuracy of better than 2 mm in PBS by comparing the measured PGI information with the expected PGI signal from the simulation. The researchers concluded that detection of global and local range shifts with a PGI slit-camera is possible under simulated clinical conditions. PGI verification of PBS was superior to verification of double-scattering.
Experimental set-up for the sensitivity study
Co-author Christian Richter, the senior researcher on the translational slit-camera project at OncoRay, told medicalphysicsweb that a clinical trial to identify the clinical benefit of prompt-gamma-based range verification is underway at the University Medical Center Klinik und Poliklinik für Strahlentherapie in Dresden. The first patients, whose double-scattering treatments the researchers monitored with the slit camera, are included in the study, but it will primarily comprise patients monitored with the slit camera during PBS treatments. PBS becomes clinically available at the clinic in mid-December 2017, and patients will be actively recruited to participate. The OncoRay and IBA team is also working on an updated camera trolley design, with the aim of extending its applicability to more clinical indications.
When Paul Chu first reported superconductivity in YBa2Cu3Ox at the comparatively high temperature of 93 K in 1987, the discovery fired imaginations over how to understand superconductivity, and how to use it. Temperatures of 93 K can be reached with liquid nitrogen, which is easy to handle compared with the liquid helium needed for lower temperatures. However, the brittleness of YBa2Cu3Ox has been a stumbling block for many potential applications. Now researchers in Japan have demonstrated superconductivity in a nanopowder of YBa2Cu3Ox, without the need for the heat treatments that render the material brittle. As well as using the powder as a superconducting paint, they hope to find ways of exploiting the nanoscale morphologies in the powder to incorporate additional functionalities.
“Our materials don’t represent a new material, but a new way of thinking about superconductors, and open a route to designer superconductors, tailoring them for specific uses,” says William Rieken, who is researching with supervisor Hiroshi Daimon in the Surface and Materials Science Laboratory at Nara Institute of Science and Technology in Japan, and led on the latest developments.
Efforts to produce nanopowders of YBa2Cu3Ox followed close on the heels of Chu’s original discovery. In 1994, co-author Atit Bhargava and colleagues at the University of Queensland in Australia demonstrated a solution-processing method for obtaining nanopowders with the right stoichiometry, such that brief heat treatments rendered the material superconducting. However, the heat treatment left a brittle ceramic, so the agility of the powder form was lost.
Researchers have since demonstrated superconducting YBa2Cu3Ox in the form of nanostructures by templating on other structures such as carbon nanotubes and anodized aluminium oxide arrays, micromachining and electrospinning. However, Rieken and Bhargava believe the simplicity and rapidity of their process represents a gear change for the field.
Verification
The approach demonstrated by Rieken, Bhargava and their colleagues stems from studies of growing, and crucially halting the growth of, crystals at the nanoscale. “This work is already two years old,” says Rieken. “We are only publishing it now because we have spent a long time characterizing this unique and formerly unknown material in many different ways before we were satisfied and our peers accepted our work.”
They used X-ray diffraction, Raman spectroscopy, electron microscopy and superconducting quantum interference device (SQUID) magnetization measurements, as well as observations of the Meissner effect, to confirm the transition temperature, composition, phase and nanoscale morphologies of the powders. Eventually they convinced peers in the field, including Jun Akimitsu at Okayama University in Japan, who first discovered superconductivity in MgB2 and is now also a co-author on their latest paper on the work.
Future opportunities
Rieken and Bhargava are in the process of securing a patent for their superconducting paint, inks and stereolithographic printer resin. This will require further work to understand the superconducting phenomenon as it manifests itself in the powder.
At present the researchers believe that the superconducting state arises in these nanopowders because the nanoscale dimensions of the grains mean that oxygen atoms need travel only short distances to give rise to the superconducting phase. In addition, the high surface-area-to-volume ratio of the nanoscale grains leads to enhanced reactivity, as well as curious high surface-tension effects that make the powder slosh like a liquid when shaken.
Rieken and Bhargava are also keen to investigate how the nanoscale morphologies can be exploited. Since the material is not brittle, Rieken suggests it may be possible to form a superconducting wire that is just nanometres wide but can carry significant current. The size and surface-area effects could also be exploited in other ways. Large-scale production of the material has been demonstrated, and superconducting wire is currently under development by True 2 Materials PTE Ltd of Singapore.
“Given the novel morphologies, high reactivity and the properties seen thus far, it is possible that the future would see THz devices, super sensors, SQUID sensors and enhanced MRI scans,” says Rieken. “We’re looking at the possibility of high magnetic fields not yet seen.”
Jeff Tallon, Professor of Physics at Victoria University of Wellington in New Zealand – who is internationally renowned for his research and discoveries in high-temperature superconductors but was not involved with the current research – told nanotechweb.org:
“Producing superconducting nanorods with a simple single-stage process like this is an important first step to aligning these YBCO grains by a powder-in-tube process to make wires. The challenge then will be to achieve biaxial alignment, as is needed for YBCO. It will be important to identify the crystallographic axis of the nanorods and would be most interesting to measure the critical current of individual rods, if possible.”
Water-saving measures in California have also led to substantial reductions in greenhouse gas (GHG) emissions and electricity consumption in the state.
That is the conclusion of new research from the University of California, Davis, published today in the journal Environmental Research Letters.
Measures to cut water use by 25 per cent across California were implemented in 2015, following a four-year drought in the state that caused the fallowing of 542,000 acres of land, total economic costs of $2.74 billion, and the loss of approximately 21,000 jobs.
The UC Davis researchers found that, while the 25 per cent target had not quite been reached over the one-year period – with 524,000 million gallons of water saved – the measures’ impact had positive knock-on effects for other environmental objectives.
In California, the water and energy utility sectors are closely interdependent. Conveyance systems use energy to move water from the wetter north to the drier and more heavily populated south; utilities use further energy for treatment and distribution; end users consume energy to heat water; and additional pumping and treatment add more demand. Together, this water-related energy use accounts for 19 per cent of total electricity demand and 32 per cent of total non-power-plant natural gas demand state-wide.
Lead author Dr Edward Spang, from UC Davis, said: “Due to this close interdependence, we estimated that the decrease in water usage translated into a significant electricity saving of 1,830 gigawatt hours (GWh). Interestingly, those savings were around 11 per cent greater than those achieved by investor-owned electricity utilities’ efficiency programs over the same period.
“In turn, we calculated that the GHG emissions saved as a direct result of the reduction in electricity consumption are also significant – in the region of 524,000 metric tons of carbon dioxide equivalent (CO2e). That is the equivalent of taking 111,000 cars off the road for a year.”
To estimate the water, energy, and GHG savings achieved for the duration of the urban water conservation order, the researchers collected and consolidated a range of publicly available data. They sequentially estimated total water savings for each water agency reporting to the California State Water Resources Control Board; the associated energy savings, via spatially resolved estimates of the energy intensity of water supplies by hydrologic region; and finally, the linked GHG emissions reduction, using the emissions factor for the California electricity mix (including both in-state generation and imports).
Finally, they compared the cost of securing these savings through water conservation with the costs of existing programs that specifically target electricity or GHG savings.
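The chain of estimates can be sanity-checked against the article's headline numbers (a back-of-envelope consistency check, not the authors' calculation):

```python
# Headline figures from the study
water_saved_gal = 524_000e6      # 524,000 million gallons of water saved
electricity_saved_gwh = 1_830    # GWh of electricity saved
ghg_saved_t = 524_000            # metric tons CO2e avoided
cars_equivalent = 111_000        # cars-off-the-road-for-a-year equivalent

# Implied average energy intensity of the saved water
wh_per_gallon = electricity_saved_gwh * 1e9 / water_saved_gal
print(round(wh_per_gallon, 2))   # 3.49 Wh per gallon

# Implied grid emissions factor (in-state generation plus imports)
g_co2e_per_kwh = ghg_saved_t * 1e6 / (electricity_saved_gwh * 1e6)
print(round(g_co2e_per_kwh))     # 286 g CO2e per kWh

# Implied annual emissions per car in the comparison
t_per_car = ghg_saved_t / cars_equivalent
print(round(t_per_car, 1))       # 4.7 t CO2e per car per year
```

The implied factors (a few watt-hours per gallon, a grid intensity below 300 g CO2e/kWh, and a few tonnes per car per year) are all plausible magnitudes, so the headline figures hang together.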
Co-author Professor Frank Loge said: “The scale of these integrated water-energy-GHG savings, achieved over such a short period, is remarkable. Even more interesting is that the cost of achieving these savings through water conservation was competitive with existing programs that specifically target electricity or GHG reductions.
“Our results provide strong support for including direct water conservation in the portfolio of program and technology options for reducing energy consumption and GHG emissions. It’s particularly pertinent given that our analysis was based only on pursuing the individual goals of either electricity savings or greenhouse gas reductions, and not the combined benefits of water, electricity, and GHG savings.
“Taking these three benefits into consideration together would substantially increase the cost-effectiveness of water-focused conservation programs across all scenarios of varying program and technology persistence. There is a strong incentive for water and energy utilities to form partnerships, and identify opportunities to secure these combined resource savings benefits at a shared cost. There would also be a benefit in the associated regulatory agencies supporting these partnerships through aligned policy measures, and targeted funding initiatives.”
Bacteria have been genetically engineered to produce distinctive ultrasound signals, making them potentially useful for medical imaging. Protein-enclosed pockets known as “gas vesicles” are usually found in aquatic bacteria, keeping them buoyant, but a team of biophysicists led by Mikhail Shapiro at the California Institute of Technology has created a gene that allows the vesicles to appear in other bacteria.
Shapiro’s team showed how pulses of ultrasound waves passing through their modified bacteria were scattered in characteristic patterns, which were picked up as echoes by the ultrasound detector. This scattering is caused by the density difference between gas vesicles and their surroundings. However, above a certain pulse pressure, the vesicles collapsed, meaning ultrasound waves passed through the bacteria undisturbed. This collapse changed the detected signal abruptly.
Non-invasive tracers
Once the vesicles collapsed, Shapiro’s team realized that the remaining detected signal could be considered background noise, which they could subtract from the scattered signal to produce high-resolution images of where the bacteria were residing. This was a particularly significant observation, as it meant that bacteria carrying the acoustic response gene could be used as non-invasive medical tracers.
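The collapse-based background subtraction can be sketched on synthetic data (a one-dimensional toy signal with invented numbers; not the group's actual processing pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1D ultrasound echo amplitudes along a line through tissue.
tissue_background = rng.normal(1.0, 0.05, size=100)  # scattering from tissue alone
vesicle_signal = np.zeros(100)
vesicle_signal[40:50] = 0.8                          # bacteria with intact vesicles

# Frame acquired below the collapse pressure: tissue plus vesicle echoes
pre_collapse = tissue_background + vesicle_signal
# After a high-pressure pulse collapses the vesicles, only tissue echoes remain
post_collapse = tissue_background

# Subtracting the post-collapse frame isolates the vesicle contribution
difference = pre_collapse - post_collapse
print(int(np.argmax(difference)))  # an index in 40..49, where the bacteria sit
```

Because the tissue echoes are identical before and after collapse, the subtraction cancels the background almost exactly, leaving a clean map of where the vesicle-bearing bacteria are.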
Tracers such as luminescent bacteria and radioactive isotopes are used to take images of the inside of the body. However, these agents are often either disruptive to the body or unable to image deep below the skin. Shapiro’s bacteria would allow for imaging deep inside the gut, and even inside tumours. The technique could give detailed maps of internal organs, with spatial resolution as small as 100 µm.
Fine-tuned response
In further work, Shapiro and colleagues altered bacterial genetics further to create gas vesicles that produced different scattering signals, and collapsed at different pulse pressures. With multiple different types of bacteria being used as tracers, more complex biological images could be built up. They now hope to diversify the ultrasound capabilities of their bacteria by fine-tuning the acoustic response gene even further.
A substantial investment is needed to build the next generation of astroparticle physics research infrastructures, according to a roadmap published by the Astroparticle Physics European Consortium (APPEC). The document – a vision for European astroparticle physics for the next decade – makes 21 recommendations covering research as well as organisational and societal issues such as gender balance, education, public outreach and links with industry.
Astroparticle physics, which involves astronomy, particle physics and cosmology, addresses questions relating to both elementary particles and the evolution of celestial objects. It is a rapidly evolving field that has been awarded four Nobel prizes since 2001, including one for the recent direct detection of gravitational waves.
APPEC is a consortium of funding agencies, national government institutions and institutes from 16 European countries that was set up in 2001 to coordinate astroparticle physics research efforts. It published its first strategic vision for astroparticle physics in 2008 and its first roadmap in 2011. The new roadmap outlines its vision for the field from 2017–2026 and was launched at a meeting in Brussels on 9 January.
A collaborative effort
The roadmap highlights large-scale research on “cosmic messengers” as a high priority. These particles are emitted from the highest-energy cosmic sources and can provide detailed insights into the universe and how it functions. They include gamma rays, neutrinos, cosmic rays and gravitational waves. The strategy also calls for detailed studies of neutrinos – focusing on their mass and nature – as well as a focus on dark matter and dark energy.
To support these research areas, the roadmap calls for continued experimental efforts, collaboration and funding, particularly for big projects. With regard to large-scale multi-messenger infrastructures, APPEC signals its continued endorsement of a host of future projects, including the Cherenkov Telescope Array and the Cubic Kilometre Neutrino Telescope, as well as an upgrade to the Pierre Auger Observatory and the development of next-generation gravitational-wave interferometers such as the Einstein Telescope.
To support astroparticle physics research, APPEC also highlights the importance of theoretical work, detector research and development, and computing resources. While recognising that many European institutes are expanding their work on astroparticle physics theory, the report calls for the establishment of a European centre for astroparticle physics theory. The roadmap also encourages members to apply for European Union grants to fund detector R&D and states that future instruments will require massive computing resources. In addition, it encourages the use of standard data formats and the public release of data.
In a statement, Antonio Masiero, chair of APPEC and a physicist at the National Institute for Nuclear Physics in Italy, says that addressing the roadmap’s research priorities will tell us about the origins, evolution and structure of the universe as well as reshape our understanding of physics. “This work needs a collaborative effort and APPEC is at the centre of making sure that the European research community and experiments are at the global forefront of these discoveries,” he adds. “Building on the successful work of the previous APPEC roadmap in 2011, and based on recommendations from across the full breadth of the astroparticle physics research and funding communities, APPEC will collaborate with our colleagues to implement the strategy recommendations.”
New mobile nanotweezers that can capture and release sub-micron-sized particles in a fluid more quickly and efficiently than ever before have been constructed by researchers at the Indian Institute of Science in Bangalore. The devices, which are based on ferromagnetic helical nanostructures integrated with silver nanoparticles that produce a mechanical force in response to light, can be used to manipulate objects such as bacteria, colloidal beads and fluorescent nanodiamonds. They could find use in applications as diverse as lab-on-a-chip technology, microfluidics and nanoscale assembly.
Being able to manipulate nanoscale objects in liquid environments is one of the main goals of modern nanotechnology. Researchers usually do this by trapping particles with optical, acoustic, magnetic, electric or flow fields, and such technologies have led to breakthroughs in biophysics and microfluidics in recent years. However, since the trapping force decreases with the size of the object, these techniques struggle to control objects that are sub-micron in size.
Plasmonic tweezers (which work by exploiting the localized electromagnetic fields near metallic nanostructures) are a good alternative here, but their drawback is that they can only pick up and move nano-objects very slowly, since they are fixed in space. Another strategy is to make use of microrobots. These are micro- to nanosized motile particles driven by chemical reactions or externally applied magnetic fields, and although they can carry and push objects very quickly, they too cannot be used to manipulate nanoscale objects.
Combining plasmonic tweezers and microbots
A team led by Ambarish Ghosh says that it has now combined these two technologies. “As well as being able to carry small objects to various spots on a microfluidic device, our nanotweezers can also be localized with high spatial resolution and can be removed from a device if necessary,” explains Ghosh. “This should open up new avenues in nanoscale assembly that did not exist before.”
The microbots made by the researchers are based on screw-shaped ferromagnetic nanostructures. The team grew the structures using electron-beam evaporation of silicon dioxide on a pre-patterned substrate that was kept at an extreme angle to the incoming vapour flux and rotated slowly to achieve the helical shape. Finally, plasmonic silver nanoparticles were placed on certain regions of the helical body.
Optical trapping and microbot motion
“The plasmonic islands work as nanoscale antennas,” explains Ghosh, “so that when we illuminate them with a light beam, a strong trapping force is generated that can pull in nearby colloidal nanoparticles. This trapping force vanishes once the light source is switched off, which means that we can tune our trapping and releasing mechanism by turning the incident illumination on and off.”
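This switchable trapping is a gradient force: in the Rayleigh (small-particle) regime, a dielectric particle is pulled toward the region of highest field intensity, and the force vanishes when the light is off. The sketch below is not the authors' model – it is a generic 1-D illustration with a hypothetical Gaussian intensity hotspot and arbitrary units.

```python
import math

# Illustrative 1-D sketch (not the authors' model): in the Rayleigh
# regime a small dielectric particle near a plasmonic hotspot feels a
# gradient force F(x) = (alpha/2) * d|E(x)|^2/dx, which pulls it toward
# the intensity maximum. All numbers are hypothetical, arbitrary units.

def intensity(x: float, width: float = 1.0) -> float:
    """Gaussian |E|^2 profile around a hotspot centred at x = 0."""
    return math.exp(-(x / width) ** 2)

def gradient_force(x: float, alpha: float = 1.0, dx: float = 1e-6) -> float:
    """Central-difference estimate of (alpha/2) * d|E|^2/dx."""
    return 0.5 * alpha * (intensity(x + dx) - intensity(x - dx)) / (2 * dx)

# A particle at x > 0 feels a negative force (pulled back toward the
# hotspot at x = 0), and one at x < 0 feels a positive force.
# Switching off the illumination corresponds to alpha -> 0: no force.
print(gradient_force(0.5))               # negative: pulled toward x = 0
print(gradient_force(-0.5))              # positive: pulled toward x = 0
print(gradient_force(0.5, alpha=0.0))    # light off: force is zero
```

The on/off tunability Ghosh describes corresponds to the last line: removing the illumination removes the field gradient and hence the trap.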
The microbots can be manoeuvred in space using a rotating, homogeneous magnetic field, and their motion is very similar to the way that some microorganisms translate by rotating their helical flagella.
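For a rigid helix at low Reynolds number, this corkscrew-like motion has a simple kinematic consequence: below the so-called step-out frequency the helix advances roughly one screw pitch per field rotation, so its speed scales linearly with the rotation frequency. The numbers in this sketch are hypothetical and only illustrate that scaling; they are not taken from the paper.

```python
# Illustrative sketch (not from the paper): a rigid magnetic helix
# rotating synchronously with the applied field advances about one
# screw pitch per revolution, so v ≈ pitch × frequency.

def helix_speed(pitch_um: float, field_freq_hz: float) -> float:
    """Translation speed (um/s) of an ideal corkscrew-like swimmer."""
    return pitch_um * field_freq_hz

# Hypothetical numbers for a micron-scale helix:
pitch = 1.5   # screw pitch in micrometres
freq = 20.0   # rotating-field frequency in Hz
print(f"speed ≈ {helix_speed(pitch, freq):.1f} um/s")  # speed ≈ 30.0 um/s
```

In practice the speed also depends on drag and on the field strength (above the step-out frequency the helix can no longer keep up with the field), but the linear regime above captures why a rotating field gives such direct control over the microbot's velocity.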
Applications in microfluidics and nanoscale assembly
According to the researchers, the microbots might find use in microfluidics for positioning, sorting, transporting and collecting single particles as small as 100 nm, something that was not possible before. “Especially interesting is being able to trap and carry live bacteria, since this technique makes use of a very low light intensity beam (that does not damage the microorganisms),” says Ghosh.
“They might also be used in nanoscale assembly to place very small objects such as nanodiamonds and quantum dots on specific positions on a device. Being able to do this could be important for next-generation quantum technologies, sensing devices, nanolasers and many more.”
The team, reporting its work in Science Robotics (DOI: 10.1126/scirobotics.aaq0076), is now busy making multiple nanotweezers that work in parallel. “This will allow us to scale up our technology and will surely have some commercial impact,” Ghosh tells nanotechweb.org. “Indeed, initial results are promising.”