

Aurora Early Science Program Facilitates Effort To Create More Efficient Solar Cells

Credit: Unsplash https://unsplash.com/@publicpowerorg


Looking for new materials that make photovoltaic solar cells more efficient is a challenge that has taxed current supercomputing resources to the max. That’s why a number of academic institutions are collaborating with Carnegie Mellon University to tackle the task. These efforts hope to utilize Aurora, the forthcoming exascale computer at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, to further their research.

Argonne ESP selects projects to test run on Aurora before system launch

The Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility, hosts the Early Science Program (ESP) to ensure its next-generation systems are ready to hit the ground running. Fifteen ESP computational science and engineering research projects from national laboratories and universities were selected to prepare their research to run on Argonne’s Aurora exascale supercomputer, which is scheduled to launch next year. The data and learning projects support the ALCF’s efforts to create an environment that enables data science and machine learning approaches alongside traditional simulation-based research. The teams receive hands-on assistance to port and optimize their applications for the new architecture using systems available today, and early Aurora hardware when it is available.

Carnegie Mellon University project: researching potential materials to create more efficient solar cells

An ESP team led by Carnegie Mellon University plans to use Aurora to find materials that can increase the efficiency of solar cells. The Carnegie Mellon team uses machine learning tools extensively in its research and is working with the developers of the BerkeleyGW, SISSO, and Dragonfly software packages to prepare to run on the Aurora system. Noa Marom (Assistant Professor, Department of Materials Science and Engineering, Carnegie Mellon University) is the Principal Investigator on the project. Co-Principal Investigators include Jack Deslippe (Lawrence Berkeley National Laboratory, principal developer of the BerkeleyGW code); Luca Ghiringhelli (Fritz Haber Institute of the Max Planck Society, developer of the SISSO machine learning software); and Barnabás Póczos (Associate Professor, Machine Learning Department, Carnegie Mellon University).

According to Marom, “The goal of our research is to find new materials that make photovoltaic solar cells more efficient. The quest for any new materials that can enable new technologies is challenging. The materials we are researching have unique properties that make them suitable for use in solar cells, and these properties are very rare and difficult to find out of the wide array of possible materials. We are trying to accelerate the process of material discovery through computer simulation on high-performance computers (HPC) using sophisticated quantum-mechanical simulation software and machine learning (ML) tools. We are excited that our project has been accepted as one of the projects that will run on the future Aurora supercomputer as part of the Argonne ESP program. Our multi-institution team is currently modifying algorithms and workflows so they will be able to run on Aurora.”

The search to increase electric current in solar cells

Solar cells convert photons from the sun into electricity. When a photon is absorbed, an electron is promoted from an occupied state to an unoccupied state, leaving a positively charged “hole” behind. The electron and hole, attracted to each other by an electrostatic force, form a bound complex called an exciton. Excitons are separated and converted into electric current. Usually one photon is converted into one charge carrier. The Carnegie Mellon team is looking for materials that can undergo singlet fission (SF), a process by which one photogenerated singlet exciton (total spin zero, with antiparallel electron spins) is converted into two triplet excitons (total spin one, with parallel spins). This may significantly increase the efficiency of solar cells by harvesting two charge carriers from one photon. The goal of the research is to find the rare materials that can undergo SF to improve solar cell efficiency.
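The basic energetic requirement behind this process can be sketched in a few lines: singlet fission is exoergic only when the singlet exciton carries at least the energy of two triplets, E(S1) ≥ 2·E(T1). The snippet below is an illustrative check of that standard criterion, not the team's screening code; the example energies are roughly pentacene-like but included only for illustration.

```python
def singlet_fission_favorable(e_s1, e_t1, tolerance=0.0):
    """Thermodynamic criterion for singlet fission: the singlet exciton
    energy must cover the cost of two triplet excitons, E(S1) >= 2*E(T1).
    Energies in eV; `tolerance` (a hypothetical knob) admits slightly
    endothermic candidates."""
    return e_s1 - 2.0 * e_t1 >= -tolerance

# Illustrative, pentacene-like energies: E(S1) = 1.83 eV, E(T1) = 0.86 eV
print(singlet_fission_favorable(1.83, 0.86))  # True: 1.83 >= 2 * 0.86
print(singlet_fission_favorable(2.00, 1.20))  # False: 2.00 < 2 * 1.20
```

In a real screen, both energies come from the expensive excited-state simulations discussed below, which is exactly why the team turns to machine learning to avoid evaluating them for every candidate.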


Figure 1 shows a schematic of exciton harvesting with and without singlet fission (SF). A singlet exciton (green) is generated when one photon is absorbed. (A) Without SF, a singlet exciton only creates one carrier. (B) With SF, a singlet exciton converts into two triplet excitons (orange), generating two carriers. The arrows indicate spins. Credit: Xiaopeng Wang, Courtesy of Carnegie Mellon University

Using machine learning to accelerate discovery of singlet fission materials

According to Marom, finding materials that can undergo singlet fission is like looking for a needle in a haystack: “Our team uses machine learning (ML) tools to help accelerate the computer simulations used to discover new materials. You must run advanced theoretical models on supercomputers to predict whether a material will undergo singlet fission. These simulations are computationally expensive, meaning they take several million hours of computer time. It is not feasible to do large-scale materials screening and calculate hundreds of thousands of materials using the most accurate models and techniques. We use ML predictive tools to build low-cost models that are fast to calculate and strongly correlate with the more sophisticated simulations.”
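As a rough illustration of what such a low-cost surrogate looks like, the sketch below fits a linear model with NumPy to stand-in "simulation" data and measures how strongly its cheap predictions correlate with the expensive target. The descriptors, weights, and noise level are invented for the example and have nothing to do with the team's actual features or models.

```python
import numpy as np

# Stand-in data: each row is a material described by three cheap
# descriptors; y plays the role of an expensive simulated property.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # low-cost descriptors
true_w = np.array([0.8, -0.5, 0.3])            # hidden "physics" (invented)
y = X @ true_w + 0.05 * rng.normal(size=200)   # "expensive" target values

# Least-squares fit: the surrogate costs microseconds per material,
# whereas each real simulation would cost millions of core-hours.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ w

# A useful surrogate correlates strongly with the accurate simulation.
corr = np.corrcoef(pred, y)[0, 1]
print(f"surrogate/simulation correlation: {corr:.3f}")
```

The same idea scales up: once a fast model tracks the expensive calculations well, hundreds of thousands of candidates can be ranked cheaply and only the most promising ones passed to the full quantum-mechanical treatment.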

Multi-fidelity screening approach to materials discovery

The multi-fidelity screening approach developed at Carnegie Mellon integrates quantum mechanical simulations at two levels of fidelity with three ML models. Data acquisition with quantum mechanical simulations is the computational bottleneck in the process of materials discovery. This is due to the high computational cost of accurate simulations and the massive amount of data that must be sampled and analyzed. Candidate materials are sampled, not only to discover potential SF candidates but also to train the ML models. Improving predictions from ML models requires sampling a large range of materials including those that are not capable of SF. Machine learning is used to make decisions on which materials to sample and at what level of fidelity in order to maximize the chances of discovery and the information gain.
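A minimal sketch of such a sampling decision might look like the following. The rule, names, and thresholds here are hypothetical stand-ins for the team's actual acquisition strategy: confident, promising predictions earn an expensive high-fidelity run, while uncertain predictions get a cheap low-fidelity run whose main value is training data for the ML models.

```python
def choose_samples(candidates, promise_cut=0.7, uncertainty_cut=0.5):
    """candidates: list of (name, predicted_score, uncertainty) tuples
    from a surrogate model.  Returns which materials to simulate at
    high fidelity and which at low fidelity; the rest are skipped.
    Thresholds are illustrative, not the team's actual values."""
    high_fidelity, low_fidelity = [], []
    for name, score, sigma in candidates:
        if score >= promise_cut and sigma < uncertainty_cut:
            high_fidelity.append(name)   # likely SF material: verify accurately
        elif sigma >= uncertainty_cut:
            low_fidelity.append(name)    # informative: improves the ML models
    return high_fidelity, low_fidelity

high, low = choose_samples([
    ("mat-A", 0.9, 0.1),   # promising and confident -> high fidelity
    ("mat-B", 0.4, 0.8),   # uncertain -> cheap run, for information gain
    ("mat-C", 0.2, 0.1),   # confidently unpromising -> skip
])
print(high, low)  # ['mat-A'] ['mat-B']
```

Note that "mat-C" is deliberately dropped: spending simulation time only where it either confirms a discovery or teaches the models something is what keeps the data-acquisition bottleneck manageable.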

Supercomputers and software accelerate the research

The Carnegie Mellon University research is currently being done on the Theta supercomputer at Argonne National Laboratory and the Cori supercomputer at NERSC, two of the fastest HPC systems in the world used for scientific research. Cori and Theta are both Cray systems built on Intel Xeon and Xeon Phi processors.

Parallelization work is done on the BerkeleyGW code, one of the most accurate quantum mechanical simulation codes used for data acquisition. According to Deslippe, “Many common applications used in materials science and chemistry are based on density functional theory, which is only accurate for the properties of materials in their ground state (the lowest-energy state). BerkeleyGW is a materials science simulation package that can predict the excited-state properties of materials, that is, how materials respond to a stimulus such as photon absorption. The BerkeleyGW code is highly parallelized to run on the full Cori system at NERSC and is already optimized for Intel Xeon processors running on supercomputers. While the BerkeleyGW code is highly accurate, it was often considered expensive in terms of the computer time required to run it. Our team has optimized the BerkeleyGW code so that it is not only an accurate predictive tool but also scales to peak performance on modern architectures, which allows researchers to model up to tens of thousands of atoms—something that was previously impossible.”

Preparing for the Aurora supercomputer

The U.S. Department of Energy selected Intel and Cray to deliver Aurora, one of the nation’s first exascale supercomputers, to Argonne National Laboratory in 2021. Aurora will be based on a future generation of Intel Xeon Scalable processors, Intel Optane DC Persistent Memory, and a new Intel Xe GPU (graphics processing unit) architecture.


Figure 2. Aurora supercomputer, Courtesy of Argonne National Laboratory


The team at Carnegie Mellon is working to optimize their workflows in preparation for running on the Aurora system. William Huhn, Computer Scientist, Argonne Leadership Computing Facility, says, “The primary computational expense for our workflow is predicting optical excitation properties using the BerkeleyGW package. BerkeleyGW was written from the ground up to scale across thousands of HPC nodes, making it a natural candidate for the exascale capabilities of Aurora. The first-class support for machine learning frameworks on Aurora, in particular its extensive usage of Python libraries and ALCF's Balsam workflow manager, will prove valuable for the deployment of the machine learning workflow on Aurora.”
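As a generic illustration of this kind of workflow, and not Balsam's actual API, the sketch below fans stand-in simulation jobs out across local worker threads. `run_simulation` and its fake "excitation energy" output are invented for the example; a real workflow manager would dispatch BerkeleyGW jobs to HPC compute nodes instead.

```python
from concurrent.futures import ThreadPoolExecutor

def run_simulation(material):
    """Stand-in for one expensive BerkeleyGW-style job.  The returned
    'excitation energy' is fake (length of the name times 0.1 eV),
    purely to give the workflow something to collect."""
    return material, len(material) * 0.1

# Candidate materials to screen (names taken from the references above).
materials = ["tetracene", "pentacene", "quaterrylene"]

# Run jobs concurrently and gather results as they complete.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = dict(pool.map(run_simulation, materials))

for name, energy in results.items():
    print(f"{name}: {energy:.1f}")
```

The point of a manager like Balsam is that the same pattern, submit many independent jobs and harvest their outputs, scales from a laptop sketch like this one to thousands of exascale nodes.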

With the advanced machine learning and other capabilities of the new exascale computer at Argonne, the search for new materials that make photovoltaic solar cells more efficient should become far more tractable.


Linda Barney is the founder and owner of Barney and Associates, a technical/marketing writing, training and web design firm in Beaverton, OR.

This article was produced as part of Intel’s editorial program, with the goal of highlighting cutting-edge science, research and innovation driven by the HPC and AI communities through advanced technology. The publisher of the content has final editing rights and determines what articles are published.