Available Resources
CPB hardware
The Center for Predictive Bioresilience (CPB) leverages high-performance computing hardware and software to design safe, effective, and readily manufacturable therapeutics for emerging biothreats. Learn about the world-class tools available to CPB members who contribute to advancing our mission.
Tuolumne cluster
A sister system of Livermore’s El Capitan, Tuolumne combines powerful CPU processing with an innovative approach to GPU technology. It lets CPB members solve scientific problems faster by opening new ways to generate data and run calculations.
Mammoth cluster
Mammoth is a big-memory cluster that offers more memory capacity than any other machine at LLNL. It allows researchers to run experiments and calculations on complete data sets, radically reducing analysis time by increasing throughput.
CPB software and tools
CPB members use three types of software packages in their work: biophysical, machine learning, and workflow management. Biophysical codes once simply provided inputs for machine learning; today, machine learning codes trained on data from the biophysical codes are coupled back to them, creating closed-loop workflows that refine inputs and increase run speed.
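As a rough illustration of this closed loop, the sketch below couples a stand-in simulation to a toy surrogate model that chooses the next round of inputs. The names `run_biophysics`, `SurrogateModel`, and `propose_inputs` are hypothetical, not CPB code; production workflows couple real biophysical and ML codes through LC tooling.

```python
"""Minimal sketch of a closed biophysics/ML loop (illustrative only)."""
import numpy as np

rng = np.random.default_rng(0)

def run_biophysics(x: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for an expensive biophysical simulation."""
    return np.sin(3 * x).sum(axis=1) + 0.01 * rng.standard_normal(len(x))

class SurrogateModel:
    """Toy ML surrogate: a linear fit to simulation outputs."""
    def fit(self, x, y):
        self.w, *_ = np.linalg.lstsq(x, y, rcond=None)
    def predict(self, x):
        return x @ self.w

def propose_inputs(model, candidates, k=8):
    """Pick the candidates the surrogate scores highest for the next round."""
    return candidates[np.argsort(model.predict(candidates))[-k:]]

# Closed loop: simulate, train, and let the model refine what to simulate next.
x = rng.uniform(-1, 1, size=(16, 4))
model = SurrogateModel()
for _ in range(3):
    y = run_biophysics(x)                  # biophysical code generates data
    model.fit(x, y)                        # ML code trains on that data
    candidates = rng.uniform(-1, 1, size=(256, 4))
    x = propose_inputs(model, candidates)  # ML refines the next inputs
```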

Machine learning frameworks
Our machine learning (ML) frameworks use data from biophysical codes and other sources to conduct cutting-edge computational experiments.

Workflow development
CPB-designed tools allow us to create highly optimized workflows in the Livermore Computing (LC) environment. These workflows generate data as well as visualize and analyze it.
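To make the idea concrete, here is a minimal sketch of a dependency-ordered pipeline; the `Step` class and step names are hypothetical, and real CPB workflows run through LC's batch and workflow tooling rather than a standalone script like this.

```python
"""Toy dependency-ordered workflow (illustrative; not a CPB tool)."""
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Step:
    name: str
    action: Callable[[], None]
    deps: list[str] = field(default_factory=list)

def run_workflow(steps: list[Step]) -> None:
    """Run steps in an order that respects their declared dependencies."""
    done: set[str] = set()
    pending = {s.name: s for s in steps}
    while pending:
        ready = [s for s in pending.values() if all(d in done for d in s.deps)]
        if not ready:
            raise RuntimeError("cyclic or unsatisfiable dependencies")
        for step in ready:
            step.action()
            done.add(step.name)
            del pending[step.name]

run_workflow([
    Step("simulate", lambda: print("run MD/docking batch")),
    Step("analyze", lambda: print("extract features"), deps=["simulate"]),
    Step("train", lambda: print("train ML surrogate"), deps=["analyze"]),
    Step("visualize", lambda: print("render plots"), deps=["analyze"]),
])
```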

Data science
Our machine learning frameworks and unique data utilization tools let us generate novel hypotheses across a broad range of challenges in materials science, healthcare, and biosecurity.

Simulation and analysis
We use HPC-enabled molecular dynamics engines and docking tools that run at scale in the LC environment. Visualization and analysis tools are available for extremely large datasets.
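For a flavor of what such a run looks like, below is a minimal molecular dynamics setup using OpenMM as an illustrative engine; the source does not name CPB's specific engines, and "input.pdb" is a placeholder for a solvated structure with box vectors.

```python
"""Minimal MD run with OpenMM, shown only as an illustrative engine."""
from openmm import LangevinMiddleIntegrator, app, unit

pdb = app.PDBFile("input.pdb")  # placeholder: solvated structure with box vectors
forcefield = app.ForceField("amber14-all.xml", "amber14/tip3pfb.xml")
system = forcefield.createSystem(
    pdb.topology,
    nonbondedMethod=app.PME,
    nonbondedCutoff=1.0 * unit.nanometer,
    constraints=app.HBonds,
)
integrator = LangevinMiddleIntegrator(
    300 * unit.kelvin, 1.0 / unit.picosecond, 0.002 * unit.picoseconds
)
simulation = app.Simulation(pdb.topology, system, integrator)
simulation.context.setPositions(pdb.positions)
simulation.minimizeEnergy()  # relax the starting structure
simulation.reporters.append(
    app.StateDataReporter("md_log.csv", 1000, step=True,
                          potentialEnergy=True, temperature=True)
)
simulation.step(10_000)  # short run; production jobs scale out on LC clusters
```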

Data ecosystem
LLNL is building a robust data ecosystem to enable development and distribution of high-quality, hard-to-reproduce data for external partners in academia, industry, and other DOE laboratories. LLNL currently has ecosystems in place for local data sharing; the long-term plan is to extend them into a single ecosystem that enables both internal and external collaboration on data.
Research areas supported
- Human genomics
- Pathogen genomics
- Chemical structure and descriptors
- Protein structure and interactions
- Knowledge networks
Data types
- Machine learning modeling
- Experimental
- Physics-based simulation
- DNA sequence databases
- Protein sequence, structure, and interaction databases
- Chemical-biology interaction databases
Experimental platforms
We are developing an ultra-high-throughput data generation engine that produces high-quality datasets at unprecedented scale, enabling the training of generalizable machine learning algorithms for biologics design.
Our initial focus is high-resolution profiling of the antigen/antibody interaction landscape; ongoing methods development and integration with active learning will enable rapid characterization and AI-based design of powerful biologics.
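To sketch how active learning might plug into such a data engine, the toy example below uses ensemble disagreement to pick which hypothetical antibody variants to assay next; `measure_binding`, the feature encoding, and the batch size are all assumptions, not CPB methods.

```python
"""Toy active-learning selection step (illustrative only)."""
import numpy as np

rng = np.random.default_rng(1)

def measure_binding(variants: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a high-throughput binding assay."""
    true_w = np.array([0.8, -0.3, 0.5, 0.1])
    return variants @ true_w + 0.05 * rng.standard_normal(len(variants))

# Fit a small bootstrap ensemble of linear models on the labeled pool.
labeled = rng.uniform(0, 1, size=(32, 4))    # toy variant features
labels = measure_binding(labeled)
models = []
for _ in range(5):
    idx = rng.integers(0, len(labeled), len(labeled))  # bootstrap resample
    w, *_ = np.linalg.lstsq(labeled[idx], labels[idx], rcond=None)
    models.append(w)

# Query where the ensemble disagrees most: those assays are most informative.
candidates = rng.uniform(0, 1, size=(10_000, 4))
preds = np.stack([candidates @ w for w in models])
uncertainty = preds.std(axis=0)
next_batch = candidates[np.argsort(uncertainty)[-96:]]  # e.g. one 96-well plate
print(next_batch.shape)  # (96, 4): the variants to assay in the next round
```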
Experimental capabilities
- Liquid handling robotics
- Rapid cell sorting
- Droplet microfluidics
- Array-based cell-free protein synthesis
- High-throughput imaging and sequencing
- Large library screening and development
- Microbial detection
- Human experimental models

Work with the Center for Predictive Bioresilience
Collaboration is a key element of CPB’s success. Learn more about how we work with industry, academia, and national laboratories to extend and refine our unique approach to medical countermeasure design.