You can run this notebook online in a Binder session or view it on GitHub.

QCFractal

Full QCFractal documentation is available.

This tutorial will go over general QCFractal usage to give a feel for the ecosystem. In this tutorial, we employ Snowflake, a simple QCFractal stack which runs on a local machine for demonstration and exploration purposes.

Installation

To begin this quickstart tutorial, first install the QCArchive Snowflake environment from conda:

conda env create qcarchive/qcf-snowflake -n snowflake
conda activate snowflake

If you have a pre-existing environment with qcfractal, ensure that rdkit and geometric are installed from the conda-forge channel and psi4 from the psi4 channel. Note that QCFractal does not ship with any compute backend by default; backends must be installed individually.

Importing QCFractal

First let us import two items from the ecosystem:

Typically we alias qcportal as ptl. We will do the same for qcfractal.interface so that the code can be used anywhere.

[1]:
from qcfractal import FractalSnowflakeHandler
import qcfractal.interface as ptl

We can now build a temporary server which acts just like a normal server, but we have a bit more direct control of it.

Warning! All data is lost when this notebook shuts down! This is for demonstration purposes only! For information about how to set up a permanent QCFractal server, see the Setup Quickstart Guide.

[2]:
server = FractalSnowflakeHandler()
server
[2]:

FractalSnowflakeHandler

  • Server:   db_eca84388_570c_449a_9b72_44ac0885ea66
  • Address:   https://localhost:55332

We can then build a typical FractalClient to automatically connect to this server using the client() helper command. Note that the server names and addresses are identical in both the server and client.

[3]:
client = server.client()
client
[3]:

FractalClient

  • Server:   db_eca84388_570c_449a_9b72_44ac0885ea66
  • Address:   https://localhost:55332/
  • Username:   None

Adding and Querying data

A server starts with no data, so let’s add some! We can do this by adding a water molecule at a poor geometry from XYZ coordinates. Note that all internal QCFractal values are stored and used in atomic units, whereas the standard Molecule.from_data() assumes an input in angstroms. We can switch this to bohr by adding a units line to the text string.

[4]:
mol = ptl.Molecule.from_data("""
O 0 0 0
H 0 0 2
H 0 2 0
units bohr
""")
mol


[4]:
<Molecule(name='H2O' formula='H2O' hash='58e5adb')>

We can then measure various aspects of this molecule to determine its shape. Note that the measure command will provide a distance, angle, or dihedral depending on whether 2, 3, or 4 indices are passed in.

This molecule is quite far from optimal, so let’s run a geometry optimization!

[5]:
print(mol.measure([0, 1]))
print(mol.measure([1, 0, 2]))
2.0
90.0
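Since the molecule was entered in bohr, the 2.0 above is an O-H distance in atomic units. As a quick sanity check, a small conversion helper (illustrative only, not part of QCFractal's API; the factor is the CODATA bohr radius) shows how far this is from a typical water bond length:

```python
# Illustrative helper, not part of QCFractal: convert bohr to angstrom.
BOHR_TO_ANGSTROM = 0.52917721067  # CODATA bohr radius in angstrom

def bohr_to_angstrom(length_bohr):
    return length_bohr * BOHR_TO_ANGSTROM

# The 2.0 bohr O-H distance above is about 1.06 angstrom,
# noticeably longer than the ~0.96 angstrom expected for water.
print(round(bohr_to_angstrom(2.0), 3))  # prints 1.058
```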

Evaluating a Geometry Optimization

We originally installed psi4 and geometric, so we can use these programs to perform a geometry optimization. In QCFractal, we call a geometry optimization a procedure, where procedure is a generic term for a higher level operation that will run multiple individual quantum chemistry energy, gradient, or Hessian evaluations. Other procedure examples are finite-difference computations, n-body computations, and torsiondrives.

We provide a JSON-like input to the client.add_procedure() command to specify the method, basis, and program to be used. The qc_spec field is used in all procedures to determine the underlying quantum chemistry method behind the individual procedure. In this way, we can use any program or method that returns an energy or gradient quantity to run our geometry optimization! (See also add_compute().)

[6]:
spec = {
    "keywords": None,
    "qc_spec": {
        "driver": "gradient",
        "method": "b3lyp-d3",
        "basis": "6-31g",
        "program": "psi4"
    },
}

# Ask the server to compute a new computation
r = client.add_procedure("optimization", "geometric", spec, [mol])
print(r)
print(r.ids)
ComputeResponse(nsubmitted=1 nexisting=0)
['1']

We can see that we submitted a single task to be evaluated and the server has not seen this particular procedure before. The ids field returns the unique id of the procedure. Different procedures will always have a unique id, while identical procedures will always return the same id. We can submit the same procedure again to see this effect:

[7]:
r2 = client.add_procedure("optimization", "geometric", spec, [mol])
print(r2)
print(r2.ids)
ComputeResponse(nsubmitted=0 nexisting=1)
['1']
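This deduplication works because a procedure's identity is derived from its inputs. A toy illustration of the idea (this is not QCFractal's actual hashing scheme; the function and key format here are hypothetical):

```python
import hashlib
import json

def toy_task_key(spec, molecule_hash):
    """Hypothetical sketch: derive a key from the canonicalized inputs."""
    payload = json.dumps({"spec": spec, "mol": molecule_hash}, sort_keys=True)
    return hashlib.sha1(payload.encode()).hexdigest()[:7]

first = toy_task_key({"method": "b3lyp-d3", "basis": "6-31g"}, "58e5adb")
second = toy_task_key({"method": "b3lyp-d3", "basis": "6-31g"}, "58e5adb")
print(first == second)  # prints True: identical inputs yield the same key
```

Because the key depends only on the inputs, resubmitting the identical procedure is a no-op on the server side.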

Querying Procedures

Once a task is submitted, it will be placed in the compute queue and evaluated. In this particular case the FractalSnowflakeHandler uses your local hardware to evaluate these jobs. We recommend avoiding large tasks!

In general, the server can handle anything from laptop-scale resources to many hundreds of thousands of concurrent cores at many physical locations. The amount of resources to connect is up to you and the amount of compute that you require.

Since we submitted a very small job, it is likely complete by now. Let us query this procedure from the server using its id like so:

[10]:
proc = client.query_procedures(id=r.ids)[0]
proc
[10]:
<OptimizationRecord(id='1' status='COMPLETE')>

This OptimizationRecord object has many different fields attached to it so that all quantities involved in the computation can be explored. For this example, let us pull the final molecule (optimized structure) and inspect the physical dimensions.

Note: if the status does not say COMPLETE, these fields will not be available. Try querying the procedure again in a few seconds to see if the task completed in the background.
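Rather than re-querying by hand, a small polling loop can wait for completion. The helper below is a generic sketch, not part of the QCFractal API; the exact form of the status comparison may depend on your client version:

```python
import time

def wait_for(fetch, is_done, timeout=60.0, interval=2.0):
    """Poll fetch() until is_done(result) is true or timeout elapses.

    Generic sketch, not part of the QCFractal API.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = fetch()
        if is_done(result):
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("task did not finish within the timeout")
        time.sleep(interval)

# Toy demonstration with a fake task that completes on the third poll:
state = {"polls": 0}

def fake_fetch():
    state["polls"] += 1
    return "COMPLETE" if state["polls"] >= 3 else "RUNNING"

result = wait_for(fake_fetch, lambda s: s == "COMPLETE", timeout=5.0, interval=0.01)
print(result)  # prints COMPLETE
```

Against this server you might call it as `wait_for(lambda: client.query_procedures(id=r.ids)[0], lambda p: p.status == "COMPLETE")`.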

[11]:
final_mol = proc.get_final_molecule()
[12]:
print(final_mol.measure([0, 1]))
print(final_mol.measure([1, 0, 2]))
final_mol
1.8441303967690752
108.31440111097281


[12]:
<Molecule(name='H2O' formula='H2O' hash='573ea85')>

This water molecule has bond length and angle dimensions much closer to expected values. We can also plot the optimization history to see how each step in the geometry optimization affected the results. Though the chart is not too impressive for this simple molecule, it is hopefully illuminating and is available for any geometry optimization ever completed.

[13]:
proc.show_history()

Collections

Submitting individual procedures or single quantum chemistry tasks is not typically done, as individual tasks quickly become hard to track. To help resolve this, Collections are different ways of organizing standard computations so that many tasks can be referenced in a more human-friendly way. In this particular case, we will be exploring an intermolecular potential dataset.

To begin, we will create a new dataset and add a few intermolecular interactions to it.

[14]:
ds = ptl.collections.ReactionDataset("My IE Dataset", ds_type="ie", client=client, default_program="psi4")

We can construct a water dimer whose fragments, separated by the -- divider, are used in the intermolecular computation. A single water molecule with ghost atoms can be extracted like so:

[15]:
water_dimer = ptl.Molecule.from_data("""
O 0.000000 0.000000  0.000000
H 0.758602 0.000000  0.504284
H 0.260455 0.000000 -0.872893
--
O 3.000000 0.500000  0.000000
H 3.758602 0.500000  0.504284
H 3.260455 0.500000 -0.872893
""")

water_dimer.get_fragment(0, 1)


[15]:
<Molecule(name='H4O2 ([0],[1])' formula='H4O2' hash='8248da9')>

Many molecular entries can be added to this dataset, where each entry is an intermolecular complex given a unique name. In addition, the add_ie_rxn method can automatically fragment molecules.

[16]:
ds.add_ie_rxn("water dimer", water_dimer)
ds.add_ie_rxn("helium dimer", """
He 0 0 -3
--
He 0 0 3
""")
[16]:
<ReactionRecord(ProtoModel)>

Once the Collection is created, it can be saved to the server so that it can always be retrieved at a future date:

[17]:
ds.save()
[17]:
'1'

The client can list all Collections currently on the server and retrieve collections to be manipulated:

[18]:
client.list_collections()
[18]:
                               tagline
collection      name
ReactionDataset My IE Dataset     None
[19]:
ds = client.get_collection("ReactionDataset", "My IE Dataset")

Computing with collections

Computational methods can be applied to all of the reactions in the dataset with just a few simple lines:

[20]:
ds.compute("B3LYP-D3", "def2-SVP")
[20]:
<ComputeResponse(nsubmitted=10 nexisting=0)>

By default this collection evaluates the non-counterpoise-corrected interaction energy, which typically requires three computations per entry (the complex and each monomer). In this case we compute B3LYP and the -D3 additive correction separately, nominally 12 total computations. However, the collection is smart enough to recognize that the two helium monomers are identical and do not need to be computed twice, reducing the total number of computations to 10 as shown here. We can continue to compute additional methods. Again, this is being evaluated on your computer! Be careful of the compute requirements.
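The bookkeeping in that paragraph can be sketched with plain Python. The labels below are illustrative (QCFractal deduplicates by molecule hash, not by name), and no quantum chemistry is run:

```python
# Illustrative task count only; no quantum chemistry is run here.
# Each entry needs the complex plus each unique monomer, for each method.
entries = {
    "water dimer": ["water(a)", "water(b)"],  # two distinct geometries
    "helium dimer": ["He", "He"],             # two identical monomers
}
methods = ["b3lyp", "b3lyp-d3"]  # base method plus the -D3 correction

tasks = set()
for name, monomers in entries.items():
    for method in methods:
        tasks.add((name, method))         # the complex itself
        for monomer in monomers:
            tasks.add((monomer, method))  # duplicates collapse in the set

print(len(tasks))  # prints 10: two of the nominal 12 are duplicate He monomers
```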

[21]:
ds.compute("PBE-D3", "def2-SVP")
[21]:
<ComputeResponse(nsubmitted=10 nexisting=0)>

A list of all methods that have been computed for this dataset can also be shown:

[22]:
ds.list_history()
[22]:
                                          stoichiometry
driver program method   basis    keywords
energy psi4    b3lyp    def2-svp None           default
               b3lyp-d3 def2-svp None           default
               pbe      def2-svp None           default
               pbe-d3   def2-svp None           default

The above only shows what has been computed and does not pull this data from the server to your computer. To do so, the get_history command can be used. Because the computations run in the background, we may need to wait a bit before querying the history again once they are complete. The force=True flag re-queries the database rather than using cached data.

[23]:
ds.get_history(force=True)
[23]:
   driver program   method    basis keywords stoichiometry               name
0  energy    psi4    b3lyp def2-svp      NaN       default    B3LYP/def2-svp
1  energy    psi4 b3lyp-d3 def2-svp      NaN       default B3LYP-D3/def2-svp
2  energy    psi4      pbe def2-svp      NaN       default      PBE/def2-svp
3  energy    psi4   pbe-d3 def2-svp      NaN       default   PBE-D3/def2-svp

Underlying the Collection is a pandas DataFrame which can show all results:

[25]:
print(f"DataFrame units: {ds.units}")
ds.df
DataFrame units: kcal / mol
[25]:
              B3LYP/def2-svp B3LYP-D3/def2-svp PBE/def2-svp PBE-D3/def2-svp
water dimer        -4.751916         -5.577718    -5.115871       -5.632224
helium dimer       -0.000346         -0.000848    -0.000387       -0.000864

You can also visualize results and more!

[26]:
ds.visualize(["B3LYP-D3", "PBE-D3"], "def2-SVP", bench="B3LYP/def2-svp", kind="violin")

Conclusion

These are just some of the capabilities QCFractal offers; check out the full documentation for more. If you like the project, consider starring it on GitHub, and if you have any questions, join our Slack channel.