Gridoptimization Records#

serialize_key(key)[source]#

Serializes the key used to map to optimization calculations

A string key is used for preoptimization

Parameters:

key (str | Sequence[int]) – A string or sequence of integers denoting the position in the grid

Returns:

A string representation of the key

Return type:

str

deserialize_key(key)[source]#

Deserializes the key used to map to optimization calculations

This turns the key back into a form usable for creating constraints

Parameters:

key (str)

Return type:

str | Tuple[int, …]
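
For illustration, a minimal sketch of the expected round trip between these two helpers. The import path and the exact serialized string format are assumptions; only the behavior documented above is relied on:

    from qcportal.gridoptimization import serialize_key, deserialize_key

    # A grid point is identified by one integer per scan dimension
    skey = serialize_key((1, 2))   # some string representation of the grid point (1, 2)

    key = deserialize_key(skey)    # back to a form usable for creating constraints
    print(key)                     # expected: (1, 2)

    # The preoptimization uses a plain string key, which should pass through unchanged
    print(deserialize_key(serialize_key("preoptimization")))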

class ScanTypeEnum(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]#

Bases: str, Enum

The type of scan to perform. The choices are limited to the scan types allowed by the scan dimensions.

distance = 'distance'#
angle = 'angle'#
dihedral = 'dihedral'#
class StepTypeEnum(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]#

Bases: str, Enum

The types of steps to take in a scan dimension: either in absolute or relative terms. relative indicates that the values are relative to the starting value (e.g., if a bond starts at 2.1 Bohr, relative steps of [-0.1, 0, 1.0] indicate grid points of [2.0, 2.1, 3.1] Bohr). An absolute step_type will use exactly those values instead.

absolute = 'absolute'#
relative = 'relative'#
pydantic model ScanDimension[source]#

Bases: BaseModel

A full description of a dimension to scan over.

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

Config:
  • extra: Extra = Extra.forbid

Fields:
Validators:
field type: ScanTypeEnum [Required]#

The type of scan to perform. The choices are limited to the scan types allowed by the scan dimensions.

Validated by:
field indices: List[int] [Required]#

The indices of the atoms to select for the scan. The number of indices depends on the scan type; e.g., distances, angles, and dihedrals require 2, 3, and 4 atoms, respectively.

Validated by:
field steps: List[float] [Required]#

Step sizes to scan, relative to the current location in the scan. This must be a strictly monotonic series.

Validated by:
field step_type: StepTypeEnum [Required]#

The types of steps to take in a scan dimension: either in absolute or relative terms. relative indicates that the values are relative to the starting value (e.g., if a bond starts at 2.1 Bohr, relative steps of [-0.1, 0, 1.0] indicate grid points of [2.0, 2.1, 3.1] Bohr). An absolute step_type will use exactly those values instead.

Validated by:
validator check_lower_type_step_type  »  type, step_type[source]#
validator check_indices  »  indices[source]#
validator check_steps  »  steps[source]#
pydantic model GridoptimizationKeywords[source]#

Bases: BaseModel

Keywords for grid optimizations

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

Config:
  • extra: Extra = Extra.forbid

Fields:
field scans: List[ScanDimension] = []#

The dimensions to scan (along with their options) for the gridoptimization.

field preoptimization: bool = True#

If True, first runs an unrestricted optimization before starting the grid computations. This is especially useful when combined with relative step_types.
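
For illustration, a minimal sketch of building scan dimensions and keywords for a grid optimization. The import path is assumed, and the atom indices, step values, and units are placeholders:

    from qcportal.gridoptimization import ScanDimension, GridoptimizationKeywords

    # Scan a bond distance (2 atom indices) in steps relative to the starting geometry
    bond_scan = ScanDimension(
        type="distance",
        indices=[0, 1],
        steps=[-0.2, -0.1, 0.0, 0.1, 0.2],   # must be strictly monotonic
        step_type="relative",
    )

    # Scan a dihedral (4 atom indices) at absolute values (degrees assumed for dihedrals)
    dihedral_scan = ScanDimension(
        type="dihedral",
        indices=[0, 1, 2, 3],
        steps=[0.0, 90.0, 180.0],
        step_type="absolute",
    )

    go_keywords = GridoptimizationKeywords(
        scans=[bond_scan, dihedral_scan],
        preoptimization=True,   # optimize without constraints before starting the grid
    )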

pydantic model GridoptimizationSpecification[source]#

Bases: BaseModel

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

Config:
  • extra: Extra = Extra.forbid

Fields:
field program: ConstrainedStrValue = 'gridoptimization'#
field optimization_specification: OptimizationSpecification [Required]#
field keywords: GridoptimizationKeywords [Required]#
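
As a sketch, the keywords built in the previous example can be wrapped into a full specification. The nested OptimizationSpecification/QCSpecification field names and import paths follow the usual QCPortal models but are assumptions here:

    from qcportal.gridoptimization import GridoptimizationSpecification
    from qcportal.optimization import OptimizationSpecification
    from qcportal.singlepoint import QCSpecification   # import paths assumed

    go_spec = GridoptimizationSpecification(
        optimization_specification=OptimizationSpecification(
            program="geometric",
            qc_specification=QCSpecification(
                program="psi4",
                driver="deferred",      # optimizations defer the driver choice (assumed)
                method="b3lyp",
                basis="def2-svp",
            ),
        ),
        keywords=go_keywords,   # the GridoptimizationKeywords built above
    )
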
pydantic model GridoptimizationAddBody[source]#

Bases: RecordAddBodyBase

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

Config:
  • extra: Extra = Extra.forbid

  • validate_assignment: bool = True

Fields:
field specification: GridoptimizationSpecification [Required]#
field initial_molecules: List[int | Molecule] [Required]#
field tag: constr(to_lower=True) [Required]#
field priority: PriorityEnum [Required]#
field owner_group: str | None = None#
field find_existing: bool = True#
pydantic model GridoptimizationQueryFilters[source]#

Bases: RecordQueryFilters

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

Config:
  • extra: Extra = Extra.forbid

  • validate_assignment: bool = True

Fields:
Validators:
field program: List[str] | None = None#
field optimization_program: List[str] | None = None#
field qc_program: List[constr(to_lower=True)] | None = None#
field qc_method: List[constr(to_lower=True)] | None = None#
field qc_basis: List[constr(to_lower=True) | None] | None = None#
Validated by:
  • _convert_basis

field initial_molecule_id: List[int] | None = None#
validator parse_dates  »  created_before, created_after, modified_after, modified_before#
validator validate_lists  »  limit, cursor#
field record_id: List[int] | None = None#
field record_type: List[str] | None = None#
field manager_name: List[str] | None = None#
field status: List[RecordStatusEnum] | None = None#
field dataset_id: List[int] | None = None#
field parent_id: List[int] | None = None#
field child_id: List[int] | None = None#
field created_before: datetime | None = None#
Validated by:
  • parse_dates

field created_after: datetime | None = None#
Validated by:
  • parse_dates

field modified_before: datetime | None = None#
Validated by:
  • parse_dates

field modified_after: datetime | None = None#
Validated by:
  • parse_dates

field owner_user: List[int | str] | None = None#
field owner_group: List[int | str] | None = None#
field limit: int | None = None#
Validated by:
  • validate_lists

field cursor: int | None = None#
Validated by:
  • validate_lists
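
These filters correspond to the arguments typically exposed through PortalClient.query_gridoptimizations. A minimal sketch, assuming a connected client; the server address and filter values are placeholders:

    from qcportal import PortalClient

    client = PortalClient("https://example.qcarchive.server")   # placeholder address

    # Iterate over matching gridoptimization records
    for rec in client.query_gridoptimizations(qc_program=["psi4"], qc_method=["b3lyp"], limit=10):
        print(rec.id, rec.status)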

pydantic model GridoptimizationOptimization[source]#

Bases: BaseModel

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

Config:
  • extra: Extra = Extra.forbid

Fields:
field optimization_id: int [Required]#
field key: str [Required]#
field energy: float | None = None#
pydantic model GridoptimizationRecord[source]#

Bases: BaseRecord

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

Config:
  • allow_mutation: bool = True

  • extra: Extra = Extra.forbid

  • validate_assignment: bool = True

Fields:
Validators:
field record_type: Literal['gridoptimization'] = 'gridoptimization'#
field specification: GridoptimizationSpecification [Required]#
field starting_grid: List[int] | None = None#
field initial_molecule_id: int [Required]#
field starting_molecule_id: int | None = None#
field initial_molecule_: Molecule | None = None (alias 'initial_molecule')#
field starting_molecule_: Molecule | None = None (alias 'starting_molecule')#
field optimizations_: List[GridoptimizationOptimization] | None = None (alias 'optimizations')#
propagate_client(client)[source]#

Propagates a client and related information to any fields within this record that need it

This is expected to be called from derived class propagate_client functions as well

property initial_molecule: Molecule#
property starting_molecule: Molecule | None#
property optimizations: Dict[Any, OptimizationRecord]#
property preoptimization: OptimizationRecord | None#
property final_energies: Dict[Tuple[int, ...], float]#
property children_errors: List[BaseRecord]#

Returns errored child records

property children_status: Dict[RecordStatusEnum, int]#

Returns a dictionary of the status of all children of this record

property comments: List[RecordComment] | None#
property compute_history: List[ComputeHistory]#
property error: Dict[str, Any] | None#
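
A short sketch of using the properties above on a completed record. The get_gridoptimizations call and the record ID are assumptions; any GridoptimizationRecord obtained from a dataset or query works the same way:

    from qcportal import PortalClient

    client = PortalClient("https://example.qcarchive.server")   # placeholder address
    rec = client.get_gridoptimizations(123)                     # placeholder record id; method name assumed

    # Final energies are keyed by grid-point tuples (one integer per scan dimension)
    for grid_key, energy in rec.final_energies.items():
        print(grid_key, energy)

    # The unconstrained preoptimization, if one was requested
    if rec.preoptimization is not None:
        print("preoptimization record id:", rec.preoptimization.id)
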
fetch_children(include=None, force_fetch=False)#

Fetches all children of this record recursively

Parameters:
  • include (Iterable[str] | None)

  • force_fetch (bool)

classmethod fetch_children_multi(records, include=None, force_fetch=False)#

Fetches all children of the given records

This tries to work efficiently, fetching larger batches of children that can span multiple records

Parameters:
  • records (Iterable[BaseRecord | None])

  • include (Iterable[str] | None)

  • force_fetch (bool)

classmethod get_subclass(record_type)#

Obtain a subclass of this class given its record_type

Parameters:

record_type (str)

Return type:

Type[BaseRecord]

get_waiting_reason()#
Return type:

Dict[str, Any]

property native_files: Dict[str, NativeFile] | None#
property offline: bool#
property provenance: Provenance | None#
property service: RecordService | None#
property stderr: str | None#
property stdout: str | None#
sync_to_cache(detach=False)#

Syncs this record to the cache

If detach is True, then the record will be removed from the cache

Parameters:

detach (bool)

property task: RecordTask | None#
field id: int [Required]#
field is_service: bool [Required]#
field properties: Dict[str, Any] | None = None#
field extras: Dict[str, Any] = {}#
Validated by:
  • _validate_extras

field status: RecordStatusEnum [Required]#
field manager_name: str | None = None#
field created_on: datetime [Required]#
field modified_on: datetime [Required]#
field owner_user: str | None = None#
field owner_group: str | None = None#
field compute_history_: List[ComputeHistory] | None = None (alias 'compute_history')#
field task_: RecordTask | None = None (alias 'task')#
field service_: RecordService | None = None (alias 'service')#
field comments_: List[RecordComment] | None = None (alias 'comments')#
field native_files_: Dict[str, NativeFile] | None = None (alias 'native_files')#
pydantic model GridoptimizationDatasetNewEntry[source]#

Bases: BaseModel

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

Config:
  • extra: Extra = Extra.forbid

Fields:
field name: str [Required]#
field initial_molecule: Molecule | int [Required]#
field additional_keywords: Dict[str, Any] = {}#
field additional_optimization_keywords: Dict[str, Any] = {}#
field attributes: Dict[str, Any] = {}#
field comment: str | None = None#
pydantic model GridoptimizationDatasetEntry[source]#

Bases: GridoptimizationDatasetNewEntry

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

Config:
  • extra: Extra = Extra.forbid

Fields:
field initial_molecule: Molecule [Required]#
field name: str [Required]#
field additional_keywords: Dict[str, Any] = {}#
field additional_optimization_keywords: Dict[str, Any] = {}#
field attributes: Dict[str, Any] = {}#
field comment: str | None = None#
pydantic model GridoptimizationDatasetSpecification[source]#

Bases: BaseModel

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

Config:
  • extra: Extra = Extra.forbid

Fields:
field name: str [Required]#
field specification: GridoptimizationSpecification [Required]#
field description: str | None = None#
pydantic model GridoptimizationDatasetRecordItem[source]#

Bases: BaseModel

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

Config:
  • extra: Extra = Extra.forbid

Fields:
field entry_name: str [Required]#
field specification_name: str [Required]#
field record_id: int [Required]#
field record: GridoptimizationRecord | None = None#
pydantic model GridoptimizationDataset[source]#

Bases: BaseDataset

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

Config:
  • allow_mutation: bool = True

  • extra: Extra = Extra.forbid

  • validate_assignment: bool = True

Fields:
field dataset_type: Literal['gridoptimization'] = 'gridoptimization'#
add_specification(name, specification, description=None)[source]#
Parameters:
Return type:

InsertMetadata

add_entries(entries)[source]#
Parameters:

entries (GridoptimizationDatasetNewEntry | Iterable[GridoptimizationDatasetNewEntry])

Return type:

InsertMetadata

add_entry(name, initial_molecule, additional_keywords=None, additional_optimization_keywords=None, attributes=None, comment=None)[source]#
Parameters:
  • name (str)

  • initial_molecule (Molecule | int)

  • additional_keywords (Dict[str, Any] | None)

  • additional_optimization_keywords (Dict[str, Any] | None)

  • attributes (Dict[str, Any] | None)

  • comment (str | None)
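
A minimal sketch of setting up a gridoptimization dataset with the methods above. The server address, dataset name, molecule, and specification name are placeholders, and go_spec is a GridoptimizationSpecification as sketched earlier on this page:

    from qcportal import PortalClient
    from qcportal.molecules import Molecule   # import path assumed

    client = PortalClient("https://example.qcarchive.server")    # placeholder address
    ds = client.add_dataset("gridoptimization", "Example GO dataset")

    ds.add_specification(name="b3lyp/def2-svp", specification=go_spec)

    h2 = Molecule.from_data("H 0 0 0\nH 0 0 0.75")   # a tiny placeholder molecule
    ds.add_entry(
        name="h2",
        initial_molecule=h2,
        additional_keywords={},   # per-entry additions to the gridoptimization keywords
        comment="example entry",
    )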

assert_is_not_view()#
assert_online()#
property attachments: List[DatasetAttachment]#
background_submit(entry_names=None, specification_names=None, tag=None, priority=None, find_existing=True)#

Adds a dataset submission internal job to the server

This internal job is the one that actually performs the submission, which can take a while.

You can check the progress of the internal job using the returned object.

See submit() for info on the function parameters.

Returns:

An internal job object that can be watched or used to determine the progress of the job.

Parameters:
  • entry_names (str | Iterable[str] | None)

  • specification_names (str | Iterable[str] | None)

  • tag (str | None)

  • priority (PriorityEnum)

  • find_existing (bool)

Return type:

InternalJob

cancel_records(entry_names=None, specification_names=None, *, refetch_records=False)#
Parameters:
  • entry_names (str | Iterable[str] | None)

  • specification_names (str | Iterable[str] | None)

  • refetch_records (bool)

compile_values(value_call, value_names='value', entry_names=None, specification_names=None, unpack=False)#

Compile values from records into a pandas DataFrame.

Parameters:
  • value_call (Callable) – Function to call on each record to extract the desired value. Must return a scalar value or a sequence of values if ‘unpack’ is set to True.

  • value_names (Union[Sequence[str], str]) – Column name(s) for the extracted value(s). If a string is provided and multiple values are returned by ‘value_call’, columns are named by appending an index to this string. If a list of strings is provided, it must match the length of the sequence returned by ‘value_call’. Default is “value”.

  • entry_names (Optional[Union[str, Iterable[str]]]) – Entry names to filter records. If not provided, considers all entries.

  • specification_names (Optional[Union[str, Iterable[str]]]) – Specification names to filter records. If not provided, considers all specifications.

  • unpack (bool) – If True, unpack the sequence of values returned by ‘value_call’ into separate columns. Default is False.

Returns:

A multi-index DataFrame where each row corresponds to an entry. Each column has a top-level index naming the specification and a second-level index naming the corresponding value name. Values are extracted from records using ‘value_call’.

Return type:

pandas.DataFrame

Raises:

ValueError – If the length of ‘value_names’ does not match the number of values returned by ‘value_call’ when ‘unpack’ is set to True.

Notes

  1. The DataFrame is structured such that the rows are entries and columns are specifications.

  2. If ‘unpack’ is True, the function assumes ‘value_call’ returns a sequence of values that need to be distributed across columns in the resulting DataFrame. ‘value_call’ should always return the same number of values for each record if unpack is True.
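
A sketch of compile_values applied to gridoptimization records, pulling the lowest grid-point energy from each record, with ds being the dataset from the earlier sketch and the specification name a placeholder. This assumes the selected records are complete (final_energies is non-empty):

    df = ds.compile_values(
        value_call=lambda rec: min(rec.final_energies.values()),
        value_names="lowest_grid_energy",
        specification_names="b3lyp/def2-svp",   # placeholder
    )
    print(df)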

property computed_properties#
property contributed_values: Dict[str, ContributedValues]#
copy_entries_from(source_dataset_id, entry_names=None)#

Copies entries from another dataset into this one

If entries already exist with the same name, an exception is raised.

Parameters:
  • source_dataset_id (int) – The ID of the dataset to copy entries from

  • entry_names (str | Iterable[str] | None) – Names of the entries to copy. If not provided, all entries will be copied.

copy_records_from(source_dataset_id, entry_names=None, specification_names=None)#

Copies records from another dataset into this one

Entries and specifications will also be copied. If entries or specifications already exist with the same name, an exception is raised.

This does not actually fully copy records - the records will be linked to both datasets

Parameters:
  • source_dataset_id (int) – The ID of the dataset to copy entries from

  • entry_names (str | Iterable[str] | None) – Names of the entries to copy. If not provided, all entries will be copied.

  • specification_names (str | Iterable[str] | None) – Names of the specifications to copy. If not provided, all specifications will be copied.

copy_specifications_from(source_dataset_id, specification_names=None)#

Copies specifications from another dataset into this one

If specifications already exist with the same name, an exception is raised.

Parameters:
  • source_dataset_id (int) – The ID of the dataset to copy entries from

  • specification_names (str | Iterable[str] | None) – Names of the specifications to copy. If not provided, all specifications will be copied.

create_view(description, provenance, status=None, include=None, exclude=None, *, include_children=True)#

Creates a view of this dataset on the server

This function will return an InternalJob which can be used to watch for completion if desired. The job will run server side without user interaction.

Note the ID field of the object if you wish to retrieve this internal job later (via get_internal_jobs() or PortalClient.get_internal_job)

Parameters:
  • description (str) – Optional string describing the view file

  • provenance (Dict[str, Any]) – Dictionary with any metadata or other information about the view. Information regarding the options used to create the view will be added.

  • status (Iterable[RecordStatusEnum] | None) – List of statuses to include. Default is to include records with any status

  • include (Iterable[str] | None) – List of specific record fields to include in the export. Default is to include most fields

  • exclude (Iterable[str] | None) – List of specific record fields to exclude from the export. Defaults to excluding none.

  • include_children (bool) – Specifies whether child records associated with the main records should also be included (recursively) in the view file.

Returns:

An InternalJob object which can be used to watch for completion.

Return type:

InternalJob

delete_attachment(file_id)#
Parameters:

file_id (int)

delete_entries(names, delete_records=False)#
Parameters:
  • names (str | Iterable[str])

  • delete_records (bool)

Return type:

DeleteMetadata

delete_specification(name, delete_records=False)#
Parameters:
  • name (str)

  • delete_records (bool)

Return type:

DeleteMetadata

detailed_status()#
Return type:

List[Tuple[str, str, RecordStatusEnum]]

download_attachment(attachment_id, destination_path=None, overwrite=True)#

Downloads an attachment

If destination path is not given, the file will be placed in the current directory, and the filename determined by what is stored on the server.

Parameters:
  • attachment_id (int) – ID of the attachment to download. See the attachments property

  • destination_path (str | None) – Full path to the destination file (including filename)

  • overwrite (bool) – If True, any existing file will be overwritten

download_view(view_file_id=None, destination_path=None, overwrite=True)#

Downloads a view for this dataset

If a view_file_id is not given, the most recent view will be downloaded.

If destination path is not given, the file will be placed in the current directory, and the filename determined by what is stored on the server.

Parameters:
  • view_file_id (int | None) – ID of the view to download. See list_views(). If None, will download the latest view

  • destination_path (str | None) – Full path to the destination file (including filename)

  • overwrite (bool) – If True, any existing file will be overwritten

property entry_names: List[str]#
fetch_attachments()#
fetch_contributed_values()#
fetch_entries(entry_names=None, force_refetch=False)#

Fetches entry information from the remote server, storing it internally

By default, already-fetched entries will not be fetched again, unless force_refetch is True.

Parameters:
  • entry_names (str | Iterable[str] | None) – Names of entries to fetch. If None, fetch all entries

  • force_refetch (bool) – If true, fetch data from the server even if it already exists locally

Return type:

None

fetch_entry_names()#

Fetch all entry names from the remote server

These are fetched and then stored internally, and not returned.

Return type:

None

fetch_records(entry_names=None, specification_names=None, status=None, include=None, fetch_updated=True, force_refetch=False)#

Fetches record information from the remote server, storing it internally

By default, this function will only fetch records that have not been fetched previously. If force_refetch is True, then this will always fetch the records.

Parameters:
  • entry_names (str | Iterable[str] | None) – Names of the entries whose records to fetch. If None, fetch all entries

  • specification_names (str | Iterable[str] | None) – Names of the specifications whose records to fetch. If None, fetch all specifications

  • status (RecordStatusEnum | Iterable[RecordStatusEnum] | None) – Fetch only records with these statuses

  • include (Iterable[str] | None) – Additional fields to include in the returned record

  • fetch_updated (bool) – Fetch any records that exist locally but have been updated on the server

  • force_refetch (bool) – If true, fetch data from the server even if it already exists locally

fetch_specification_names()#

Fetch all specification names from the remote server

These are fetched and then stored internally, and not returned.

Return type:

None

fetch_specifications(specification_names=None, force_refetch=False)#

Fetch specifications from the remote server, storing them internally

Parameters:
  • specification_names (str | Iterable[str] | None) – Names of specifications to fetch. If None, fetch all specifications

  • force_refetch (bool) – If true, fetch data from the server even if it already exists locally

Return type:

None

get_entry(entry_name, force_refetch=False)#

Obtain entry information

The entry will be automatically fetched from the remote server if needed.

Parameters:
  • entry_name (str)

  • force_refetch (bool)

Return type:

Any | None

get_internal_job(job_id)#
Parameters:

job_id (int)

Return type:

InternalJob

get_properties_df(properties_list)#

Retrieve a DataFrame populated with the specified properties from dataset records.

This function uses the provided list of property names to extract corresponding values from each record’s properties. It returns a DataFrame where each row represents a record. Each column has a top-level index naming the specification and a second-level index naming the property. Columns with all NaN values are dropped.

Parameters:

properties_list (Sequence[str]) – List of property names to retrieve from the records.

Returns:

A DataFrame populated with the specified properties for each record.

Return type:

pandas.DataFrame
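
A sketch of get_properties_df, with ds being the dataset from the earlier sketch. The property names below are placeholders and should be keys that actually appear in the properties dictionaries of your records:

    df = ds.get_properties_df(["scf_total_energy", "nuclear_repulsion_energy"])   # placeholder names
    print(df.head())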

get_record(entry_name, specification_name, include=None, fetch_updated=True, force_refetch=False)#

Obtain a calculation record related to this dataset

The record will be automatically fetched from the remote server if needed. If a record does not exist for this entry and specification, None is returned

Parameters:
  • entry_name (str)

  • specification_name (str)

  • include (Iterable[str] | None)

  • fetch_updated (bool)

  • force_refetch (bool)

Return type:

BaseRecord | None

classmethod get_subclass(dataset_type)#
Parameters:

dataset_type (str)

invalidate_records(entry_names=None, specification_names=None, *, refetch_records=False)#
Parameters:
  • entry_names (str | Iterable[str] | None)

  • specification_names (str | Iterable[str] | None)

  • refetch_records (bool)

property is_view: bool#
iterate_entries(entry_names=None, force_refetch=False)#

Iterate over all entries

This is used as a generator, and automatically fetches entries as needed

Parameters:
  • entry_names (str | Iterable[str] | None) – Names of entries to iterate over. If None, iterate over all entries

  • force_refetch (bool) – If true, fetch data from the server even if it already exists locally

iterate_records(entry_names=None, specification_names=None, status=None, include=None, fetch_updated=True, force_refetch=False)#
Parameters:
  • entry_names (str | Iterable[str] | None)

  • specification_names (str | Iterable[str] | None)

  • status (RecordStatusEnum | Iterable[RecordStatusEnum] | None)

  • include (Iterable[str] | None)

  • fetch_updated (bool)

  • force_refetch (bool)
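
A sketch of iterating over this dataset's records, filtering by status, with ds being the dataset from the earlier sketch. The (entry_name, specification_name, record) yield, the specification name, and the import path are assumptions:

    from qcportal.record_models import RecordStatusEnum   # import path assumed

    for entry_name, spec_name, rec in ds.iterate_records(
        specification_names="b3lyp/def2-svp",   # placeholder
        status=RecordStatusEnum.complete,
    ):
        print(entry_name, spec_name, rec.status)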

list_internal_jobs(status=None)#
Parameters:

status (InternalJobStatusEnum | Iterable[InternalJobStatusEnum] | None)

Return type:

List[InternalJob]

list_views()#
modify_entries(attribute_map=None, comment_map=None, overwrite_attributes=False)#
Parameters:
  • attribute_map (Dict[str, Dict[str, Any]] | None)

  • comment_map (Dict[str, str] | None)

  • overwrite_attributes (bool)

modify_records(entry_names=None, specification_names=None, new_tag=None, new_priority=None, new_comment=None, *, refetch_records=False)#
Parameters:
  • entry_names (str | Iterable[str] | None)

  • specification_names (str | Iterable[str] | None)

  • new_tag (str | None)

  • new_priority (PriorityEnum | None)

  • new_comment (str | None)

  • refetch_records (bool)

property offline: bool#
preload_cache(view_file_id=None)#

Downloads a view file and uses it as the current cache

Parameters:

view_file_id (int | None) – ID of the view to download. See list_views(). If None, will download the latest view

print_status()#
Return type:

None

propagate_client(client)#

Propagates a client to any fields within this record that need it

This may also be called from derived class propagate_client functions.

property record_count: int#
refresh_cache(entry_names=None, specification_names=None)#

Refreshes some information in the cache with information on the server

This can be used to fix some inconsistencies in the cache without deleting and starting over. For example, this can fix instances where the record attached to a given entry & specification has changed (new record id) due to renaming specifications and entries, or via remove_records followed by a submit without duplicate checking.

This will also fetch any updated records

Parameters:
  • entry_names (str | Iterable[str] | None) – Names of the entries whose records to fetch. If None, fetch all entries

  • specification_names (str | Iterable[str] | None) – Names of the specifications whose records to fetch. If None, fetch all specifications

remove_records(entry_names=None, specification_names=None, delete_records=False)#
Parameters:
  • entry_names (str | Iterable[str] | None)

  • specification_names (str | Iterable[str] | None)

  • delete_records (bool)

Return type:

DeleteMetadata

rename_entries(name_map)#
Parameters:

name_map (Dict[str, str])

rename_specification(old_name, new_name)#
Parameters:
  • old_name (str)

  • new_name (str)

reset_records(entry_names=None, specification_names=None, *, refetch_records=False)#
Parameters:
  • entry_names (str | Iterable[str] | None)

  • specification_names (str | Iterable[str] | None)

  • refetch_records (bool)

set_default_priority(new_default_priority)#
Parameters:

new_default_priority (PriorityEnum)

set_default_tag(new_default_tag)#
Parameters:

new_default_tag (str)

set_description(new_description)#
Parameters:

new_description (str)

set_group(new_group)#
Parameters:

new_group (str)

set_metadata(new_metadata)#
Parameters:

new_metadata (Dict[str, Any])

set_name(new_name)#
Parameters:

new_name (str)

set_provenance(new_provenance)#
Parameters:

new_provenance (Dict[str, Any])

set_tagline(new_tagline)#
Parameters:

new_tagline (str)

set_tags(new_tags)#
Parameters:

new_tags (List[str])

set_visibility(new_visibility)#
Parameters:

new_visibility (bool)

property specification_names: List[str]#
property specifications: Mapping[str, Any]#
status()#
Return type:

Dict[str, Any]

status_table()#

Returns the status of the dataset’s computations as a table (in a string)

Return type:

str

submit(entry_names=None, specification_names=None, tag=None, priority=None, find_existing=True)#

Create records for this dataset

This function actually populates the dataset’s records given the entry and specification information.

Parameters:
  • entry_names (str | Iterable[str] | None) – Submit only records for these entries

  • specification_names (str | Iterable[str] | None) – Submit only records for these specifications

  • tag (str | None) – Use this tag for submissions (overrides the dataset default tag)

  • priority (PriorityEnum) – Use this priority for submissions (overrides the dataset default priority)

  • find_existing (bool) – If True, the database will be searched for existing records that match the requested calculations, and new records created for those that don’t match. If False, new records will always be created.

Return type:

InsertCountsMetadata
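
A minimal sketch of submitting a subset of this dataset, with ds being the dataset from the earlier sketch; the entry/specification names, tag, and import path are placeholders or assumptions:

    from qcportal.record_models import PriorityEnum   # import path assumed

    meta = ds.submit(
        entry_names=["h2"],                       # placeholder entry name
        specification_names=["b3lyp/def2-svp"],   # placeholder specification name
        tag="example-tag",
        priority=PriorityEnum.normal,
    )
    print(meta)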

uncancel_records(entry_names=None, specification_names=None, *, refetch_records=False)#
Parameters:
  • entry_names (str | Iterable[str] | None)

  • specification_names (str | Iterable[str] | None)

  • refetch_records (bool)

uninvalidate_records(entry_names=None, specification_names=None, *, refetch_records=False)#
Parameters:
  • entry_names (str | Iterable[str] | None)

  • specification_names (str | Iterable[str] | None)

  • refetch_records (bool)

use_view_cache(view_file_path)#

Loads a view for this dataset as a cache file

Parameters:

view_file_path (str) – Full path to the view file

field id: int [Required]#
field name: str [Required]#
field description: str [Required]#
field tagline: str [Required]#
field tags: List[str] [Required]#
field group: str [Required]#
field visibility: bool [Required]#
field provenance: Dict[str, Any] [Required]#
field default_tag: str [Required]#
field default_priority: PriorityEnum [Required]#
field owner_user: str | None = None#
field owner_group: str | None = None#
field metadata: Dict[str, Any] [Required]#
field extras: Dict[str, Any] [Required]#
field contributed_values_: Dict[str, ContributedValues] | None = None (alias 'contributed_values')#
field attachments_: List[DatasetAttachment] | None = None (alias 'attachments')#
field auto_fetch_missing: bool = True#