NEB Records#
- pydantic model NEBKeywords[source]#
Bases:
BaseModel
NEBRecord options
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
- Config:
extra: Extra = Extra.forbid
- Fields:
- field images: int = 11#
Number of images that will be used to locate a rough transition state structure.
- Constraints:
exclusiveMinimum = 5
- Validated by:
- field spring_type: int = 0#
0: Nudged Elastic Band (parallel spring force + perpendicular gradients)
1: Hybrid Elastic Band (full spring force + perpendicular gradients)
2: Plain Elastic Band (full spring force + full gradients)
- Validated by:
- field maximum_force: float = 0.05#
Convergence criterion. The chain is considered converged when the maximum RMS-gradient (eV/Ang) of the chain falls below maximum_force.
- Validated by:
- field average_force: float = 0.025#
Convergence criterion. The chain is considered converged when the average RMS-gradient (eV/Ang) of the chain falls below average_force.
- Validated by:
- field maximum_cycle: int = 100#
Maximum number of iterations for the NEB calculation.
- Validated by:
- field optimize_ts: bool = False#
Setting this to True will perform a transition state optimization starting from the guessed transition state structure produced by the NEB calculation.
- Validated by:
- field optimize_endpoints: bool = False#
Setting this to True will optimize the two endpoints of the initial chain before starting the NEB calculation.
- Validated by:
- field align: bool = True#
Align the images before starting the NEB calculation.
- Validated by:
- field epsilon: float = 1e-05#
Small eigenvalue threshold for resetting the Hessian.
- Validated by:
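A minimal sketch of constructing these keywords (the values are illustrative, and the import path qcportal.neb is assumed)::

    from qcportal.neb import NEBKeywords

    # 11 images, hybrid spring force, looser convergence than the defaults
    kw = NEBKeywords(
        images=11,
        spring_type=1,
        maximum_force=0.1,      # eV/Ang
        average_force=0.05,     # eV/Ang
        maximum_cycle=150,
        optimize_ts=True,       # refine the guessed transition state afterwards
        optimize_endpoints=True,
    )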
- pydantic model NEBSpecification[source]#
Bases:
BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
- Config:
extra: Extra = Extra.forbid
- Fields:
- field program: ConstrainedStrValue = 'geometric'#
- field singlepoint_specification: QCSpecification [Required]#
- field optimization_specification: OptimizationSpecification | None = None#
- field keywords: NEBKeywords [Required]#
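A sketch of a complete specification, pairing the keywords model above with a gradient singlepoint specification (the qcportal.singlepoint import path and QCSpecification field names come from the broader QCPortal API; values are illustrative)::

    from qcportal.neb import NEBSpecification, NEBKeywords
    from qcportal.singlepoint import QCSpecification

    spec = NEBSpecification(
        program="geometric",                    # default NEB driver program
        singlepoint_specification=QCSpecification(
            program="psi4",
            driver="gradient",                  # NEB needs gradients along the chain
            method="b3lyp",
            basis="6-31g*",
        ),
        keywords=NEBKeywords(images=11, spring_type=0),
    )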
- pydantic model NEBOptimization[source]#
Bases:
BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
- field optimization_id: int [Required]#
- field position: int [Required]#
- field ts: bool [Required]#
- pydantic model NEBSinglepoint[source]#
Bases:
BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
- Config:
extra: Extra = Extra.forbid
- Fields:
- field singlepoint_id: int [Required]#
- field chain_iteration: int [Required]#
- field position: int [Required]#
- pydantic model NEBAddBody[source]#
Bases:
RecordAddBodyBase
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
- Config:
extra: Extra = Extra.forbid
validate_assignment: bool = True
- Fields:
- field specification: NEBSpecification [Required]#
- field initial_chains: List[List[int | Molecule]] [Required]#
- field tag: constr(to_lower=True) [Required]#
- field priority: PriorityEnum [Required]#
- field owner_group: str | None = None#
- field find_existing: bool = True#
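For illustration, a hedged sketch of submitting such a request through a client; the add_nebs helper and its exact signature are assumptions here and may differ in the installed client::

    from qcportal import PortalClient
    from qcportal.neb import NEBKeywords
    from qcportal.singlepoint import QCSpecification

    client = PortalClient("https://example.qcarchive.server")   # hypothetical address

    # One initial chain, given as molecule IDs already stored on the server (IDs are illustrative)
    meta, record_ids = client.add_nebs(                         # assumed helper; signature may differ
        initial_chains=[[101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111]],
        program="geometric",
        singlepoint_specification=QCSpecification(
            program="psi4", driver="gradient", method="b3lyp", basis="6-31g*"
        ),
        optimization_specification=None,
        keywords=NEBKeywords(images=11),
    )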
- pydantic model NEBQueryFilters[source]#
Bases:
RecordQueryFilters
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
- Config:
extra: Extra = Extra.forbid
validate_assignment: bool = True
- Fields:
- Validators:
_convert_basis
» qc_basis
parse_dates
» created_after
parse_dates
» created_before
parse_dates
» modified_after
parse_dates
» modified_before
validate_lists
» cursor
validate_lists
» limit
- field program: List[str] | None = 'geometric'#
- field qc_program: List[constr(to_lower=True)] | None = None#
- field qc_method: List[constr(to_lower=True)] | None = None#
- field qc_basis: List[constr(to_lower=True) | None] | None = None#
- Validated by:
_convert_basis
- field molecule_id: List[int] | None = None#
- validator parse_dates » modified_after, created_before, created_after, modified_before#
- field record_id: List[int] | None = None#
- field record_type: List[str] | None = None#
- field manager_name: List[str] | None = None#
- field status: List[RecordStatusEnum] | None = None#
- field dataset_id: List[int] | None = None#
- field parent_id: List[int] | None = None#
- field child_id: List[int] | None = None#
- field created_before: datetime | None = None#
- Validated by:
parse_dates
- field created_after: datetime | None = None#
- Validated by:
parse_dates
- field modified_before: datetime | None = None#
- Validated by:
parse_dates
- field modified_after: datetime | None = None#
- Validated by:
parse_dates
- field owner_user: List[int | str] | None = None#
- field owner_group: List[int | str] | None = None#
- field limit: int | None = None#
- Validated by:
validate_lists
- field cursor: int | None = None#
- Validated by:
validate_lists
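A hedged sketch of querying NEB records with these filters; query_nebs is assumed to exist on PortalClient with keyword arguments mirroring the fields above::

    from qcportal import PortalClient

    client = PortalClient("https://example.qcarchive.server")   # hypothetical address

    # Iterate over completed NEB records computed with psi4/b3lyp
    query_it = client.query_nebs(                               # assumed query helper
        qc_program=["psi4"],
        qc_method=["b3lyp"],
        status=["complete"],
        limit=100,
    )
    for rec in query_it:
        print(rec.id, rec.status)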
- pydantic model NEBRecord[source]#
Bases:
BaseRecord
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
- Config:
allow_mutation: bool = True
extra: Extra = Extra.forbid
validate_assignment: bool = True
- Fields:
comments_ (List[qcportal.record_models.RecordComment] | None)
compute_history_ (List[qcportal.record_models.ComputeHistory] | None)
initial_chain_ (List[qcelemental.models.molecule.Molecule] | None)
native_files_ (Dict[str, qcportal.record_models.NativeFile] | None)
optimizations_ (Dict[str, qcportal.neb.record_models.NEBOptimization] | None)
singlepoints_ (List[qcportal.neb.record_models.NEBSinglepoint] | None)
- Validators:
_validate_extras
ยปextras
- field record_type: Literal['neb'] = 'neb'#
- field specification: NEBSpecification [Required]#
- field initial_chain_molecule_ids_: List[int] | None = None (alias 'initial_chain_molecule_ids')#
- field singlepoints_: List[NEBSinglepoint] | None = None (alias 'singlepoints')#
- field optimizations_: Dict[str, NEBOptimization] | None = None (alias 'optimizations')#
- field neb_result_: Molecule | None = None (alias 'neb_result')#
- field initial_chain_: List[Molecule] | None = None (alias 'initial_chain')#
- propagate_client(client)[source]#
Propagates a client and related information to any fields within this record that need it
This is expected to be called from derived class propagate_client functions as well
- property initial_chain: List[Molecule]#
- property final_chain: List[SinglepointRecord]#
- property singlepoints: Dict[int, List[SinglepointRecord]]#
- property result#
- property optimizations: Dict[str, OptimizationRecord] | None#
- property ts_optimization: OptimizationRecord | None#
- property ts_hessian: SinglepointRecord | None#
- property children_errors: List[BaseRecord]#
Returns errored child records
- property children_status: Dict[RecordStatusEnum, int]#
Returns a dictionary of the status of all children of this record
- property comments: List[RecordComment] | None#
- property compute_history: List[ComputeHistory]#
- property error: Dict[str, Any] | None#
- fetch_children(include=None, force_fetch=False)#
Fetches all children of this record recursively
- Parameters:
include (Iterable[str] | None)
force_fetch (bool)
- classmethod fetch_children_multi(records, include=None, force_fetch=False)#
Fetches all children of the given records
This tries to work efficiently, fetching larger batches of children that can span multiple records
- Parameters:
records (Iterable[BaseRecord | None])
include (Iterable[str] | None)
force_fetch (bool)
- classmethod get_subclass(record_type)#
Obtain a subclass of this class given its record_type
- Parameters:
record_type (str)
- Return type:
Type[BaseRecord]
- get_waiting_reason()#
- Return type:
Dict[str, Any]
- property native_files: Dict[str, NativeFile] | None#
- property offline: bool#
- property provenance: Provenance | None#
- property service: RecordService | None#
- property stderr: str | None#
- property stdout: str | None#
- sync_to_cache(detach=False)#
Syncs this record to the cache
If detach is True, then the record will be removed from the cache
- Parameters:
detach (bool)
- property task: RecordTask | None#
- field id: int [Required]#
- field is_service: bool [Required]#
- field properties: Dict[str, Any] | None = None#
- field extras: Dict[str, Any] = {}#
- Validated by:
_validate_extras
- field status: RecordStatusEnum [Required]#
- field manager_name: str | None = None#
- field created_on: datetime [Required]#
- field modified_on: datetime [Required]#
- field owner_user: str | None = None#
- field owner_group: str | None = None#
- field compute_history_: List[ComputeHistory] | None = None (alias 'compute_history')#
- field task_: RecordTask | None = None (alias 'task')#
- field service_: RecordService | None = None (alias 'service')#
- field comments_: List[RecordComment] | None = None (alias 'comments')#
- field native_files_: Dict[str, NativeFile] | None = None (alias 'native_files')#
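A short sketch of inspecting a finished record through the properties documented above (get_records is the generic record-retrieval method of PortalClient; the server address and record ID are hypothetical)::

    from qcportal import PortalClient

    client = PortalClient("https://example.qcarchive.server")   # hypothetical address
    rec = client.get_records(12345)                             # hypothetical NEB record ID

    # Guessed transition state structure from the converged chain
    ts_guess = rec.result

    # Singlepoints grouped by chain iteration
    for iteration, sps in rec.singlepoints.items():
        print(iteration, [sp.id for sp in sps])

    # Optional transition state refinement and its Hessian, if requested in the keywords
    if rec.ts_optimization is not None:
        print(rec.ts_optimization.status)
    if rec.ts_hessian is not None:
        print(rec.ts_hessian.status)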
- pydantic model NEBDatasetNewEntry[source]#
Bases:
BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
- Config:
extra: Extra = Extra.forbid
- Fields:
- field name: str [Required]#
- field initial_chain: List[int | Molecule] [Required]#
- field additional_keywords: Dict[str, Any] = {}#
- field additional_singlepoint_keywords: Dict[str, Any] = {}#
- field attributes: Dict[str, Any] = {}#
- field comment: str | None = None#
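A brief sketch of a new dataset entry (the import path for the entry model is assumed, and the molecule IDs are illustrative)::

    from qcportal.neb import NEBDatasetNewEntry

    entry = NEBDatasetNewEntry(
        name="HCN -> HNC",
        initial_chain=[201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211],
        additional_keywords={"images": 11},
        comment="isomerization test chain",
    )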
- pydantic model NEBDatasetEntry[source]#
Bases:
NEBDatasetNewEntry
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
- Config:
extra: Extra = Extra.forbid
- Fields:
- field initial_chain: List[Molecule] [Required]#
- field name: str [Required]#
- field additional_keywords: Dict[str, Any] = {}#
- field additional_singlepoint_keywords: Dict[str, Any] = {}#
- field attributes: Dict[str, Any] = {}#
- field comment: str | None = None#
- pydantic model NEBDatasetSpecification[source]#
Bases:
BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
- Config:
extra: Extra = Extra.forbid
- Fields:
- field name: str [Required]#
- field specification: NEBSpecification [Required]#
- field description: str | None = None#
- pydantic model NEBDatasetRecordItem[source]#
Bases:
BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
- Config:
extra: Extra = Extra.forbid
- Fields:
- field entry_name: str [Required]#
- field specification_name: str [Required]#
- field record_id: int [Required]#
- pydantic model NEBDataset[source]#
Bases:
BaseDataset
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
- Config:
allow_mutation: bool = True
extra: Extra = Extra.forbid
validate_assignment: bool = True
- Fields:
- field dataset_type: Literal['neb'] = 'neb'#
- add_specification(name, specification, description=None)[source]#
- Parameters:
name (str)
specification (NEBSpecification)
description (str | None)
- Return type:
- add_entries(entries)[source]#
- Parameters:
entries (NEBDatasetNewEntry | Iterable[NEBDatasetNewEntry])
- Return type:
- add_entry(name, initial_chain, additional_keywords=None, additional_singlepoint_keywords=None, attributes=None, comment=None)[source]#
- Parameters:
name (str)
initial_chain (List[int | Molecule])
additional_keywords (Dict[str, Any] | None)
additional_singlepoint_keywords (Dict[str, Any] | None)
attributes (Dict[str, Any] | None)
comment (str | None)
- assert_is_not_view()#
- assert_online()#
- property attachments: List[DatasetAttachment]#
- background_submit(entry_names=None, specification_names=None, tag=None, priority=None, find_existing=True)#
Adds a dataset submission internal job to the server
This internal job is the one to actually do the submission, which can take a while.
You can check the progress of the internal job using the return object.
See submit() for info on the function parameters.
- Returns:
An internal job object that can be watched or used to determine the progress of the job.
- Parameters:
entry_names (str | Iterable[str] | None)
specification_names (str | Iterable[str] | None)
tag (str | None)
priority (PriorityEnum)
find_existing (bool)
- Return type:
- cancel_records(entry_names=None, specification_names=None, *, refetch_records=False)#
- Parameters:
entry_names (str | Iterable[str] | None)
specification_names (str | Iterable[str] | None)
refetch_records (bool)
- compile_values(value_call, value_names='value', entry_names=None, specification_names=None, unpack=False)#
Compile values from records into a pandas DataFrame.
- Parameters:
value_call (Callable) – Function to call on each record to extract the desired value. Must return a scalar value or a sequence of values if 'unpack' is set to True.
value_names (Union[Sequence[str], str]) – Column name(s) for the extracted value(s). If a string is provided and multiple values are returned by 'value_call', columns are named by appending an index to this string. If a list of strings is provided, it must match the length of the sequence returned by 'value_call'. Default is 'value'.
entry_names (Optional[Union[str, Iterable[str]]]) – Entry names to filter records. If not provided, considers all entries.
specification_names (Optional[Union[str, Iterable[str]]]) – Specification names to filter records. If not provided, considers all specifications.
unpack (bool) – If True, unpack the sequence of values returned by 'value_call' into separate columns. Default is False.
- Returns:
A multi-index DataFrame where each row corresponds to an entry. Each column has a top-level index naming a specification and a second-level index with the appropriate value name. Values are extracted from records using 'value_call'.
- Return type:
pandas.DataFrame
- Raises:
ValueError – If the length of 'value_names' does not match the number of values returned by 'value_call' when 'unpack' is set to True.
Notes
1. The DataFrame is structured such that the rows are entries and columns are specifications.
2. If 'unpack' is True, the function assumes 'value_call' returns a sequence of values that need to be distributed across columns in the resulting DataFrame. 'value_call' should always return the same number of values for each record if unpack is True.
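For instance, a hedged sketch of compiling a per-record value into a DataFrame; the dataset name and the property key used in the lambda are hypothetical and should be replaced with whatever your records actually expose::

    from qcportal import PortalClient

    client = PortalClient("https://example.qcarchive.server")   # hypothetical address
    ds = client.get_dataset("neb", "NEB demo dataset")          # hypothetical dataset name

    df = ds.compile_values(
        value_call=lambda rec: (rec.properties or {}).get("current energy"),  # hypothetical key
        value_names="energy",
    )
    print(df.head())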
- property computed_properties#
- property contributed_values: Dict[str, ContributedValues]#
- copy_entries_from(source_dataset_id, entry_names=None)#
Copies entries from another dataset into this one
If entries already exist with the same name, an exception is raised.
- Parameters:
source_dataset_id (int) – The ID of the dataset to copy entries from
entry_names (str | Iterable[str] | None) – Names of the entries to copy. If not provided, all entries will be copied.
- copy_records_from(source_dataset_id, entry_names=None, specification_names=None)#
Copies records from another dataset into this one
Entries and specifications will also be copied. If entries or specifications already exist with the same name, an exception is raised.
This does not actually fully copy records - the records will be linked to both datasets
- Parameters:
source_dataset_id (int) – The ID of the dataset to copy entries from
entry_names (str | Iterable[str] | None) – Names of the entries to copy. If not provided, all entries will be copied.
specification_names (str | Iterable[str] | None) – Names of the specifications to copy. If not provided, all specifications will be copied.
- copy_specifications_from(source_dataset_id, specification_names=None)#
Copies specifications from another dataset into this one
If specifications already exist with the same name, an exception is raised.
- Parameters:
source_dataset_id (int) – The ID of the dataset to copy specifications from
specification_names (str | Iterable[str] | None) – Names of the specifications to copy. If not provided, all specifications will be copied.
- create_view(description, provenance, status=None, include=None, exclude=None, *, include_children=True)#
Creates a view of this dataset on the server
This function will return an InternalJob which can be used to watch for completion if desired. The job will run server side without user interaction.
Note the ID field of the object if you wish to retrieve this internal job later (via get_internal_jobs() or PortalClient.get_internal_job).
- Parameters:
description (str) – Optional string describing the view file
provenance (Dict[str, Any]) – Dictionary with any metadata or other information about the view. Information regarding the options used to create the view will be added.
status (Iterable[RecordStatusEnum] | None) – List of statuses to include. Default is to include records with any status
include (Iterable[str] | None) – List of specific record fields to include in the export. Default is to include most fields
exclude (Iterable[str] | None) – List of specific record fields to exclude from the export. Defaults to excluding none.
include_children (bool) – Specifies whether child records associated with the main records should also be included (recursively) in the view file.
- Returns:
An InternalJob object which can be used to watch for completion.
- Return type:
- delete_attachment(file_id)#
- Parameters:
file_id (int)
- delete_entries(names, delete_records=False)#
- Parameters:
names (str | Iterable[str])
delete_records (bool)
- Return type:
- delete_specification(name, delete_records=False)#
- Parameters:
name (str)
delete_records (bool)
- Return type:
- detailed_status()#
- Return type:
List[Tuple[str, str, RecordStatusEnum]]
- download_attachment(attachment_id, destination_path=None, overwrite=True)#
Downloads an attachment
If destination path is not given, the file will be placed in the current directory, and the filename determined by what is stored on the server.
- Parameters:
attachment_id (int) – ID of the attachment to download. See the attachments property
destination_path (str | None) – Full path to the destination file (including filename)
overwrite (bool) – If True, any existing file will be overwritten
- download_view(view_file_id=None, destination_path=None, overwrite=True)#
Downloads a view for this dataset
If a view_file_id is not given, the most recent view will be downloaded.
If destination path is not given, the file will be placed in the current directory, and the filename determined by what is stored on the server.
- Parameters:
view_file_id (int | None) – ID of the view to download. See list_views(). If None, will download the latest view
destination_path (str | None) – Full path to the destination file (including filename)
overwrite (bool) – If True, any existing file will be overwritten
- property entry_names: List[str]#
- fetch_attachments()#
- fetch_contributed_values()#
- fetch_entries(entry_names=None, force_refetch=False)#
Fetches entry information from the remote server, storing it internally
By default, already-fetched entries will not be fetched again, unless force_refetch is True.
- Parameters:
entry_names (str | Iterable[str] | None) – Names of entries to fetch. If None, fetch all entries
force_refetch (bool) – If true, fetch data from the server even if it already exists locally
- Return type:
None
- fetch_entry_names()#
Fetch all entry names from the remote server
These are fetched and then stored internally, and not returned.
- Return type:
None
- fetch_records(entry_names=None, specification_names=None, status=None, include=None, fetch_updated=True, force_refetch=False)#
Fetches record information from the remote server, storing it internally
By default, this function will only fetch records that have not been fetched previously. If force_refetch is True, then this will always fetch the records.
- Parameters:
entry_names (str | Iterable[str] | None) – Names of the entries whose records to fetch. If None, fetch all entries
specification_names (str | Iterable[str] | None) – Names of the specifications whose records to fetch. If None, fetch all specifications
status (RecordStatusEnum | Iterable[RecordStatusEnum] | None) – Fetch only records with these statuses
include (Iterable[str] | None) – Additional fields to include in the returned record
fetch_updated (bool) – Fetch any records that exist locally but have been updated on the server
force_refetch (bool) – If true, fetch data from the server even if it already exists locally
- fetch_specification_names()#
Fetch all specification names from the remote server
These are fetched and then stored internally, and not returned.
- Return type:
None
- fetch_specifications(specification_names=None, force_refetch=False)#
Fetch specifications from the remote server, storing them internally
- Parameters:
specification_names (str | Iterable[str] | None) – Names of specifications to fetch. If None, fetch all specifications
force_refetch (bool) – If true, fetch data from the server even if it already exists locally
- Return type:
None
- get_entry(entry_name, force_refetch=False)#
Obtain entry information
The entry will be automatically fetched from the remote server if needed.
- Parameters:
entry_name (str)
force_refetch (bool)
- Return type:
Any | None
- get_internal_job(job_id)#
- Parameters:
job_id (int)
- Return type:
- get_properties_df(properties_list)#
Retrieve a DataFrame populated with the specified properties from dataset records.
This function uses the provided list of property names to extract corresponding values from each record's properties. It returns a DataFrame where rows represent each record. Each column has a top-level index naming a specification and a second-level index with the appropriate value name. Columns with all NaN values are dropped.
Parameters:#
- properties_list
List of property names to retrieve from the records.
Returns:#
- pandas.DataFrame
A DataFrame populated with the specified properties for each record.
- Parameters:
properties_list (Sequence[str])
- Return type:
DataFrame
- get_record(entry_name, specification_name, include=None, fetch_updated=True, force_refetch=False)#
Obtain a calculation record related to this dataset
The record will be automatically fetched from the remote server if needed. If a record does not exist for this entry and specification, None is returned
- Parameters:
entry_name (str)
specification_name (str)
include (Iterable[str] | None)
fetch_updated (bool)
force_refetch (bool)
- Return type:
BaseRecord | None
- classmethod get_subclass(dataset_type)#
- Parameters:
dataset_type (str)
- invalidate_records(entry_names=None, specification_names=None, *, refetch_records=False)#
- Parameters:
entry_names (str | Iterable[str] | None)
specification_names (str | Iterable[str] | None)
refetch_records (bool)
- property is_view: bool#
- iterate_entries(entry_names=None, force_refetch=False)#
Iterate over all entries
This is used as a generator, and automatically fetches entries as needed
- Parameters:
entry_names (str | Iterable[str] | None) – Names of entries to iterate over. If None, iterate over all entries
force_refetch (bool) – If true, fetch data from the server even if it already exists locally
- iterate_records(entry_names=None, specification_names=None, status=None, include=None, fetch_updated=True, force_refetch=False)#
- Parameters:
entry_names (str | Iterable[str] | None)
specification_names (str | Iterable[str] | None)
status (RecordStatusEnum | Iterable[RecordStatusEnum] | None)
include (Iterable[str] | None)
fetch_updated (bool)
force_refetch (bool)
- list_internal_jobs(status=None)#
- Parameters:
status (InternalJobStatusEnum | Iterable[InternalJobStatusEnum] | None)
- Return type:
List[InternalJob]
- list_views()#
- modify_entries(attribute_map=None, comment_map=None, overwrite_attributes=False)#
- Parameters:
attribute_map (Dict[str, Dict[str, Any]] | None)
comment_map (Dict[str, str] | None)
overwrite_attributes (bool)
- modify_records(entry_names=None, specification_names=None, new_tag=None, new_priority=None, new_comment=None, *, refetch_records=False)#
- Parameters:
entry_names (str | Iterable[str] | None)
specification_names (str | Iterable[str] | None)
new_tag (str | None)
new_priority (PriorityEnum | None)
new_comment (str | None)
refetch_records (bool)
- property offline: bool#
- preload_cache(view_file_id=None)#
Downloads a view file and uses it as the current cache
- Parameters:
view_file_id (int | None) – ID of the view to download. See list_views(). If None, will download the latest view
- print_status()#
- Return type:
None
- propagate_client(client)#
Propagates a client to any fields within this dataset that need it
This may also be called from derived class propagate_client functions as well
- property record_count: int#
- refresh_cache(entry_names=None, specification_names=None)#
Refreshes some information in the cache with information on the server
This can be used to fix some inconsistencies in the cache without deleting and starting over. For example, this can fix instances where the record attached to a given entry & specification has changed (new record id) due to renaming specifications and entries, or via remove_records followed by a submit without duplicate checking.
This will also fetch any updated records
- Parameters:
entry_names (str | Iterable[str] | None) – Names of the entries whose records to fetch. If None, fetch all entries
specification_names (str | Iterable[str] | None) – Names of the specifications whose records to fetch. If None, fetch all specifications
- remove_records(entry_names=None, specification_names=None, delete_records=False)#
- Parameters:
entry_names (str | Iterable[str] | None)
specification_names (str | Iterable[str] | None)
delete_records (bool)
- Return type:
- rename_entries(name_map)#
- Parameters:
name_map (Dict[str, str])
- rename_specification(old_name, new_name)#
- Parameters:
old_name (str)
new_name (str)
- reset_records(entry_names=None, specification_names=None, *, refetch_records=False)#
- Parameters:
entry_names (str | Iterable[str] | None)
specification_names (str | Iterable[str] | None)
refetch_records (bool)
- set_default_priority(new_default_priority)#
- Parameters:
new_default_priority (PriorityEnum)
- set_default_tag(new_default_tag)#
- Parameters:
new_default_tag (str)
- set_description(new_description)#
- Parameters:
new_description (str)
- set_group(new_group)#
- Parameters:
new_group (str)
- set_metadata(new_metadata)#
- Parameters:
new_metadata (Dict[str, Any])
- set_name(new_name)#
- Parameters:
new_name (str)
- set_provenance(new_provenance)#
- Parameters:
new_provenance (Dict[str, Any])
- set_tagline(new_tagline)#
- Parameters:
new_tagline (str)
- set_tags(new_tags)#
- Parameters:
new_tags (List[str])
- set_visibility(new_visibility)#
- Parameters:
new_visibility (bool)
- property specification_names: List[str]#
- property specifications: Mapping[str, Any]#
- status()#
- Return type:
Dict[str, Any]
- status_table()#
Returns the status of the dataset's computations as a table (in a string)
- Return type:
str
- submit(entry_names=None, specification_names=None, tag=None, priority=None, find_existing=True)#
Create records for this dataset
This function actually populates the dataset's records given the entry and specification information.
- Parameters:
entry_names (str | Iterable[str] | None) – Submit only records for these entries
specification_names (str | Iterable[str] | None) – Submit only records for these specifications
tag (str | None) – Use this tag for submissions (overrides the dataset default tag)
priority (PriorityEnum) – Use this priority for submissions (overrides the dataset default priority)
find_existing (bool) – If True, the database will be searched for existing records that match the requested calculations, and new records created for those that don't match. If False, new records will always be created.
- Return type:
- uncancel_records(entry_names=None, specification_names=None, *, refetch_records=False)#
- Parameters:
entry_names (str | Iterable[str] | None)
specification_names (str | Iterable[str] | None)
refetch_records (bool)
- uninvalidate_records(entry_names=None, specification_names=None, *, refetch_records=False)#
- Parameters:
entry_names (str | Iterable[str] | None)
specification_names (str | Iterable[str] | None)
refetch_records (bool)
- use_view_cache(view_file_path)#
Loads a view for this dataset as a cache file
- Parameters:
view_file_path (str) – Full path to the view file
- field id: int [Required]#
- field name: str [Required]#
- field description: str [Required]#
- field tagline: str [Required]#
- field tags: List[str] [Required]#
- field group: str [Required]#
- field visibility: bool [Required]#
- field provenance: Dict[str, Any] [Required]#
- field default_tag: str [Required]#
- field default_priority: PriorityEnum [Required]#
- field owner_user: str | None = None#
- field owner_group: str | None = None#
- field metadata: Dict[str, Any] [Required]#
- field extras: Dict[str, Any] [Required]#
- field contributed_values_: Dict[str, ContributedValues] | None = None (alias 'contributed_values')#
- field attachments_: List[DatasetAttachment] | None = None (alias 'attachments')#
- field auto_fetch_missing: bool = True#
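Putting the dataset pieces together, a hedged end-to-end sketch using the methods documented above; the server address, dataset name, and molecule IDs are hypothetical::

    from qcportal import PortalClient
    from qcportal.neb import NEBSpecification, NEBKeywords
    from qcportal.singlepoint import QCSpecification

    client = PortalClient("https://example.qcarchive.server")   # hypothetical address
    ds = client.add_dataset("neb", "NEB demo dataset")          # hypothetical dataset name

    ds.add_specification(
        name="b3lyp/6-31g*",
        specification=NEBSpecification(
            program="geometric",
            singlepoint_specification=QCSpecification(
                program="psi4", driver="gradient", method="b3lyp", basis="6-31g*"
            ),
            keywords=NEBKeywords(images=11),
        ),
    )

    ds.add_entry(
        name="HCN -> HNC",
        initial_chain=[201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211],  # molecule IDs
    )

    # Create records for every entry x specification pair on the server
    ds.submit()
    print(ds.status_table())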